CN116941234A - Reference frame for motion capture - Google Patents

Reference frame for motion capture

Info

Publication number
CN116941234A
Authority
CN
China
Prior art keywords
actor
location
mocap
hmd
display
Legal status
Pending
Application number
CN202180079700.2A
Other languages
Chinese (zh)
Inventor
D·克罗斯比
C·吉斯兰迪
G·韦迪希
Current Assignee
Sony Interactive Entertainment LLC
Original Assignee
Sony Interactive Entertainment LLC
Priority date
Filing date
Publication date
Priority claimed from US17/535,623 external-priority patent/US20220180664A1/en
Application filed by Sony Interactive Entertainment LLC filed Critical Sony Interactive Entertainment LLC
Publication of CN116941234A publication Critical patent/CN116941234A/en


Abstract

Techniques are described for coordinating Audio Video (AV) production using multiple actors (404) at respective locations (402) remote from each other, such that an integrated AV product may be generated by coordinating the activities of the remote actors (404) in concert with one another. In particular, the techniques facilitate motion capture (mocap) of a plurality of actors who are geographically distant from each other (200).

Description

Reference frame for motion capture
Technical Field
The present application relates generally to technically innovative, unconventional solutions that necessarily arise from computer technology and produce specific technical improvements. In particular, the present application relates to techniques for enabling collaborative remote performances at multiple locations.
Background
People are increasingly collaborating from remote locations for health and cost reasons. As understood herein, collaborative production of movies and computer simulations (e.g., computer games) using remote actors can present unique coordination problems, because the director must guide multiple actors, each of whom may perform movie- and simulation-related activities, such as motion capture (mocap), in his or her own studio or on his or her own set. For example, challenges exist in providing physical references to remote actors on their respective stages in a coordinated manner. The present principles provide techniques for addressing some of these coordination challenges.
Disclosure of Invention
Accordingly, the present principles provide a method comprising providing a frame of reference for at least a first actor at a first location when shooting the first actor for motion capture (mocap), at least in part by presenting at least one reference image on a head-mounted display (HMD) worn by the first actor. Additionally or alternatively, the method may include providing a retro-reflector on a wall of the first location for reflecting light toward the first actor; a light emitter may be coupled to the HMD such that the light reflected from the retro-reflector comes from the light emitter. Additionally or alternatively, the method may comprise providing a visible marking on the floor of the first location.
In some examples, the method may include providing a frame of reference for at least a second actor at a second location when shooting the second actor for mocap. The first location may be geographically remote from the second location, and mocap from the first actor and the second actor may be presented in a web conference (WebEx) on at least one director display in communication with the first location and the second location.
The audio played at the first location may be used, at least in part, to provide a frame of reference to the first actor. A plurality of light emitters may be provided on the HMD. The mocap videos of the first actor and the second actor may be synchronized in time.
In another aspect, an apparatus includes at least one computer storage device that is not a transient signal and that in turn includes instructions executable by at least one processor to receive motion capture (mocap) video of a first actor from a first camera at a first location. The instructions are executable to receive mocap video of a second actor from a second camera at a second location, synchronize the mocap videos with each other, and merge the mocap videos into a single scene on at least one display at a third location geographically remote from the first location and the second location.
In another aspect, an apparatus includes at least one Head Mounted Display (HMD) component that in turn includes at least one processor configured with instructions and at least one display controlled by the processor. The HMD may also include speakers. At least one projector is configured to project motion capture (mocap) reference light onto at least one surface visible to a wearer of the HMD assembly to provide a spatial reference for the wearer during mocap.
The details of the present application, both as to its structure and operation, can best be understood with reference to the accompanying drawings, in which like reference numerals refer to like parts, and in which:
drawings
FIG. 1 is a block diagram of an example system consistent with the present principles;
FIG. 2 illustrates example logic in an example flow diagram format consistent with the present principles;
FIG. 3 illustrates a screen shot of an example stage display;
FIG. 4 shows an example distributed performance environment showing, for illustration, two remote studios or movie sets and one remote director computer presenting video from each of the sets or studios;
FIG. 5 shows a projector on a boom of a Head Mounted Display (HMD) for illuminating retro-reflectors on a wall of a motion picture set;
FIG. 6 illustrates additional features of a retro-reflector;
FIG. 7 illustrates additional example logic in an example flow chart format consistent with the present principles;
FIG. 8 shows markers on the floor of a motion picture set for aiding an actor in navigating the set;
FIG. 9 illustrates additional example logic in an example flow chart format consistent with the present principles; and
Fig. 10 shows a screenshot of an example HMD.
Detailed Description
Referring now to FIG. 1, the present disclosure relates generally to computer ecosystems, including aspects of computer networks that may include Consumer Electronic (CE) devices. The systems herein may include server components and client components connected by a network such that data may be exchanged between the client and server components. The client components may include one or more computing devices including portable televisions (e.g., smart TVs, Internet-enabled TVs), portable computers such as laptop and tablet computers, and other mobile devices including smart phones and the additional examples discussed below. These client devices may operate in a variety of operating environments. For example, some of the client computers may employ an operating system from Microsoft, a Unix operating system, or an operating system produced by Apple Computer or Google. These operating environments may be used to execute one or more browsing programs, such as a browser made by Microsoft, Google, or Mozilla, or other browser programs that can access websites hosted by the Internet servers discussed below.
The server and/or gateway may include one or more processors executing instructions that configure the server to receive and transmit data over a network, such as the Internet. Alternatively, the client and server may be connected via a local intranet or a virtual private network. The server or controller may be instantiated by a game console (such as one from Sony), a personal computer, etc.
Information may be exchanged between the client and the server over a network. To this end and for security purposes, the server and/or client may include firewalls, load balancers, temporary storage devices and proxies, as well as other network infrastructure for reliability and security.
As used herein, instructions refer to computer-implemented steps for processing information in a system. The instructions may be implemented in software, firmware, or hardware and include any type of programming step that is executed by a component of the system.
The processor may be a general purpose single or multi-chip processor that may execute logic by means of various lines, such as address lines, data lines, and control lines, as well as registers and shift registers.
The software modules described by the flowcharts and user interfaces herein may include various subroutines, programs, and the like. Without limiting the disclosure, logic stated as being executed by a particular module may be reassigned to other software modules and/or combined together in a single module and/or made available in a shareable library. Although a flowchart format may be used, it should be understood that software may be implemented as a state machine or other logic method.
The present principles described herein may be implemented as hardware, software, firmware, or a combination thereof; thus, the illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality.
In addition to what has been mentioned above, the logic blocks, modules, and circuits described below may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or other programmable logic device such as an Application Specific Integrated Circuit (ASIC), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be implemented as a combination of controllers or state machines or computing devices.
When implemented in software, the functions and methods described below may be written in an appropriate language such as, but not limited to, C# or C++, and may be stored on or transmitted through a computer-readable storage medium such as Random Access Memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), or other optical disc storage devices such as Digital Versatile Discs (DVDs), magnetic disk storage devices, or other magnetic storage devices that include removable thumb drives, and the like. The connection may establish a computer readable medium. Such connections may include, for example, hardwired cables, including fiber optic and coaxial cables, and Digital Subscriber Lines (DSL) and twisted pairs.
The components included in one embodiment may be used in other embodiments in any suitable combination. For example, any of the various components described herein and/or depicted in the figures may be combined, interchanged, or excluded from other implementations.
"a system having at least one of A, B and C" (likewise, "a system having at least one of A, B or C" and "a system having at least one of A, B, C") includes the following systems: having only a; having only B; having only C; having both A and B; having both A and C; having both B and C; and/or both A, B and C, etc.
Referring now in particular to FIG. 1, an example system 10 is shown that may include one or more of the example apparatus mentioned above and described further below in accordance with the present principles. It is noted that the computerized devices depicted in the figures herein may include some or all of the components set forth for the various devices in fig. 1.
The first of the example devices included in the system 10 is a Consumer Electronics (CE) device configured as an example primary display device, in the illustrated embodiment an Audio Video Display Device (AVDD) 12, such as, but not limited to, an Internet-enabled TV with a TV tuner (equivalently, a set-top box that controls a TV). The AVDD 12 may be based on a suitable operating system. Alternatively, the AVDD 12 may also be a computerized Internet-enabled ("smart") telephone, a tablet computer, a notebook computer, a wearable computerized device (such as a computerized Internet-enabled watch or a computerized Internet-enabled bracelet), another computerized Internet-enabled device, a computerized Internet-enabled music player, computerized Internet-enabled headphones, a computerized Internet-enabled implantable device (such as an implantable skin device), or the like. Regardless, it should be appreciated that the AVDD 12 and/or other computers described herein are configured to employ the present principles (e.g., to communicate with other CE devices to employ the present principles, to perform the logic described herein, and to perform any other functions and/or operations described herein).
Thus, to employ such principles, the AVDD 12 may be established by some or all of the components shown in fig. 1. For example, the AVDD 12 may include one or more displays 14 that may be implemented by a high definition or ultra-high definition "4K" or higher flat screen, and may or may not support touches to receive user input signals via touches on the display. The AVDD 12 may further include: one or more speakers 16 for outputting audio in accordance with the present principles; and at least one additional input device 18, such as an audio receiver/microphone, for inputting audible commands to the AVDD 12, for example, to control the AVDD 12. The example AVDD 12 may also include one or more network interfaces 20 for communicating over at least one network 22, such as the internet, other Wide Area Networks (WANs), local Area Networks (LANs), personal Area Networks (PANs), etc., under the control of one or more processors 24. Thus, the interface 20 may be, but is not limited to, a Wi-Fi transceiver, which is an example of a wireless computer network interface, such as, but not limited to, a mesh network transceiver. The interface 20 may be, but is not limited to, a bluetooth transceiver, a Zigbee transceiver, an IrDA transceiver, a wireless USB transceiver, a wired USB, a wired LAN, powerline, or MoCA. It should be appreciated that the processor 24 controls the AVDD 12 to employ the present principles, including other elements of the AVDD 12 described herein, such as controlling the display 14 to present images on the display and to receive inputs from the display. Further, it should be noted that the network interface 20 may be, for example, a wired or wireless modem or router or other suitable interface, such as a wireless telephone transceiver or Wi-Fi transceiver as mentioned above, or the like.
In addition to the foregoing, the AVDD 12 may also include one or more input ports 26, such as a High Definition Multimedia Interface (HDMI) port or a USB port for physically connecting (e.g., using a wired connection) to another CE device and/or a headset port for connecting headphones to the AVDD 12 for presenting audio from the AVDD 12 to a user through the headset. For example, the input port 26 may be connected to a wired or satellite source 26a of audiovisual content via a wire or in a wireless manner. Thus, the source 26a may be, for example, a separate or integrated set top box or satellite receiver. Alternatively, source 26a may be a game console or disk player.
The AVDD 12 may also include one or more computer memories 28 that are not transient signals, such as disk-based storage devices or solid state storage devices, which in some cases are embodied as stand-alone devices in the housing of the AVDD, or as a personal video recording device (PVR) or video disk player for playback of AV programs, either inside or outside the housing of the AVDD, or as a removable memory medium. Further, in some embodiments, the AVDD 12 may include a position or location receiver (such as, but not limited to, a cell phone receiver, a GPS receiver, and/or an altimeter 30) configured to receive geographic location information, for example, from at least one satellite or cell phone tower and provide that information to the processor 24 and/or in conjunction with the processor 24 to determine an altitude at which the AVDD 12 is set. However, it should be appreciated that another suitable position receiver other than a cell phone receiver, a GPS receiver, and/or an altimeter may be used, for example, to determine the position of the AVDD 12 in, for example, all three dimensions in accordance with the present principles.
Continuing with the description of the AVDD 12, in some embodiments, the AVDD 12 may include one or more cameras 32, which may be, for example, thermal imaging cameras, digital cameras (such as webcams), and/or cameras integrated into the AVDD 12 and controllable by the processor 24 to collect pictures/images and/or video, in accordance with the present principles. A bluetooth transceiver 34 and other Near Field Communication (NFC) element 36 may also be included on the AVDD 12 for communicating with other devices using bluetooth and/or NFC technology, respectively. An example NFC element may be a Radio Frequency Identification (RFID) element.
Further, the AVDD 12 may include one or more auxiliary sensors 38 (e.g., motion sensors such as accelerometers, gyroscopes, odometers or magnetic sensors, infrared (IR) sensors for receiving IR commands from a remote control, optical sensors, speed and/or cadence sensors, gesture sensors (e.g., for sensing gesture commands), etc.) that provide inputs to the processor 24. The AVDD 12 may include a wireless TV broadcast port 40 that provides input to the processor 24 for receiving OTA TV broadcasts. In addition to the foregoing, it should be noted that the AVDD 12 may also include an Infrared (IR) transmitter and/or an IR receiver and/or an IR transceiver 42, such as an IR data association (IRDA) device. A battery (not shown) may be provided for powering the AVDD 12.
Furthermore, in some implementations, the AVDD 12 may include a Graphics Processing Unit (GPU) 44 and/or a Field Programmable Gate Array (FPGA) 46. In accordance with the present principles, the AVDD 12 may use the GPU and/or FPGA for, for example, artificial intelligence processing, such as training a neural network and performing operations (e.g., reasoning) of the neural network. It should be noted, however, that the processor 24 may also be used for artificial intelligence processing, such as where the processor 24 may be a Central Processing Unit (CPU).
Still referring to FIG. 1, in addition to the AVDD 12, the system 10 may also include one or more other computer device types that may include some or all of the components shown for the AVDD 12. In one example, the first device 48 and the second device 50 are shown and may include components similar to some or all of the components of the AVDD 12. Fewer or more devices than shown may be used.
The system 10 may also include one or more servers 52. The server 52 may include at least one server processor 54, at least one computer memory 56 (such as a disk-based storage device or a solid state storage device), and at least one network interface 58 that allows communication with the other devices of fig. 1 over the network 22 under the control of the server processor 54, and in accordance with the present principles, may in fact facilitate communication between the server, the controller, and the client devices. It should be noted that the network interface 58 may be, for example, a wired or wireless modem or router, a Wi-Fi transceiver, or other suitable interface, such as a radiotelephone transceiver.
Thus, in some embodiments, server 52 may be an internet server and may include and perform "cloud" functionality such that in example embodiments, devices of system 10 may access a "cloud" environment via server 52. Alternatively, the server 52 may be implemented by a game console or other computer in the same room or in the vicinity of other devices shown in FIG. 1.
The apparatus described below may incorporate some or all of the elements described above.
"geographically remote" refers to locations beyond the visual and audible range of each other, typically separated by one mile or more from each other.
FIG. 2 illustrates example logic in an example flow chart format consistent with the present principles. In essence, projectors are used for motion capture (mocap), and the points of view (POV) of multiple actors on multiple geographically distant stages are tracked for position-tracking fidelity.
Beginning at block 200, the movement of each of a plurality of actors at their respective stages or other locations is captured using, for example, a projector that reflects light off reflective tags or other markers carried by the actor. At block 202, the video images of the mocap data for each actor are synchronized with one another using, for example, timestamps appended to each actor's mocap frames to align the frames with each other in real-world time or video scene time. At block 204, the mocap of the multiple actors is combined into a single scene; because this occurs in real time or near real time as the actors are filmed, the director may guide the actors by giving them stage guidance at block 206, as discussed in more detail below.
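To make blocks 202 and 204 concrete, the sketch below shows one way timestamped mocap frames from two remote stages might be aligned and then merged into a single scene. The frame structure, the tolerance value, and the function names are illustrative assumptions and are not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class MocapFrame:
    timestamp: float   # capture time in seconds on a shared reference clock
    actor_id: str      # which remote actor produced this frame
    markers: dict      # marker name -> (x, y, z) position

def synchronize(stream_a, stream_b, tolerance=1 / 240):
    """Pair frames from two stages whose timestamps agree to within the tolerance.
    Both streams are assumed to be sorted by timestamp (block 202)."""
    pairs, j = [], 0
    for frame_a in stream_a:
        # advance j to the frame in stream_b closest in time to frame_a
        while (j + 1 < len(stream_b) and
               abs(stream_b[j + 1].timestamp - frame_a.timestamp) <
               abs(stream_b[j].timestamp - frame_a.timestamp)):
            j += 1
        if stream_b and abs(stream_b[j].timestamp - frame_a.timestamp) <= tolerance:
            pairs.append((frame_a, stream_b[j]))
    return pairs

def merge_into_scene(pairs):
    """Combine each synchronized pair into one scene frame keyed by actor (block 204)."""
    return [{"time": a.timestamp, a.actor_id: a.markers, b.actor_id: b.markers}
            for a, b in pairs]
```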
Indeed, FIG. 3 shows a screen shot 300 of an example stage display 302 that may be installed in a studio or other shooting location within view of an actor as the actor performs for mocap. Text stage guidance 304 may be presented on the display to prompt the actor to take a particular action, such as looking up and to the left toward the virtual location of the dragon rider in the video. An audio cue (such as a beep or voice guidance) may be emitted by one or more speakers 306 to the same effect, e.g., a beep from a speaker located in the upper-left corner of the display. In this way, mocap actors can look at monitors installed on the motion capture stage so that they see their own mocap and the animations they are reacting to (e.g., so as not to walk into a person or wall in the animation).
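As an illustration only, the stage guidance 304 and a localized audio cue from a speaker 306 might be driven by something like the following; the cue fields and the display/speaker interfaces are hypothetical, since the disclosure does not define an API.

```python
class StageCue:
    """A single piece of stage guidance sent to a remote stage."""
    def __init__(self, text, speaker_corner=None, beep=False):
        self.text = text                      # e.g., "Look up and left at the dragon rider"
        self.speaker_corner = speaker_corner  # e.g., "upper_left", to localize the sound cue
        self.beep = beep

def present_cue(display, speakers, cue):
    # show the textual stage guidance 304 on the on-set monitor 302
    display.show_text(cue.text)
    # optionally emit a beep from the speaker nearest the virtual point of interest
    if cue.beep and cue.speaker_corner in speakers:
        speakers[cue.speaker_corner].play_tone(frequency_hz=880, duration_s=0.2)
```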
FIG. 4 shows an example distributed performance environment 400 showing, for illustration, two remote studios or movie sets 402 in which one or more corresponding actors 404 are performing for mocap purposes. One or more displays 406 and/or speakers 408 may be mounted in the studio 402 as shown, and may be instantiated by the display 302 shown in FIG. 3, for example. Video feeds of the actors 404 may be sent in a web conference (WebEx) feed via wired and/or wireless paths, e.g., a Wide Area Network (WAN), to a director location 410 remote from the studios or sets 402, where a person 412, such as a director or Quality Control (QC) technician, may operate a director computer 414 that presents video from each set or studio 402.
Thus, FIG. 4 shows that actor mocap videos may be captured using multiple stages/studios and streamed into a Virtual Reality (VR) world, so that each stream is combined into a single video at the director computer 414. This facilitates creating large, multi-person scenes from actors who are geographically distant from each other, and is particularly useful for body motion capture of multiple actors distributed across a corresponding plurality of stages.
An operator/director (QC operator) combining the mocap video may be located at a location 410 remote from the stages 402, networked, for example, using a Virtual Private Network (VPN). Remote access software may be used to move the mocap video into the QC computer 414, which may feed back stage direction and other information, such as whether a remote camera was bumped, whether another take is needed, etc.
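A minimal sketch of the feedback path from the QC computer 414 back to a stage follows; the JSON message format, port number, and the assumption of a simple TCP connection over the VPN are illustrative choices, not requirements of the present principles.

```python
import json
import socket

def send_stage_feedback(stage_host, message, port=9000):
    """Send a short director/QC note (e.g., 'camera was bumped, reset and take it again')
    to the display computer at a remote stage over the VPN."""
    payload = json.dumps({"type": "stage_direction", "text": message}).encode("utf-8")
    with socket.create_connection((stage_host, port), timeout=5) as conn:
        conn.sendall(payload)
```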
FIG. 5 shows a wall of retro-reflectors 500, arranged for example in a grid, that may be applied to a wall of the motion picture set 402 and illuminated by one or more projectors 502 mounted, by means of one or more booms 504, to a Head Mounted Display (HMD) 506 of a mocap actor. This provides real-world reference points for the mocap actor wearing the HMD 506, who views the projector's light reflected from the retro-reflectors 500 through the HMD display 508. One or more speakers 510 may be provided on the HMD, and one or more transceivers 514 may be accessed by one or more processors 512 to enable output from the HMD, including control of the projector 502.
The "wall" of retroreflective material 500 and projector 502 on HMD 506 provide different frames of reference relative to the wall for different actors. Only the person wearing HMD 506 can see the projected reflection from its perspective. In this manner, using the walls of retroreflective material 500, the virtual scenery actors can see the necessary references without impeding each other, thereby seeing aligned reference reflections.
It should be appreciated that the HMD 506 may include one or more internal cameras 516 to track the wearer's head and eyes to better determine what the actor would see in the virtual environment, so that the appropriate portion of the scene can be provided to the projector 502 for projection onto the retro-reflectors 500.
FIG. 6 shows additional features of the retro-reflectors 500, onto which an HMD projector (such as the projector 502 in FIG. 5) has projected various images from an existing virtual scene into which the actor's mocap is to be combined. An image 602 of the actor, along with visual or audible identifications 604 of the various images, may be projected onto the retro-reflectors 500 for viewing by the actor. In the example of FIG. 6, an image 606 of a dragon is projected at the location at which the dragon is simulated to be in the virtual world, along with an image 608 of another character that is based on mocap video from another actor.
FIG. 7 illustrates additional example logic in an example flowchart format consistent with an implementation in which a reference projection is presented on a display of the HMD 506. Beginning at block 700, for an HMD with an internally visible retro-reflector, head and eye pose may be tracked at block 702 based on images from an internal camera of the HMD. Proceeding to block 704, the scale and size of images from the existing video of the virtual scene that are projected on the HMD 506 may be altered based on the head/eye tracking of block 702. In this way, the actor may look up at the image of the head of a character modeled as being taller than the actor, or look down at the image of a character modeled as being shorter than the actor.
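One hedged way to realize block 704 is sketched below: given the tracked head position and a character's modeled height, estimate how large the reference image should be drawn and whether the actor must look up or down at it. The coordinate convention, the inverse-distance scaling, and the function name are assumptions for illustration only.

```python
import math

def reference_scale_and_elevation(head_pos, character_pos, character_height_m, eye_height_m):
    """Return (scale, elevation_radians) for a projected reference character.
    Positions are (x, y, z) in meters in a shared stage/VR coordinate frame;
    a positive elevation means the actor must look up (character taller than actor)."""
    dx = character_pos[0] - head_pos[0]
    dz = character_pos[2] - head_pos[2]
    distance = max(math.hypot(dx, dz), 1e-6)   # horizontal distance to the character
    scale = character_height_m / distance      # apparent size shrinks with distance
    elevation = math.atan2(character_height_m - eye_height_m, distance)
    return scale, elevation
```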
Alternatively, FIG. 8 shows a stage setting 800 in which a floor 802 has retroreflective markers 804 on which an actor 806 may walk, and a projector on the HMD or elsewhere in the stage setting 800 projects images onto the markers 804 to assist the actor 806 in navigating around virtual objects in the cinematic set during mocap. It should be appreciated that the techniques discussed above provide pre-recorded animations or videos against which mocap actors may perform.
FIG. 9 shows additional example logic in which, at block 900, a computer game engine with a plug-in computer program streams game data to the plug-in, which in turn automatically sends the game data (including audio and video) to any of the projectors herein to present reference imagery that helps the mocap actor give his or her performance.
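The plug-in of block 900 might look like the following skeleton, in which the engine hands each rendered reference frame to the plug-in and the plug-in forwards it to the on-set projectors; the hook name `on_frame_rendered` and the projector methods are placeholders, since real game engines expose their own plug-in interfaces.

```python
class ReferenceProjectorPlugin:
    """Forwards game-engine output (video and audio) to the projectors on set."""

    def __init__(self, projectors):
        self.projectors = projectors  # objects wrapping the on-set or HMD projectors

    def on_frame_rendered(self, frame_rgb, audio_chunk, timestamp):
        # push the reference image (and any cue audio) out to every projector
        for projector in self.projectors:
            projector.send_frame(frame_rgb, timestamp)
            if audio_chunk is not None:
                projector.send_audio(audio_chunk, timestamp)
```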
FIG. 10 shows a screenshot of an example HMD 506. Similar to the exterior wall of retro-reflectors 500 shown in FIGS. 5 and 6, an internal projector in the HMD 506 may project an image 1000 of an actor, along with visual or audible identifications 1002 of the various images, onto a display of the HMD for viewing by the actor. In the example of FIG. 10, an image 1004 of a dragon is projected at the location at which the dragon is simulated to be in the virtual world, along with an image 1006 of another character that is based on mocap video from another actor. The display of the HMD may be a reflective surface, such as a mask or visor of the HMD onto which the images are projected, with head tracking used to correct the size and scale of the various images.
Whether projected onto a wall, a floor, or an HMD component, a character (such as the dragon described above) may become part of the reference video, and on different, geographically remote stages the actors may treat that character as being in the same location in VR space. Moreover, by sending the mocap feed of one actor to the display of another, remote actor, each actor can perceive the presence of the other actor on the other stage along with the same dragon and real actor. Thus, each actor is captured on video that is fed into a composite virtual representation combining a virtual character (such as the dragon) with the mocap video of the real actors into a single scene. In this way, the people on each stage can see the same combined scene and adjust accordingly. Whatever the actor should react to (another actor or a prerecorded character), the screen or cage system gives the actor an indication of where to look.
Thus, the problem being solved is providing the actor with a physical reference on the stage. The audio being played may also serve as a cue to the actor.
In the above examples, the reference image may be stabilized by transforming the image based on the head movements of the mocap actor. Moreover, an audible alert, such as a "beep," may be sounded just before a character appears in the VR world. A network synchronization protocol may desirably be implemented between the distributed computers herein to ensure that the various videos are frame-aligned within the same scene.
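As a rough illustration of the pre-appearance alert and of frame alignment against a shared clock, the sketch below schedules a beep one second before a prerecorded character's entrance and maps local timestamps onto reference-clock time; the one-second lead, the clock-offset convention, and the speaker method are illustrative assumptions.

```python
def schedule_entrance_beep(speaker, entrance_time, lead_s=1.0, clock_offset_s=0.0):
    """Schedule an audible 'beep' shortly before a prerecorded character appears.
    clock_offset_s is how far the shared reference clock is ahead of this stage's
    local clock, e.g., as estimated by a network time protocol."""
    speaker.play_tone_at(entrance_time - lead_s - clock_offset_s,
                         frequency_hz=880, duration_s=0.2)

def to_reference_time(local_timestamp, clock_offset_s):
    """Map a locally captured frame timestamp onto the shared reference clock
    so that videos from the distributed stages can be aligned frame by frame."""
    return local_timestamp + clock_offset_s
```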
It should be appreciated that while the present principles have been described with reference to some exemplary embodiments, these embodiments are not intended to be limiting and that various alternative arrangements may be used to implement the subject matter claimed herein.

Claims (20)

1. A method, the method comprising:
providing a frame of reference for at least a first actor at a first location when shooting the first actor for motion capture (mocap) at least in part by:
presenting at least one reference image on a Head Mounted Display (HMD) worn by the first actor; and/or
providing a retro-reflector on a wall of the first location to reflect light toward the first actor; and/or
providing a visible marking on the floor of the first location.
2. The method of claim 1, comprising presenting at least one reference image on a Head Mounted Display (HMD) worn by the first actor.
3. The method of claim 1, comprising providing a retro-reflector on a wall of the first location to reflect light toward the first actor.
4. The method of claim 3, comprising providing a light emitter coupled to an HMD worn by the actor, the light reflected from the retro-reflector being from the light emitter.
5. The method of claim 3, comprising providing a visible marking on the floor of the first location.
6. The method of claim 1, the method comprising:
providing a frame of reference for at least a second actor at a second location when shooting the second actor for mocap, the first location being geographically distant from the second location; and
presenting mocap from the first actor and the second actor in a web conference (WebEx) on at least one director display in communication with the first location and the second location.
7. The method of claim 1, comprising providing a frame of reference to the first actor at least in part using audio played at the first location.
8. The method of claim 4, comprising providing a plurality of light emitters on the HMD.
9. The method of claim 6, comprising synchronizing mocap videos of the first actor and the second actor in time.
10. An apparatus, the apparatus comprising:
at least one computer storage device that is not a transient signal and that includes instructions executable by at least one processor to:
receive motion capture (mocap) video of a first actor from a first camera at a first location;
receive mocap video of a second actor from a second camera at a second location;
synchronize the mocap videos with each other; and
merge the mocap videos into a single scene on at least one display at a third location geographically remote from the first location and the second location.
11. The device of claim 10, wherein the instructions are executable to:
at least one light emitter at the first location is actuated to project reference light onto a wall of a retro-reflector to reflect reference light toward the first actor.
12. The device of claim 10, wherein the instructions are executable to:
at least one marker on the floor of the first location is actuated to reflect reference light toward the first actor.
13. The device of claim 10, wherein the instructions are executable to:
at least one stage instruction is sent to a Head Mounted Display (HMD) worn by the first actor.
14. The apparatus of claim 10, comprising the at least one processor.
15. An apparatus, the apparatus comprising:
at least one Head Mounted Display (HMD) component, the at least one HMD component comprising:
at least one processor configured with instructions;
at least one display;
at least one speaker; and
at least one projector configured to project motion capture (mocap) reference light onto at least one surface visible to a wearer of the HMD assembly to provide a spatial reference for the wearer during mocap.
16. The apparatus of claim 15, wherein the projector is mounted on a boom of the HMD assembly.
17. The device of claim 15, wherein the surface comprises the display of the HMD assembly.
18. The apparatus of claim 15, wherein the surface comprises a wall of a retro-reflector.
19. The apparatus of claim 15, comprising at least one wireless transceiver on the HMD assembly for receiving commands from a geographically remote director computer.
20. The apparatus of claim 19, wherein the instructions are executable to:
presenting the command on the display and/or playing audio on the speaker in response to the command.
CN202180079700.2A 2020-11-28 2021-11-26 Reference frame for motion capture Pending CN116941234A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/118,905 2020-11-28
US17/535,623 2021-11-25
US17/535,623 US20220180664A1 (en) 2020-11-28 2021-11-25 Frame of reference for motion capture
PCT/US2021/060899 WO2022115662A2 (en) 2020-11-28 2021-11-26 Frame of reference for motion capture

Publications (1)

Publication Number Publication Date
CN116941234A 2023-10-24

Family

ID=88386549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180079700.2A Pending CN116941234A (en) 2020-11-28 2021-11-26 Reference frame for motion capture

Country Status (1)

Country Link
CN (1) CN116941234A (en)

Similar Documents

Publication Publication Date Title
CA2913218C (en) Systems and methods for a shared mixed reality experience
US9779538B2 (en) Real-time content immersion system
US10277813B1 (en) Remote immersive user experience from panoramic video
US20120249424A1 (en) Methods and apparatus for accessing peripheral content
US9473810B2 (en) System and method for enhancing live performances with digital content
US11647354B2 (en) Method and apparatus for providing audio content in immersive reality
US20160179206A1 (en) Wearable interactive display system
WO2021095573A1 (en) Information processing system, information processing method, and program
CN116940966A (en) Real world beacons indicating virtual locations
US20220139050A1 (en) Augmented Reality Platform Systems, Methods, and Apparatus
KR20110006976A (en) Mutimedia syncronization control system for display space from experience space
US20230186552A1 (en) System and method for virtualized environment
US20220180664A1 (en) Frame of reference for motion capture
CN116941234A (en) Reference frame for motion capture
US11689704B2 (en) User selection of virtual camera location to produce video using synthesized input from multiple cameras
US20220180854A1 (en) Sound effects based on footfall
WO2023090038A1 (en) Information processing apparatus, image processing method, and program
CN117940976A (en) Adaptive rendering of games according to device capabilities

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination