GB2598556A - Device and associated method for triggering an event during capture of an image sequence - Google Patents


Info

Publication number
GB2598556A
GB2598556A GB2013384.9A GB202013384A
Authority
GB
United Kingdom
Prior art keywords
beacon
image
recording location
location
event
Prior art date
Legal status
Pending
Application number
GB2013384.9A
Other versions
GB202013384D0 (en)
Inventor
Smuts Timo
Current Assignee
Badweather Systems Inc
Original Assignee
Badweather Systems Inc
Priority date
Filing date
Publication date
Application filed by Badweather Systems Inc filed Critical Badweather Systems Inc
Priority to GB2013384.9A
Publication of GB202013384D0
Publication of GB2598556A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00Systems controlled by a computer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/48Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G01S19/49Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system whereby the further system is an inertial position system, e.g. loosely-coupled
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/16Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Studio Devices (AREA)

Abstract

Triggering a filming event during image sequence capture, comprising: receiving an indication of a beacon’s position at a location; superimposing the beacon position on an image of the location; determining an object’s location at the location; determining, based on the indication and object location, whether the object is in a trigger region associated with the beacon; triggering the filming event if the object is within the trigger region. Also disclosed: networking the beacon with, and transmitting its location to, a controlling beacon; tracking object movement using an attached second beacon. Objects may be vehicles or stuntmen. Images of the recording location may be displayed on a device, e.g. smart phone, tablet, film camera. Beacon position may be determined by: determining distance and vector between the beacon and device; using global navigation systems, e.g. GPS, GNSS, GIS, GLONASS, BEIDOU, GALILEO. Probabilistic models may estimate velocity and predict future location of objects moving at high speed. Image processing may: detect objects and planar surfaces; form and stitch point clouds to represent topography. Trigger regions may be a threshold radius surrounding beacon location. Filming events such as explosions or cars flipping may be controlled automatically. Applications include safety during stunts and special effects.

Description

Device and Associated Method for Triggering an Event During Capture of an Image Sequence
Field of Invention
The invention is in the field of the capture of image sequences, for example in the film industry. In particular the invention relates to devices and methods that improve the efficiency and safety of recording filming events, such as special effects, stunts and other related events, during the capture of image sequences.
Background
Filming events, such as special effects, stunts and the use of hydraulic, pneumatic or pulley-based systems, are used to enhance an image sequence. For example, an image sequence may incorporate a special effect such as an explosion, or a stunt such as a car being flipped, in order to entertain the viewer. These effects can either be recorded during the recording of the image sequence at the recording location, or they can be added afterwards as computer generated imagery.
There are problems associated with recording a filming event at the recording location, as this involves the use of vehicles and of expensive and dangerous consumables such as explosives. During the recording of a filming event there can be a risk to stuntmen who are involved in the filming event, and there is also a risk that the filming event is not executed to plan, and so needs to be re-recorded at considerable expense. This is particularly the case for filming events comprising special effects events such as explosions. Explosive charges are often placed in the ground and have to be activated at a precise time to create the filming event. If the explosive charge is activated too early then the filming event is not recorded to a satisfactory standard, and so may have to be re-recorded. If the explosive charge is activated too late then a stuntman may be too close to the explosive charge, and so the explosive charge may cause injury. Visual cues as to the position of the explosive charge may not be used during recording as this may detract from the image sequence. This is especially problematic in featureless areas such as deserts or oceans, or other such locations.
The current system for film safety involves the placement of rudimentary physical markers (traffic cones and flags) to denote the position of obstacles and/or points of interest or danger. These are generally required in action sequences to mark the predetermined locations of explosive charges or other interactive obstacles that may occur in a landscape on which filming is set to take place (but are removed prior to filming).
These markers are held in position to provide all necessary crew with the visual/positional data required for the sequence. The camera crew requires this information to capture the interaction. The stunt crew require this information to be able to position themselves or their vehicles in accurate proximity to the obstacle to allow for the desired interaction. The special effects department is responsible for initiating the interaction between the obstacle and the stunt performer/vehicle. The stunt and special effects crews are required to independently coordinate this interaction within a set of safety parameters largely concerned with the proximity of the stunt performer to the obstacles in question.
At present the physical markers are placed for rehearsals only, allowing all relevant crew to see the position of the obstacles. Rehearsals are essentially dry runs, which require the full or partial action of the sequence to be performed or run as close to reality as physically possible, with the notable exception of interactions with obstacles such as the initiation of explosive charges.
The current practice allows for rehearsals at various speeds to allow for easier analysis, and requires audio cues from those in charge of initiating the obstacle interactions. Special effects technicians will often shout "BANG" on shared communication channels. After rehearsals have been completed the markers are removed and the sequence is run with interactions, whilst capturing the image sequence. The positions of the obstacles are then in many instances no longer visible at all to many of the essential crew required for the sequence in question. The crew are required to rely on recent memory of those positions from rehearsals completed prior to the take.
The ability to recall this information in a largely undifferentiated environment such as a snow or desert landscape continues to prove extremely difficult. The reliance on memory is problematic, especially in instances that require pinpoint interactions over very short periods of time. The entire process is almost exclusively reliant on human hand, eye and body coordination: on the stunt performer or driver's ability to navigate the track or path to and between obstacles, and on the special effects technician's ability to judge the vehicle or performer's speed and position relative to an obstacle that he can no longer see.
This can lead to human error by the stuntmen (or indeed by the camera crew or the special effects technician) that can result either in the filming event not being captured accurately, or even in injury to the stuntmen. Tragically, in some instances mistakes have led to the death of stuntmen during the capture of image sequences involving a filming event.
The use of computer generated imagery reduces the expense and risk of recording filming events, but often does not look as realistic to a viewer, and so is not a solution to such problems.
Aspects of the claims address the above problems associated with recording a filming event at the recording location. For example, aspects of the invention improve the safety of stuntmen implementing the filming event, and improve the reliability of executing the filming event as planned.
Statements of Invention
Aspects of the invention are set out in the independent claims. Optional features are set out in the dependent claims.
According to a first aspect there is a device for triggering a filming event at a recording location during image capture of an image sequence of the recording location, the device comprising: a receiver, configured to receive an indication of the position of a first beacon in the recording location; a processor for determining, based on the indication and object location data, whether an object is in a trigger region of the recording location associated with the beacon; and a trigger for triggering a filming event in the recording location, during image capture of the image sequence, in the event that the object is within the trigger region. This device is advantageous as it allows a beacon position to be monitored during the capture of an image sequence involving a filming event, and to use this to determine whether, or when, the filming event should be triggered. This improves the safety and accuracy of the filming event.
Optionally, the object location data comprises one of: image data from the image sequence indicating the location of the object, and/or data received from a second beacon co-located with the object within the recording location.
Optionally, determining whether an object is in the trigger region comprises one of: identifying the object in an image of the image sequence; and/or determining the location of the object based on image data from the image sequence; and/or comparing the object data to the position of the first beacon, and determining if the object and the first beacon are less than a set distance apart.
Optionally, determining whether an object is in a trigger region comprises detecting coincidence of the object and the trigger region, and triggering the filming event in response to said detection.
Optionally, the trigger comprises a signal output for providing a control signal to a filming event consumable such as an explosive.
Optionally, the device for triggering a filming event further comprises a visualisation element to superimpose the position of the first beacon on the image of the recording location. The visualisation element may allow those working on the film set, such as cameramen and explosive engineers, to visualise the position of the beacon in the recording location, despite the lack of visual markers in the recording location.
Optionally, the visualisation element is configured to superimpose an indication of the trigger region on the image.
Optionally, the device for triggering a filming event further comprises a transmitter configured to transmit the location of the device to the first beacon.
According to a second aspect there is provided a method of controlling a filming event at a recording location during image capture of an image sequence of the recording location, the method comprising the steps of: receiving a position of a first beacon in the recording location; superimposing the position of the first beacon on an image of the recording location; controlling a filming event in the recording location in response to determining that a selected object is within a trigger region of the recording location associated with the beacon. Superimposing the position of the first beacon allows control of the filming event based on whether the object is in the trigger region, and avoids mistakes that occur due to human error associated with misremembering the position of a consumable such as an explosive which has no visual cue in the recording location.
Optionally, the trigger region comprises a selected distance between the selected object and the first beacon.
Optionally, determining that a selected object is a selected distance from the position of the first beacon comprises: determining that the object has reached a selected distance from the beacon; and/or determining that the beacon has reached a selected distance from the object.
Optionally, superimposing the position of the first beacon comprises providing a visual cue as to the position of the first beacon in the image of the recording location, for example wherein the visual cue comprises a graphical region indicating the trigger region.
Optionally, the method of controlling a filming event further comprises, in the event the position of the first beacon is behind an obstruction such that the position of the beacon is not visible in the image of the recording location, superimposing on the image of the recording location an indication that the position of the first beacon is hidden from view. This may stop a user such as an explosive engineer or cameraman from mistakenly believing that the first beacon is positioned on the obstruction.
Optionally, the method of controlling a filming event further comprises providing an indication of how far away the position of the first beacon is, and/or providing an indication of how far away the selected object is.
Optionally, the method of controlling a filming event further comprises, receiving the position of a second beacon indicating the position of the selected object.
Optionally, the filming event comprises actuating an event apparatus. The event apparatus may be any apparatus used in the filming event.
The event apparatus may comprise an explosive, and actuating the explosive causes an explosion. The event apparatus may comprise a pneumatic device comprising a reservoir for holding compressed gas. Actuating the pneumatic device may comprise releasing the compressed gas to simulate the effect of an explosion. The event apparatus may comprise a pulley or hydraulic system, and actuating the pulley or hydraulic system may cause a force to be applied to a prop, for example to flip a vehicle such as a car.
Optionally, the filming event is an explosion, and controlling the filming event comprises triggering the explosion.
Optionally, the method of controlling a filming event further comprises adjusting the superimposed image to account for movement of the device, based on the position of the device.
Optionally, the method is performed by the device described above according to the first aspect.
According to a third aspect there is provided a computer program product comprising program instructions configured to perform the method as set out above.
According to a fourth aspect, there is provided a programmable device, programmed with the computer program product as set out above, for example wherein the programmable device is a mobile phone, tablet, wearable device or film camera.
According to a fifth aspect there is provided a method of installing a beacon system at a recording location, such that it is configured to control a filming event during the image capture of an image sequence at a recording location, the method comprising: positioning a beacon at a selected position within the recording location; connecting the beacon with a device, such that the beacon is configured to transmit its position to the device, wherein the device is configured to superimpose the position of the beacon on an image of the recording location. This may allow a simple and efficient set-up of the beacon system in a recording location.
According to a sixth aspect there is provided a second method of installing a beacon system at a recording location, such that it is configured to control a filming event during the image capture of an image sequence at a recording location, the method comprising: positioning a beacon at a selected position within the recording location; positioning a controlling beacon adjacent to, or in, the recording location; creating a network between the controlling beacon and the beacon, such that the beacon is configured to transmit its location to the controlling beacon; connecting the controlling beacon with a device, such that the controlling beacon is configured to transmit the position of the beacon to the device, wherein the device is configured to superimpose the position of the beacon on an image of the recording location. Having the controlling beacon in sole communication with the visualisation device may reduce issues of compatibility.
According to a seventh aspect there is provided a network of devices configured to control a filming event during the image capture of an image sequence at a recording location, the network comprising: a first beacon configured to transmit its position; a second beacon configured to indicate the movement of an object to which it is configured to be attached; a visualisation device configured to superimpose the position of the beacon on an image of the recording location, and further configured to superimpose the movement of the object on the image of the recording location. This network may provide the functionality described above, and may be sold as one package to reduce compatibility concerns.
Optionally, the network of devices further comprises a controlling beacon configured to act as a central node of the network. The network therefore comprises the controlling beacon, the first beacon and the second beacon. The controlling beacon is configured to receive the position of the first beacon, and movement of the object, and pass this information on to the visualisation device.
Optionally, the visualisation device is the device described above according to the first aspect.
Brief Description of Figures
Figure 1 illustrates a block diagram of a device for triggering a filming event at a recording location during image capture of an image sequence of the recording location, and a block diagram of a beacon at the recording location in communication with the device.
Figure 2 illustrates an exemplary network comprising a visualisation device and a plurality of beacons in a recording location.
Figure 3 illustrates a block diagram of a network of devices configured to control a filming event during the image capture of an image sequence at a recording location.
Figure 4 illustrates the starting point of a first image sequence with a car driving through a featureless area.
Figure 5 illustrates a progression of the first image sequence with the car driving further through the featureless area.
Figure 6 illustrates a further progression of the first image sequence as the car drives into a trigger area associated with a first beacon.
Figure 7 illustrates the end of the first image sequence as the first filming event is triggered as the car is within the trigger area.
Figure 8 illustrates the starting point of a second image sequence with a boat sailing through a featureless ocean.
Figure 9 illustrates the progression of the second image sequence as the boat sails through the featureless area.
Figure 10 illustrates a further progression of the second image sequence as the boat is within the trigger area.
Figure 11 illustrates the end of the second image sequence as the second filming event is triggered as the boat is within the trigger area.
Figure 12 illustrates the starting point of a third image sequence with a missile travelling toward a tank.
Figure 13 illustrates the progression of the third image sequence with the missile still some distance from the tank.
Figure 14 illustrates further progression of the third image sequence with the missile about to engage with the tank, and the trigger area associated with the missile encompassing the tank, and the trigger area associated with the tank encompassing the missile.
Figure 15 illustrates the end of the third image sequence as the third filming event is triggered when the tank is within the trigger area associated with the missile.
Figure 16 illustrates a method of controlling a filming event at a recording location during image capture of an image sequence of the recording location.
Figure 17 illustrates a first method of installing a beacon system at a recording location, such that it is configured to control a filming event during the image capture of an image sequence at a recording location.
Figure 18 illustrates a second method of installing a beacon system at a recording location, such that it is configured to control a filming event during the image capture of an image sequence at a recording location, in which a controlling beacon is used.
Figure 19 is a still image of a well-known scene in the film The Matrix.
Detailed Description of Figures
There is described a device and method for triggering a filming event during capture of an image sequence. In particular, the device 101 comprises a receiver 103, configured to receive an indication of the position of a beacon in the recording location, a processor 105 for determining, based on the indication and object location data, whether an object is in a trigger region of the recording location associated with the beacon, and a trigger 109 for triggering a filming event in the recording location, during image capture of the image sequence, in the event that the object is within the trigger region.
Figure 1 shows a block diagram of a network comprising a device 101, and a beacon 220 according to a first embodiment. The device 101 and the beacon 220 are in communication with one another. The beacon 220 is positioned in the recording location, and the device is positioned in or adjacent to the recording location. As explained below the device comprises an image capturing element 111. The recording location is the field of view of the image capturing element and includes the position of an object, and the position of a beacon. The beacon is not visible in the field of view of the image capturing element, as it is hidden for the purpose of recording the image sequence.
The device comprises a receiver 103, a processor 105, a visualisation element 107, a trigger 109 and an image capturing element 111. The receiver 103 is configured to receive an indication of a position of the beacon 220 in the recording location from the beacon. The receiver 103 is in communication with a processor 105 such that the indication of the position of the beacon is communicated to the processor 105. The device 101 comprises the image capturing element 111 to capture an image of the recording location, the captured image including an object located within the recording location in its field of view. The image capturing element 111 is also in communication with the processor 105 to communicate the captured image to the processor 105. The position of the object within the recording location is then determined, along with its associated object location data, from the captured image of the recording location. The device comprises a visualisation element 107, such as a screen.
The processor 105 is configured to send the object location data, the indication of the position of the beacon 220, and an image of the recording location to the visualisation element 107. The processor 105 also sends a trigger region associated with the position of the beacon to the visualisation element 107. The trigger region in this embodiment is a set radius surrounding the position of the beacon. The radius of the trigger region is known and stored in the processor 105 prior to sending. The visualisation element 107 is configured to superimpose the indication of a trigger region associated with the position of the beacon on the image of the recording location. The device 101 further comprises a trigger 109 for triggering a filming event in the recording location, during image capture of the image sequence, in the event that the object is within the trigger region. In this embodiment the trigger is controlled by a user in response to the object being within the trigger region.
Figure 1 also shows a block diagram of a beacon 220 comprising a positioning element 221, a transmitter 223, a processor 225, a power source 227, an input/output 228 and a relative position sensor 229. The beacon comprises the positioning element 221 configured to communicate with a global navigation system to give a measurement indicative of the absolute location of the beacon. The positioning element is configured to communicate these measurements to a processor 225 to determine the absolute position of the beacon 220 from said measurements. The beacon 220 further comprises relative positional sensors 229 that are configured to measure parameters indicative of movement of the beacon 220 (for example acceleration, angular velocity, orientation etc.) and communicate these measurements to the processor 225. In this example the relative positional sensor 229 comprises an accelerometer. The processor 225 is then configured to determine the movement of the beacon from these relative position sensor measurements. In this example the processor 225 integrates the measured acceleration to determine the velocity of the beacon, and integrates the velocity again to determine the displacement of the beacon 220. The processor 225 is configured to be in communication with the transmitter 223 to communicate the absolute position of the beacon, and the movement information of the beacon to the transmitter 223. The transmitter 223 is configured to be in communication with the receiver 103 of the device 101 and is configured to send the absolute position of the beacon and the movement information of the beacon 220 to the device 101. The beacon 220 further comprises the power source 227 to power the components, and the input/output 228. The input/output enables a user to interact with the beacon 220.
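Purely by way of illustration, the following sketch (in Python, with hypothetical names) shows one way the integration described above could be carried out: acceleration samples are integrated once to estimate velocity and again to estimate displacement. A real beacon would work in three axes and compensate for gravity, sensor bias and drift; this is not an implementation of the beacon firmware.

    # Illustrative sketch only: numerically integrating one-dimensional accelerometer
    # samples to estimate the beacon's velocity and displacement (dead reckoning).
    # A real beacon would work in three axes and correct for gravity, bias and drift.

    def integrate_motion(accel_samples, dt):
        """Trapezoidal integration of acceleration (m/s^2) sampled every dt seconds."""
        velocity = 0.0       # m/s, assumed zero at the start of the take
        displacement = 0.0   # m
        prev_a = accel_samples[0]
        for a in accel_samples[1:]:
            velocity += 0.5 * (prev_a + a) * dt   # integrate acceleration into velocity
            displacement += velocity * dt         # integrate velocity into displacement
            prev_a = a
        return velocity, displacement

    # Example: a beacon accelerating at roughly 2 m/s^2 for one second, sampled at 100 Hz
    samples = [2.0] * 101
    print(integrate_motion(samples, dt=0.01))     # approximately (2.0 m/s, 1.0 m)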
In use, the positioning element 221 of the beacon 220 communicates with a global navigation system. The positioning element then receives a measurement indicative of the absolute location of the beacon from the global positioning system. The positioning element 221 of the beacon 220 then communicates the measurement indicative of the absolute location of the beacon 220 to the processor 225. The processor 225 then determines the absolute position of the beacon 220. Simultaneously the relative positional sensors 229 measure one or more parameters indicative of movement of the beacon 220 (for example acceleration, angular velocity, orientation etc.). In this case the measurement is of acceleration of the beacon 220. The relative positional sensors 229 then communicate the relative positional measurements to the processor 225, and the processor 225 determines movement information of the beacon from these measurements. In this case the velocity and the displacement of the beacon 220 are determined by the processor 225. The processor 225 then communicates the absolute position and movement information of the beacon to the transmitter 223. The transmitter 223 of the beacon 220 then transmits the absolute position and movement information of the beacon 220 to the receiver 103 of the device 101.
The receiver 103 of the device 101 therefore receives an indication of a position of the beacon 220 in the recording location, as well as movement information associated with the beacon 220. The receiver 103 then communicates the indication of the position of the beacon, and the movement information of the beacon to the processor 105 of the device 101. The processor 105 pre-stores a set distance that corresponds to the radius of the trigger region around the position of the beacon. Simultaneously, the image capturing element 111 captures an image of the recording location. This image has a field of view that includes an object that is located within the recording location. The field of view of the captured image also includes the position of the beacon (although there is no indication of this position in the captured image). The image capturing element 111 then sends the captured image to the processor 105. The position of the object within the recording location is then determined. This is done by identifying the object within the image, and then determining a location of the identified object in the image. The processor 105 then locates the position of the beacon 220 in the captured image from the indication of the absolute position of the beacon. This is done by comparing the absolute location of the device 101 to the indication of the absolute position of the beacon 220, and then determining the distance and vector between the device 101 and the beacon. The distance and vector are then used to determine where in the captured image the beacon 220 is positioned. The pre-stored radius of the trigger region is then used in combination with the position of the beacon within the captured image to determine where in the captured image the trigger region is positioned. The processor 105 has then determined the position of the beacon and the trigger region within the captured image, and has determined the object location data indicating the position of the object within the captured image of the recording location. The processor 105 then determines, based on the position of the beacon within the captured image and the object location data, whether the object is in a trigger region of the recording location associated with the beacon 220. The processor 105 then sends the object location data and the position of the beacon and trigger region within the captured image to the visualisation element 107, along with the captured image of the recording location. The visualisation element then superimposes the position of the beacon on the image of the recording location and superimposes the trigger region on the image of the recording location by superimposing the trigger radius around the position of the beacon. The visualisation element 107 then displays the superimposed image. An indication of whether the object is in the trigger region is also shown on the superimposed image. The user, such as a special effects technician, then views the visualisation element and in the event that the object is within the trigger region the user then triggers the trigger. The trigger then sends a trigger signal to a detonator that detonates the filming event in response to the trigger signal. For example, the filming event may be an explosion, and the detonator then detonates an explosive charge in response to the trigger signal.
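Purely by way of illustration, the sketch below (Python, hypothetical names, a deliberately crude flat-earth approximation and a linear angle-to-pixel mapping) shows the kind of calculation the processor 105 could perform: converting the device and beacon positions into a local east/north offset, estimating where the beacon falls across the captured image, and testing whether the object lies within the trigger radius. The patent does not specify a projection or camera model, so the numerical details here are assumptions.

    import math

    # Illustrative sketch only (hypothetical names): place a beacon, known from its
    # GNSS position, into the camera image using the device's own position and
    # heading, then test whether a tracked object lies inside the trigger radius.
    # A production system would use a full camera model and terrain data.

    def enu_offset(lat_ref, lon_ref, lat, lon):
        """Approximate east/north offset in metres between two nearby lat/lon points."""
        r = 6371000.0
        east = math.radians(lon - lon_ref) * r * math.cos(math.radians(lat_ref))
        north = math.radians(lat - lat_ref) * r
        return east, north

    def beacon_pixel_column(device_pos, device_heading_deg, beacon_pos,
                            image_width=1920, horizontal_fov_deg=60.0):
        """Very rough horizontal pixel position of the beacon in the captured image."""
        east, north = enu_offset(device_pos[0], device_pos[1], beacon_pos[0], beacon_pos[1])
        bearing = math.degrees(math.atan2(east, north))        # bearing from device to beacon
        relative = (bearing - device_heading_deg + 180) % 360 - 180
        if abs(relative) > horizontal_fov_deg / 2:
            return None                                        # beacon outside the field of view
        return int(image_width / 2 + relative / horizontal_fov_deg * image_width)

    def object_in_trigger_region(object_pos, beacon_pos, radius_m):
        east, north = enu_offset(beacon_pos[0], beacon_pos[1], object_pos[0], object_pos[1])
        return math.hypot(east, north) <= radius_m

    device = (51.5000, -0.1200)     # hypothetical device lat/lon
    beacon = (51.5002, -0.1199)     # hypothetical beacon lat/lon
    car = (51.50021, -0.11992)      # hypothetical object lat/lon
    print(beacon_pixel_column(device, 10.0, beacon))            # pixel column, or None
    print(object_in_trigger_region(car, beacon, radius_m=3.0))  # True: ~1.8 m from the beacon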
In real-time the device 101 is therefore able to show a superimposed image of the recording location on the visualisation element 107 to a user. This means that a special effects technician viewing the visualisation element can see the position of the beacon 220 even though it is not visible in real-life (it is hidden for the purpose of recording the image sequence). The special effects technician can also see the trigger region and whether the object is within the trigger region. This addresses the problems discussed in the background section, and enables the filming event to be more reliable, and safer.
The device shown in Figure 1 may be implemented in a number of ways. For example, any receiver 103 may be used, such as a wireless receiver. The trigger 109 may comprise a transmitter configured to transmit a trigger signal, wherein the trigger signal may comprise a signal output for actuating an event apparatus, for example by providing a control signal to a filming event consumable such as an explosive (alternatively the trigger signal may be sent to an intermediate device first). For example, the transmitter may be configured to send data wirelessly. The receiver 103 and the transmitter 109 may together form a transceiver, and may be implemented by a single piece of hardware.
Additionally, the visualisation element 107 may be a screen, such as a touch screen for a mobile telephone. The superimposed image shows a trigger region around the position of the beacon. The superimposed image also shows an indication of whether the object is within the trigger region. For example, the trigger region may pulse or change colour to indicate that the object is within the trigger region. A special effects technician viewing the visualisation element is therefore informed when the object is within the trigger region by the superimposed image shown by the visualisation element. A member of staff such as a special effects technician may watch the screen to give them an indication of when an object is within the trigger region. For example the object location may be determined by the special effects technician watching the screen. Without the superimposed image provided by the visualisation element 107 the special effects technician would have to rely on the outdated methods described in the background. Alternatively the process may be fully automated, in which case the visualisation element 107 is not required, as the device would trigger the filming event automatically in the event that the object is within the trigger region. In this case once the processor has determined that the object is within the trigger region the processor communicates this to the trigger which then sends a trigger signal to the detonator. This makes the process simpler, and removes the user from the process.
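As an illustrative sketch only, the fully automated variant described above might reduce to a loop of the following form, where get_beacon_position, get_object_position and send_trigger_signal are hypothetical stand-ins for the receiver, the image-processing pipeline and the trigger output respectively.

    import time

    # Illustrative sketch only of the fully automated variant: poll the beacon and
    # object positions and fire the trigger once the object enters the trigger region.

    def run_automatic_trigger(get_beacon_position, get_object_position,
                              send_trigger_signal, radius_m=3.0, poll_s=0.02):
        fired = False
        while not fired:
            bx, by = get_beacon_position()    # metres, in a shared local frame
            ox, oy = get_object_position()
            if ((ox - bx) ** 2 + (oy - by) ** 2) ** 0.5 <= radius_m:
                send_trigger_signal()         # e.g. a message to the event apparatus
                fired = True
            time.sleep(poll_s)                # poll at roughly 50 Hz

    # Hypothetical usage with fixed positions, for demonstration only:
    run_automatic_trigger(lambda: (0.0, 0.0), lambda: (1.0, 2.0), lambda: print("TRIGGER"))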
However, the use of the visualisation element 107 allows a special effects technician to override and ignore the indication that the object is within the trigger region if there are other concerns, such as the presence of another object that would make the filming event unsafe.
The image capturing element 111 may be any element configured to capture an image. For example, the camera on a mobile telephone may be used as the image capturing element 111. The determination of the position of the object may be made locally on a processor specifically associated with the image capturing element. The device 101 may determine the displacement of the object relative to the device 101 from the captured image, and from the position of the object within the captured image. The absolute position of the object may then be determined by adding the displacement of the object compared to the device 101 to the known absolute position of the device 101. The image capturing element 111 is optional. In other embodiments the device may receive the object location data from an external device, or the device may receive a captured image of the recording location from an external device, and then determine the object location data from the received captured image.
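Purely for illustration, converting an object displacement measured relative to the device into an absolute position, as described above, might look as follows; the flat-earth approximation and the function names are assumptions.

    import math

    # Illustrative sketch only: converting an object displacement measured relative to
    # the device (east/north metres, e.g. estimated from the captured image) into an
    # absolute position by adding it to the device's own GNSS position.

    def offset_to_latlon(device_lat, device_lon, east_m, north_m):
        r = 6371000.0
        lat = device_lat + math.degrees(north_m / r)
        lon = device_lon + math.degrees(east_m / (r * math.cos(math.radians(device_lat))))
        return lat, lon

    # Object estimated to be ~40 m north and ~15 m east of a device at a hypothetical position
    print(offset_to_latlon(51.5000, -0.1200, east_m=15.0, north_m=40.0))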
Alternatively rather than the object location being determined from the captured image, the object may have a second beacon co-located with it. The second beacon may communicate with the device in the same manner as the first beacon described above. Therefore the position of the second beacon, and therefore the object location, may be received by the receiver of the device. The processor is then configured to determine whether the object location is within the trigger region associated with the first beacon. This may be performed by comparing the received object location with the received position of the first beacon, for example to determine whether these positions are within a set distance of each other, e.g. radius of the trigger region.
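By way of illustration only, when the object carries a second beacon the check reduces to comparing two received positions against the trigger radius; a sketch using the haversine distance (an assumption, as the patent does not specify how the distance is computed) is given below.

    import math

    # Illustrative sketch only: when the object carries a second beacon, the trigger test
    # reduces to comparing two received GNSS positions against the trigger radius.
    # A real system might instead work in a projected local frame.

    def haversine_m(p1, p2):
        (lat1, lon1), (lat2, lon2) = p1, p2
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
        return 2 * 6371000.0 * math.asin(math.sqrt(a))

    def object_within_trigger(first_beacon_pos, second_beacon_pos, radius_m):
        return haversine_m(first_beacon_pos, second_beacon_pos) <= radius_m

    # Hypothetical positions roughly 1.8 m apart
    print(object_within_trigger((51.5002, -0.1199), (51.50021, -0.11992), radius_m=3.0))  # True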
The device 101 may be configured to transmit its location to the beacon. This may be useful in some applications where the beacon and the device are both moving, and the beacon and the device must remain within a set distance of each other. This information may be used to determine the relative movement of the beacon relative to the device, or of an object co-located with the beacon.
It is noted that the trigger region may be stored either by the beacon 220, or by the device 101. The trigger region may be sent by the beacon 220 to the device 101. For example, in the case where multiple beacons are used each beacon may have a unique trigger region (e.g. some explosions may be larger than others and so may have a correspondingly larger trigger region - this may also be dependent on the camera angle from which the image sequence is being filmed). In this instance each beacon may communicate its corresponding trigger region to the device 101 to reduce the complexity in calibrating the network. Trigger regions may also be asymmetrical, or have shapes other than a circle, and such information about the trigger region may be stored either on the device or the beacon, and then communicated to the device. In either embodiment the processor 105 of the device then determines the position of the trigger region in the captured image of the recording location by situating the position of the trigger region around the determined position of the beacon within the captured image of the recording location.
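Purely as an illustrative sketch, a per-beacon trigger region of the kind described above could be represented as follows, with a circular region defined by a radius and an asymmetric region defined by a polygon of local offsets; the data structure and names are assumptions rather than a disclosed format.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    # Illustrative sketch only: one possible representation of a per-beacon trigger region
    # that a beacon could communicate to the device. A circular region is a radius around
    # the beacon; an asymmetric region is a polygon of east/north offsets (metres) relative
    # to the beacon position.

    @dataclass
    class TriggerRegion:
        radius_m: float = 0.0
        polygon: List[Tuple[float, float]] = field(default_factory=list)  # (east, north) offsets

        def contains(self, east_m: float, north_m: float) -> bool:
            if self.polygon:
                return point_in_polygon(east_m, north_m, self.polygon)
            return (east_m ** 2 + north_m ** 2) ** 0.5 <= self.radius_m

    def point_in_polygon(x, y, poly):
        """Standard ray-casting test for a point inside a simple polygon."""
        inside = False
        j = len(poly) - 1
        for i in range(len(poly)):
            xi, yi = poly[i]
            xj, yj = poly[j]
            if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
                inside = not inside
            j = i
        return inside

    circular = TriggerRegion(radius_m=3.0)
    lobed = TriggerRegion(polygon=[(-1.0, -1.0), (6.0, -1.0), (6.0, 2.0), (-1.0, 2.0)])
    print(circular.contains(2.0, 1.0), lobed.contains(5.0, 0.0))   # True True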
The beacon 220 shown in Figure 1 may be implemented in a number of ways. For example, the positioning element 221 may use global navigation satellite systems (GNSS) and global spatial information technology (such as GPS, GIS, GLONASS, BEIDOU, GALILEO, etc.) to determine the position of the beacon 220. Other technologies such as Differential GPS (DGPS) or Real-time kinematic (RTK) positioning may be used when available to improve accuracy of the positioning obtained from the GNSS.
Additionally, the relative position sensors 229 may include accelerometers to measure the acceleration of the beacon 220. The processor may use the measured acceleration to determine the velocity, and displacement of the beacon 220. The processor 225 may verify the accuracy of the velocity and displacement by comparing this to a velocity and displacement determined from a time series of absolute positions determined using the positioning element 221. Optionally, instead of, or in combination with an accelerometer, a magnetometer may be used to determine orientation relative to magnetic north, and one or more gyroscopes may be used to measure orientation and angular velocity. The relative position, orientation and motion measured by the relative position sensors 229 may be used in combination with the satellite position to calculate the beacons position and movement. The processor 225 may be used to perform any required calculations based on the measured data.
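By way of illustration only, the consistency check described above might be sketched as follows, comparing a dead-reckoned velocity against a velocity derived from a time series of GNSS fixes; the one-dimensional simplification, the tolerance and the names are assumptions.

    # Illustrative sketch only: cross-checking a dead-reckoned velocity against a velocity
    # derived from a time series of GNSS fixes. Positions are metres along the track
    # (one-dimensional) for simplicity.

    def gnss_velocity(positions_m, dt_s):
        """Mean velocity over the GNSS time series using finite differences."""
        if len(positions_m) < 2:
            return 0.0
        return (positions_m[-1] - positions_m[0]) / (dt_s * (len(positions_m) - 1))

    def velocities_consistent(inertial_v, gnss_v, tolerance_mps=1.0):
        return abs(inertial_v - gnss_v) <= tolerance_mps

    fixes = [0.0, 4.8, 10.1, 15.2, 19.9]               # GNSS positions, one fix per second
    v_gnss = gnss_velocity(fixes, dt_s=1.0)            # ~5.0 m/s
    print(velocities_consistent(inertial_v=5.3, gnss_v=v_gnss))   # True: within tolerance
    print(velocities_consistent(inertial_v=8.0, gnss_v=v_gnss))   # False: flag for review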
The beacons 220 may have a negligible physical footprint, allowing them to be substantially hidden from the camera during the capture of an image sequence. The beacons 220 may be able to remain in place and active during both rehearsals and shooting while providing position and optionally movement information to increase the reliability and safety of the filming event.
The transmitter 223 may transmit using any form of wireless communication. The beacon may optionally comprise a receiver configured to receive positional and movement information from the device, or from other beacons. The receiver may be a wireless receiver. The receiver and the transmitter 223 may together form a transceiver, and may be implemented by a single piece of hardware.
It is noted that in embodiments where an object is moving at a high speed through a recording location, or in embodiments where there is likely to be drift in the received location of the first, or second, beacon (for example in forests), a probabilistic model may be employed by the processor to verify the accuracy of the received location data. For an object moving at a high speed the image capturing element may capture a series of images of the moving object. The processor may use this to estimate the object's velocity, and/or acceleration, in order to predict where the object will be a set period of time later. If the received object location data is different from the predicted object location, the processor will then perform an action. That action may be overriding the received object location and instead using the predicted location, or, where the discrepancy is too large, suggesting to the technician that the stunt be aborted for safety reasons. In the case of locations with a high likelihood of GNSS drift the predicted location may override unexpected changes to the received location that are likely attributable to drift.
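Purely for illustration, such a predictive check might be sketched as below, using a simple constant-velocity extrapolation; the thresholds, the choice of a constant-velocity model rather than, say, a Kalman filter, and all names are assumptions.

    # Illustrative sketch only of the predictive check described above: estimate the object's
    # velocity from two recent observed positions, extrapolate forward, and compare against
    # the next received fix.

    def predict_position(p_prev, p_curr, dt_s, horizon_s):
        vx = (p_curr[0] - p_prev[0]) / dt_s
        vy = (p_curr[1] - p_prev[1]) / dt_s
        return (p_curr[0] + vx * horizon_s, p_curr[1] + vy * horizon_s)

    def reconcile(predicted, received, drift_tol_m=2.0, abort_tol_m=10.0):
        error = ((predicted[0] - received[0]) ** 2 + (predicted[1] - received[1]) ** 2) ** 0.5
        if error > abort_tol_m:
            return "abort", predicted           # discrepancy too large: suggest aborting the stunt
        if error > drift_tol_m:
            return "use_predicted", predicted   # likely GNSS drift: override the received fix
        return "use_received", received

    # Object moving at ~25 m/s, observed in consecutive frames 0.04 s apart
    pred = predict_position((0.0, 0.0), (1.0, 0.0), dt_s=0.04, horizon_s=0.04)
    print(reconcile(pred, (2.05, 0.1)))   # small error: use the received location
    print(reconcile(pred, (7.0, 0.0)))    # 5 m jump: treated as drift, predicted location used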
Figure 2 shows one example of a network 300 of beacons 220. Two visualisation devices 101 are shown, as are a plurality of beacons 220. The beacons 220 include beacons configured to send only their position 220a, and beacons configured to send both their position and movement 220b. The visualisation devices 101 receive the position of a first beacon 220a in the recording location, superimpose the position of this beacon on an image of the recording location, and control one or more filming events in the recording location in response to determining that a selected object is within a trigger region of the recording location associated with the beacon. As shown in Figure 2, the network comprises connections between the visualisation device 101 and the beacons 220, as well as between the beacons themselves. The connections between the beacons 220 themselves are nonessential, but may enable each beacon 220 to calculate its distance to adjacent beacons, which may be useful information for the visualisation device 101, may lower the processing requirements of the visualisation device, and may reduce latency. As shown in Figure 2 some of the beacons 220 are not visible to one of the visualisation devices 101 as there is an obstruction. In this case the visualisation device 101 may superimpose on the image of the recording location an indication that the position of an obstructed beacon is hidden from view. It is noted that two visualisation devices are used in this network. The first visualisation device, positioned partially behind the obstruction, may be used by the special effects technician. The second visualisation device may be used by a stunt performer and may move through the recording location, and so move relative to the beacons 220. The superimposed image produced by each of the visualisation devices will be different, as each visualisation device will produce a superimposed image from its own perspective.
Figure 3 shows a block diagram of members of the network of devices configured to control a filming event during the image capture of an image sequence at a recording location. The network comprises a first beacon 220a configured to transmit its position, a second beacon 220b configured to indicate the movement of an object to which it is configured to be attached, and a visualisation device 101 configured to superimpose the position of the beacon on an image of the recording location, and further configured to superimpose the movement of the object on the image of the recording location.
In this case the second beacon is shown attached to a missile 641 as is illustrated in Figures 12-15. The missile in these Figures moves across the recording location. The second beacon 220b may be attached to any moving object, such as a drone, a vehicle or other such moving object.
The visualisation device 101 is shown as the device of Figure 1.
The first beacon 220a may be any beacon configured to transmit its position. The first beacon may also transmit movement information, but this is not necessary.
The first beacon 220a is shown comprising just a positioning element 221, a processor and a transmitter 223. The positioning element 221 is configured to communicate with a global navigation system, and send the position measurements to the processor 225 to determine an absolute position. The absolute position is then sent to the transmitter 223 to be sent to the device 101. Other features may be present in the first beacon as set out in the description of Figure 1. The second beacon 220b is shown comprising a positioning element 221, a transmitter 223, a relative position sensor 229, and a processor 225. As set out in relation to Figure 1 the relative position sensors send the relative position measurements to the processor 225, which determines the movement of the beacon 220b. The positioning element 221 is configured to communicate with a global navigation system, and send the position measurements to the processor 225 to determine an absolute position. The processor is then configured to send both the absolute position and the movement information of the second beacon 220b to the transmitter 223 to send to the device 101. Other features may be present in the second beacon 220b as set out in the description of Figure 1.
In addition to the devices of the network shown in Figure 3, a controlling beacon may also be present. The controlling beacon may be configured to act as a central node of the network, the network comprising the controlling beacon, the first beacon and the second beacon. The controlling beacon may be configured to receive the position of the first beacon, and the movement associated with the object (to which the second beacon is configured to be attached), and then pass this information on to the visualisation device 101. The controlling beacon may be referred to as the "anchor".
The network may comprise an ad-hoc network such as a Bluetooth network. For example, the network may comprise a master node and a series of slave nodes. The master node may be the controlling beacon, and the first and second beacons 220 may comprise slave beacons. The controlling beacon may either be in, or adjacent to the recording location. It may be used as a beacon in the filming event, or it may be used purely as an intermediate device between the first and second beacons 220 and the visualisation device 101. Alternatively the network may be a non-ad-hoc network such as infrastructure network. Embodiments utilising an infrastructure network may use the controlling beacon as the centralised node. In either embodiment using the controlling beacon may allow the network to be efficient to configure, and provides a single data source to the visualisation device 101 for efficiency of use. This may make installation simpler, and avoid compatibility issues.
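As an illustrative sketch only, the aggregation role of the controlling beacon might look as follows; the radio transport (for example a Bluetooth piconet) is not modelled, and the class and method names are assumptions.

    from typing import Dict, Optional, Tuple

    # Illustrative sketch only: the aggregation role of a controlling beacon acting as the
    # central node, collecting per-beacon reports and forwarding them to the visualisation
    # device as a single feed.

    class ControllingBeacon:
        def __init__(self) -> None:
            self._latest: Dict[str, dict] = {}

        def receive_report(self, beacon_id: str, position: Tuple[float, float],
                           movement: Optional[dict] = None) -> None:
            """Called whenever a networked beacon transmits its position (and optional movement)."""
            self._latest[beacon_id] = {"position": position, "movement": movement or {}}

        def feed_for_visualisation_device(self) -> Dict[str, dict]:
            """Single consolidated snapshot passed on to the visualisation device."""
            return dict(self._latest)

    anchor = ControllingBeacon()
    anchor.receive_report("beacon-1", (51.5002, -0.1199))                        # static charge
    anchor.receive_report("beacon-2", (51.5005, -0.1210), {"speed_mps": 22.0})   # on the vehicle
    print(anchor.feed_for_visualisation_device())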
Figure 4 shows a recording location in which an image sequence including a filming event is to be captured. In this case the filming event is the explosion of a vehicle 431. The device 101 is positioned out of view of the recording location, and is receiving information from the recording location shown. Figure 4 shows the arrangement at the beginning of the image sequence. Figure 4 shows a recording location, an object 431, which in this case is a vehicle, as well as a beacon location 433. The beacon is co-located with an explosive charge. A trigger region 435 around the beacon 220 is also shown. The beacon itself is not visible in Figure 4 as there is often no visual marker of an explosive charge when capturing an image sequence. This means that when the image sequence is recorded the beacon is not visible.
In this specific embodiment the device 101 (not shown) and the beacon 220 are connected to form a network. The beacon 220 in Figure 4 then sends a message across the network indicating its position in the recording location to the device 101. The device 101 in this embodiment comprises a camera or other image capturing element 111. Concurrent to the communication with the beacon, the device captures a first image of the recording location. The device also comprises a processor and uses this to determine the position of the car 431 in the first image that it has captured. This determination is made according to any known image processing techniques. Once this step is completed the device knows both the position of the car 431, and the position of the beacon 433. The device can then compare the position of the car 431 to the position of the beacon 433 to determine whether the car 431 is within the trigger region 435. This determination is performed using the processor of the device 101. In this example, this comparison comprises determining whether the position of the car 431 and the beacon 433 differ by less than a pre-selected distance (for example 3 metres). In Figure 4 the position of the car 431 is outside the trigger region 435 and so the device 101 indicates that the filming event should not yet be triggered. This indication is to a special effects technician who is viewing a visualisation element 107 of the device 101 (such as a screen). Figure 4 shows the image provided by the visualisation element 107 in this embodiment. If the car 431 was within the trigger region 435 the graphic showing the trigger region 435 in the image on the visualisation element 107 may change, for example by flashing or changing colour, or by another message appearing on-screen such as a tick.
In the past the position of the beacon 433 would not have been visible and the special effects technician in charge of detonating the explosive charge at the correct time would have had to remember the exact location of the explosive, and then note when the vehicle 431 was close to that location. However, the device 101 (as shown in Figure 1) comprises a receiver, and is configured to receive an indication of the position of the beacon. The device then processes whether an object 431 is in a trigger region 435 of the recording location associated with the beacon. In Figure 4 the object 431 is not within the trigger region 435 associated with the beacon 220, and so the explosive charge is not triggered.
Figure 5 shows progression of the image sequence. The object 431 has not yet entered the trigger region 435; however, in the past the special effects technician may have accidentally detonated the explosive charge early. The processor of the device 101, however, determines, based on the indication of the position of the beacon 433 and the object location data, that the object 431 is not in the trigger region 435, and so the explosive charge is not detonated.
Figure 6 shows a further progression of the image sequence. The object 431 has now entered the trigger region 435. The processor of the device 101 then determines, based on the position of the beacon 433 and the object location data, that the object is in the trigger region 435. In this embodiment the graphic shown by the visualisation element 107 may change, for example by flashing, to indicate to a special effects technician that the filming event should be triggered. This triggering may be automatic, or it may require a triggering action by the special effects technician. Figure 7 then shows that the device 101 has triggered the filming event 437 in the recording location based on the determination that the object is in the trigger region. This shows that the explosive charge has exploded as the explosive charge has been detonated. The triggering may comprise sending an electronic message from the device 101 (or from another device operated by the special effects technician) to a communication element associated with the explosive charge; this message indicates that the explosive charge should be detonated.
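Purely by way of illustration, the electronic trigger message mentioned above might be sent as sketched below. The transport (UDP), the address and the message format are assumptions, as the patent does not specify them; any real firing circuit would additionally require arming interlocks and authentication.

    import json
    import socket

    # Illustrative sketch only: one way a trigger message could be sent to a communication
    # element associated with the event apparatus. All addressing and formatting details
    # here are assumptions.

    def send_trigger_message(host: str, port: int, event_id: str) -> None:
        message = json.dumps({"event": event_id, "command": "trigger"}).encode("utf-8")
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(message, (host, port))

    # Hypothetical usage once the object is confirmed inside the trigger region:
    # send_trigger_message("192.0.2.10", 9000, event_id="event-1")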
The device 101 may determine the object location data from image data. For example the device 101 may receive images of the image sequence, or the device may comprise an image sensor 111 configured to capture images of the recording location. The device can then use the image to determine the object location. This can be done by first identifying the object 431 in an image, and then determining the location of that object based on the image. Determining whether an object 431 is in a trigger region 435 may comprise detecting co-incidence of the object 431 and the trigger region 435, and triggering the filming event in response to said detection. Alternatively a user of the device, such as a special effects technician, may use the image provided to determine whether the object 431 is within the trigger region 435.
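One non-limiting way of turning image data into object location data is sketched below. It assumes, for illustration only, that some object detector has already supplied a pixel bounding box for the object and that a ground-plane homography from pixel coordinates to recording-location metres has been calibrated in advance; the example matrix and all names are hypothetical.

    import numpy as np

    def pixel_to_location(pixel_xy, homography):
        # Map a pixel coordinate onto the ground plane of the recording location
        # using a pre-calibrated 3x3 homography (pixels to metres).
        px = np.array([pixel_xy[0], pixel_xy[1], 1.0])
        wx = homography @ px
        return (wx[0] / wx[2], wx[1] / wx[2])

    # Illustrative calibration: 100 pixels per metre, origin at the image corner.
    H = np.array([[0.01, 0.0, 0.0],
                  [0.0, 0.01, 0.0],
                  [0.0, 0.0, 1.0]])

    bbox = (1200, 800, 1400, 950)  # (x1, y1, x2, y2) from any object detector
    foot_point = ((bbox[0] + bbox[2]) / 2, bbox[3])  # bottom-centre of the box
    print(pixel_to_location(foot_point, H))  # approximate object location in metres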
It is noted that Figure 4 may be a visualisation of the recording location that is provided to staff members of the film production, such as cameramen or special effects technicians. The device may include a visualisation element 107 to superimpose the position of the first beacon 433 on the image of the recording location. In Figure 4 the position of the first beacon 433 is shown by the X displayed on the image of the recording location. The visualisation element 107 is also configured to superimpose the indication of the trigger region 435 onto the image of the recording location. The visualisation element 107 may for example be a screen such as a telephone screen. It is noted that the visualisation element 107 is not essential to the device. The processing may be performed internally, and the triggering performed automatically without display to a special effects technician. The visualisation element may nevertheless be advantageous, as it enables the special effects technician to override the determination of whether the filming event 437 should be triggered if the filming event appears unsafe to the special effects technician in any way.
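A minimal, non-limiting sketch of such a superimposition is given below, assuming OpenCV is available on the viewing device; the colours, marker style and function name are illustrative choices only.

    import cv2
    import numpy as np

    def overlay_beacon(image, beacon_px, trigger_radius_px, object_inside):
        # Superimpose the beacon position (an 'X') and the trigger region on an
        # image of the recording location, changing colour once the object is inside.
        colour = (0, 255, 0) if object_inside else (0, 0, 255)  # BGR, illustrative
        cv2.drawMarker(image, beacon_px, colour, markerType=cv2.MARKER_CROSS,
                       markerSize=40, thickness=3)
        cv2.circle(image, beacon_px, trigger_radius_px, colour, thickness=2)
        if object_inside:
            cv2.putText(image, "TRIGGER", (beacon_px[0] + 10, beacon_px[1] - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.0, colour, 2)
        return image

    frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for a captured frame
    overlay_beacon(frame, (640, 400), 120, object_inside=False)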
Figure 8 shows a recording location that is at sea. Bodies of water have very few reference points and so it is especially difficult for the cameramen, stuntmen and special effects technicians to co-ordinate a filming event without any human error. A further complication is that if the filming event requires an external device such as an explosive charge then these are often held below sea level by a partially submerged buoy. There is, however, no indication of whether the explosive charge has become separated from the buoy. In this case the cameraman, stuntman and special effects technician may all execute their roles perfectly, but due to the explosive charge moving in the current after detachment from the buoy, the safety of members of the crew may be at risk. This has led to fatalities in the past. Figure 8 shows a recording location including a boat 531, a beacon location 533, and a trigger region 535 associated with the beacon.
In Figure 8 the position of the first beacon 533 in the recording location is received by a device 101 (not shown). The position of the first beacon 533 is superimposed on an image of the recording location. This superimposition is shown in Figure 8. The filming event is then controlled in response to determining whether the boat 531 is within the trigger region 535 of the recording location associated with the beacon. In Figure 8 it is determined that the boat 531 is not within the trigger region 535, and therefore the filming event is not triggered.
In this specific embodiment the device 101 (not shown) and the beacon 220 are connected to form a network. The beacon in Figure 8 then sends a message across the network indicating its position 533 in the recording location to the device 101. The device 101 in this embodiment comprises a camera or other image capturing element 111. Concurrently with the communication with the beacon, the device captures a first image of the recording location. The device then superimposes the position of the beacon 533 and the surrounding trigger region 535 onto an image of the recording location that shows both the position of the beacon 533 and the boat 531. A special effects technician viewing the visualisation element of the device can then determine whether the boat 531 is within the trigger region 535 associated with the beacon. To aid the special effects technician, when the boat 531 is within the trigger region 535 an indication may be given to the technician: for example, the graphic showing the trigger region 535 may change, such as by flashing or changing colour, or another message may appear on-screen such as a tick. The filming event is controlled in response to this determination, and so in this case the special effects technician does not trigger the filming event.
In Figure 9 the boat 531 has moved close to the trigger region 535. The special effects technician, cameraman or stuntman could mistakenly believe that the boat 531 was within the trigger region 535, and this could lead to the failure of the filming event. However, in this embodiment the position of the first beacon 533 is received. This position is superimposed on the image of the recording location (as shown in Figure 9), and the filming event is controlled in response to determining that the selected object 531 is not within the trigger region 535 of the recording location associated with the beacon. In this case that means that the filming event is not triggered. This control may be automatic, or it may be administered by a user, such as a special effects technician, based on the superimposed image shown.
In Figure 10 the boat 531 has moved within the trigger region 535. The position of the beacon 533 is received, and superimposed on the image of the recording location. The filming event is then controlled in response to determining that the boat 531 is within the trigger region 535 of the recording location associated with the beacon. An indication may be provided to the special effects technician, such as the trigger region 535 flashing, to indicate that the special effects technician should control the filming event by detonating the explosive charge. As the boat 531 is within the trigger region, the filming event 537 is then triggered. This is shown in Figure 11, which shows an explosive charge being detonated at the location where it was co-located with the beacon. Both the explosive charge and the beacon are designed to be consumables in this embodiment, as neither will be recovered and neither is re-usable. Controlling the filming event 537 may comprise sending an electronic message from the device (or from another device operated by the special effects technician) to a communication element associated with the explosive charge, this message indicating that the explosive charge should be detonated.
In Figures 8-11 the trigger region 535 is defined by a selected distance: when the object 531 is positioned less than the selected distance from the beacon, the filming event 537 is controlled. In Figures 8-11 the beacon is (at least relatively) stationary. Therefore determining that the boat 531 is a selected distance from the position of the beacon 533 comprises determining that the object 531 has reached a selected distance from the beacon.
In the situation shown in Figures 8-11 the waves are constantly moving. If there is a particularly large wave in the foreground then the location of the beacon 533 may be obscured from images of the recording location. In the event that the beacon is behind an obstruction, an indication that the position of the beacon 533 is hidden may be superimposed on the image of the recording location. That may simply be changing the colour of the beacon position (for example to red) so that a viewer does not mistakenly believe the beacon to be positioned on the obstruction, such as the foreground wave.
In addition to the information shown in Figures 8-11, the superimposed image may also provide an indication of how far away the position of the beacon 533, or the selected object 531 (in this case the boat), is. In particularly featureless environments, such as arctic tundra, this may be especially useful for the user.
Figure 12 shows a third image sequence. In this case a missile 641 is heading towards a tank 631 in a barren landscape. As missiles come in different dimensions and sizes, it may be difficult to judge from a distance how close the missile 641 is to the tank 631. It is important that an explosion is triggered at the right time to reduce the risk of the filming event needing to be repeated at great expense. In Figure 12 the tank 631 has a first beacon co-located with it. The tank in this example is stationary. The missile 641 in Figure 12 has a second beacon co-located with it. The missile 641 is non-stationary in this embodiment. Figure 12 shows the beginning of the image sequence. The missile 641 and the tank 631 are some distance apart. A first trigger region 635 associated with the first beacon, and a second trigger region 645 associated with the second beacon, are shown.
In this specific embodiment the device 101 (not shown), the first beacon 220a and the second beacon 220b are connected to form a network. The first beacon 220a then sends a first message across the network to the device 101 indicating its position in the recording location. The second beacon 220b concurrently sends a second message across the network to the device 101 indicating its position and movement in the recording location. The device 101 therefore knows the position of both the tank 631 and the missile 641. A processor of the device 101 is then used to compare the locations of the tank 631 and the missile 641. In this example, this comparison comprises determining whether the positions of the missile 641 and the tank 631 differ by less than a pre-selected distance (for example 3 metres). The trigger region 635 of the tank and the trigger region 645 of the missile are the same size in this example (for example three metres), but in other embodiments the trigger regions may be asymmetrical, or different sizes for different beacons. In Figure 12 the position of the tank 631 is outside the trigger region 645 associated with the missile, and the position of the missile 641 is outside the trigger region 635 associated with the tank. The device therefore indicates that the filming event should not yet be triggered. This indication is provided to a special effects technician who is viewing a visualisation element 107 of the device (such as a screen). Figure 12 shows the image provided by the visualisation element in this embodiment. If the tank 631 or missile 641 were within the other respective trigger region, the graphic showing the trigger region in the image on the visualisation element may change, for example by flashing or changing colour, or another message may appear on-screen such as a tick. This change may provide the indication to the special effects technician. Controlling the filming event may be in response to this indication. For example, the special effects technician may control the filming event in response to the indication from the visualisation element of the device.
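By way of illustration only, the mutual comparison for the two-beacon case might be sketched as follows. The coordinates, radii and function name are assumptions for the sketch; they do not limit the comparison described above.

    import math

    def should_trigger(first_pos, second_pos, first_radius_m=3.0, second_radius_m=3.0):
        # Trigger only when each object lies within the trigger region associated
        # with the other beacon; the radii need not be equal (asymmetric regions).
        separation = math.dist(first_pos, second_pos)
        return separation < first_radius_m and separation < second_radius_m

    tank_beacon = (100.0, 50.0)     # position reported by the first beacon (tank 631)
    missile_beacon = (180.0, 50.0)  # position reported by the second beacon (missile 641)
    print(should_trigger(tank_beacon, missile_beacon))  # False while they are far apart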
Figure 13 shows a progression of the image sequence. The tank 631 and the missile 641 are located close to each other in Figure 13; however, the missile 641 is not within the first trigger region 635 and the tank 631 is not within the second trigger region 645. Without the use of the system, from afar a special effects technician might mistakenly believe the two objects to be close enough to each other, and so prematurely instigate an explosion. In this embodiment the positions of the first beacon (co-located with the tank 631) and the second beacon (co-located with the missile 641) are received, the positions of the first and second beacons are superimposed on an image of the recording location, and the filming event is then controlled in response to determining that a selected object is not within a trigger region. In this case the tank 631 is not within the second trigger region 645, and the missile 641 is not within the first trigger region 635, and so the filming event is not triggered.
Figure 14 shows a further progression of the image sequence. The tank 631 and the missile 641 are adjacent to one another. The position of the first beacon and the second beacon are received, and the positions of the first and second beacons are superimposed on the image of the recording location. The filming event is then controlled in response to determining whether a selected object is within a trigger region of the recording location. In this case the tank 631 is within the second trigger region 645, and the missile 641 is within the first trigger region 635 so the filming event 637 is commenced. This is shown in Figure 15 which shows that the filming event 637 is an explosion and that controlling the filming event comprises triggering the explosion. Controlling the filming event may comprise sending an electronic message from the device (or from another device operated by the special effects technician) to a communication element associated with the explosive charge, this message indicating that the explosive charge should be detonated.
In Figures 12-15 the second beacon is co-located with the missile 641, which is moving. The second beacon may be referred to as an "arrow", as it may have the capability of sending data corresponding to its movement (such as velocity, acceleration etc.) to the device 101.
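For illustration only, the kind of message such an "arrow" beacon might broadcast is sketched below as a simple data structure. The field names, units and example values are assumptions for the sketch, not a defined message format.

    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class ArrowBeaconReport:
        # Position plus movement data, so the device can anticipate where the
        # moving object will be when superimposing it on the image.
        beacon_id: str
        latitude: float
        longitude: float
        velocity_mps: float       # speed over ground, metres per second
        heading_deg: float        # direction of travel
        acceleration_mps2: float
        timestamp: float

        def to_json(self) -> str:
            return json.dumps(asdict(self))

    report = ArrowBeaconReport("arrow-220b", 51.5007, -0.1246, 240.0, 90.0, 0.0, time.time())
    print(report.to_json())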
It is noted that a computer program product may be configured to perform the method detailed above. Moreover, a programmable device may be programmed with said computer program product. The programmable device may be a mobile phone, a tablet, a wearable device, or a film camera. Embodiments utilising a mobile phone may allow an application comprising the computer program product to be downloaded, and this application may allow the method to be performed. Embodiments using a film camera may be particularly advantageous as such apparatus is already present on film sets, and so no additional equipment is required.
Figure 16 shows a method of operating the device described above. In particular the method comprises first receiving a position of a first beacon in the recording location 751, then superimposing the position of the first beacon on an image of the recording location 753, and then controlling a filming event in the recording location in response to determining that a selected object is within a trigger region of the recording location associated with the beacon 755. Superimposing the position of the first beacon on an image of the recording location may comprise superimposing either an exact position, or an area around the position of the beacon, such as the trigger region (as shown in Figures 12-15).
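A minimal, non-limiting sketch of this method as a single processing pass is given below. The stubbed input functions, the coordinate frame and the 3 metre threshold are assumptions made so the sketch is self-contained; in practice the inputs come from the receiver, the beacon and the image processing or second beacon described above.

    import math

    SELECTED_DISTANCE_M = 3.0

    def receive_beacon_position():
        # Stub for step 751: in practice this reads the beacon's message from the
        # network (position in a recording-location frame, metres).
        return (10.0, 4.0)

    def locate_selected_object():
        # Stub for the object location data (image processing or a second beacon).
        return (12.5, 4.0)

    def control_filming_event():
        print("filming event triggered")

    def run_once():
        beacon = receive_beacon_position()          # step 751
        obj = locate_selected_object()
        # Step 753 would superimpose 'beacon' on the displayed image here.
        if math.dist(obj, beacon) < SELECTED_DISTANCE_M:   # step 755
            control_filming_event()

    run_once()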
A filming event may be any event that is within the image sequence. For example, the filming event may be a special effect, a stunt, or the use of pneumatic or hydraulic systems, or the use of a pulley system. A special effect may comprise the use of a consumable such as an explosive charge. A stunt may comprise the use of a ramp to flip a car. Pulleys, pneumatic, or hydraulic systems may be used for animatronics, or to cause vehicle crashes without the use of stunt drivers. A pneumatic device may comprise compressed air, and may be configured to expel the compressed air in order to create a similar visual effect to that of an explosive charge.
Figure 17 shows a first method of installation. This comprises positioning a beacon at a selected position within a recording location 861, and then connecting the beacon with a device such that the beacon is configured to transmit its location to the device 863. This method does not require the use of a controlling beacon, so only those beacons used in the filming event are required in the network. The installation allows for multiple connections between the visualisation device and the plurality of beacons. The beacons may also connect with each other, although this is not essential.
Figure 18 shows a second method of installation. This comprises positioning a beacon at a selected position within the recording location 971, and then positioning a controlling beacon adjacent to, or in, the recording location 973. The next step is creating a network between the controlling beacon and the beacon such that the beacon is configured to transmit its location to the controlling beacon 975. The final step in Figure 18 is connecting the controlling beacon with a device, such that the controlling beacon is configured to transmit the position of the beacon to the device 977. This may be advantageous as it limits the connections between the visualisation device and the beacons to a single connection: that between the visualisation device and the controlling beacon. This may reduce problems associated with compatibility, and may simplify the installation process.
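For illustration only, the relaying role of the controlling beacon can be sketched as a small class. The class, the method names and the use of a callable to stand in for the single device connection are assumptions for the sketch.

    class ControllingBeacon:
        # Illustrative central node: each beacon reports to it, and it forwards a
        # single consolidated update over the one connection to the visualisation device.
        def __init__(self):
            self.positions = {}

        def receive_from_beacon(self, beacon_id, position):
            self.positions[beacon_id] = position

        def forward_to_device(self, send):
            # 'send' stands in for whatever single connection exists to the device.
            send(dict(self.positions))

    hub = ControllingBeacon()
    hub.receive_from_beacon("beacon-1", (10.0, 4.0))
    hub.receive_from_beacon("beacon-2", (40.0, 9.0))
    hub.forward_to_device(print)  # one message carrying every beacon position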
In one embodiment of a system, such as those described above, the system may utilise transform data for superimposed image positioning on the viewing device, the superimposed image positioning comprising the position of the one or more beacons being superimposed on the image of the recording location generated by the visualisation element. The example set out in Figures 12-15 may be one such example of this functionality, where two beacons are used to track object positioning without the requirement of image data to trigger an event. To trigger an event using only one beacon, human input may be used in some exemplary systems. Other examples, particularly those intended for object tracking over mid to long distances using images with a high density of pixels, may use only one beacon, and utilise image tracking, optionally without any human intervention.
Additionally, image data may be used to reinforce positioning in the systems described, when available and when the image processing is considered to add relevant and accurate object location information (for example where that information is as accurate as, or more accurate than, the GNSS data). Image data may become particularly relevant under these conditions:
* Image processing can achieve planar/surface detection
* Image processing can achieve point cloud formation
* Image processing frame rate is of a sufficient magnitude to accurately overlay positioning of detected objects/surfaces
* Image processing can achieve object detection
In these embodiments it may be advantageous for the pixel density to be high. This may aid the creation of point clouds using the viewing device. The network of beacons and devices may have the capability to stitch together point clouds generated from the image data of various viewing devices, to create a representation of the topography and surroundings. If object tracking is used within the network, the relative positioning of the object can be relayed to other devices within the network to superimpose its object location data as an image on a viewing device display. To add functionality to the aforementioned topography data, a feature of the system and application may involve placing location markers that function as virtual beacons. To increase their accuracy there is the option to anchor these points to point cloud data generated from image processing of the nearby surroundings. These location markers may include notable objects in the image, such as rocks in a desert environment.
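A minimal, non-limiting sketch of anchoring such a virtual beacon to nearby point cloud data is given below; the synthetic point cloud, the choice of the k nearest points and the function name are assumptions for the sketch.

    import numpy as np

    def anchor_virtual_beacon(marker_xyz, point_cloud, k=20):
        # Snap a user-placed location marker (a 'virtual beacon') to the mean of
        # the k nearest point-cloud points, so the marker stays tied to the real
        # surrounding geometry rather than to a free-floating coordinate.
        cloud = np.asarray(point_cloud)
        d = np.linalg.norm(cloud - np.asarray(marker_xyz), axis=1)
        nearest = cloud[np.argsort(d)[:k]]
        return nearest.mean(axis=0)

    # Synthetic cloud standing in for points generated by the viewing device.
    cloud = np.random.default_rng(0).normal(loc=(5.0, 2.0, 0.0), scale=0.3, size=(500, 3))
    print(anchor_virtual_beacon((5.2, 2.1, 0.0), cloud))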
In another embodiment, this time employing an additional second beacon co-located with the object, calculations of relative GNSS location are made through the beacons' processors and their communication with the on-board processors in the viewing device. The output of these calculations is used to create image data in the image display of the visualisation device, in the form of a superimposed symbol (beacon). This may be particularly useful in embodiments in which the pixel density of the object is somewhat lower, and so image tracking techniques, whilst still offering some benefits, may not be as accurate as a multiple beacon solution.
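As a worked illustration of such a relative GNSS calculation, the short sketch below converts the difference between two GNSS fixes into an east/north offset in metres using an equirectangular approximation, which is adequate over the short ranges of a recording location. The example coordinates and function name are assumptions for the sketch.

    import math

    EARTH_RADIUS_M = 6371000.0

    def gnss_offset_m(lat_ref, lon_ref, lat, lon):
        # Approximate east/north offset, in metres, of one GNSS fix relative to
        # another (equirectangular approximation).
        east = math.radians(lon - lon_ref) * math.cos(math.radians(lat_ref)) * EARTH_RADIUS_M
        north = math.radians(lat - lat_ref) * EARTH_RADIUS_M
        return east, north

    # Offset of the object's beacon from the viewing device, ready to be projected
    # into the camera image as a superimposed symbol.
    print(gnss_offset_m(51.50070, -0.12460, 51.50075, -0.12450))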
However, in cases where the pixel density is sufficiently high to be useful for calculation of an object location, image tracking may be used in tandem with a second beacon co-located with an object. For example, some environments may result in drift and other forms of inaccuracy in GNSS data, and said image processing techniques may be employed to augment the underlying calculations used for the displayed image results. At short ranges, the use of point cloud data generated by the viewing device would be suitable, and indeed advantageous, as a supplement to the primary calculations employed for the displayed image results. For example, where it appears there is GNSS drift, the position given by image tracking may take precedence until the GNSS error is rectified.
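By way of illustration only, that precedence rule might be sketched as follows; the drift test (an implausible jump between consecutive GNSS fixes), the threshold and the function name are assumptions made for the sketch.

    def fused_position(gnss_pos, image_pos, gnss_jump_m, drift_threshold_m=2.0):
        # Prefer the image-tracking estimate while the GNSS fix appears to be
        # drifting; otherwise treat GNSS as primary with the image estimate as a
        # supplement.
        if image_pos is not None and gnss_jump_m > drift_threshold_m:
            return image_pos  # image tracking takes precedence during drift
        return gnss_pos

    print(fused_position((10.0, 4.0), (10.4, 4.1), gnss_jump_m=5.3))  # -> (10.4, 4.1)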
An example of this set of circumstances would involve an interior set (an example of a recording location, but positioned inside a building) where a wall made of urethane, soft-board or another soft material is used to simulate concrete or brick-and-mortar walls. The urethane or tintex walls are then rigged with small explosive charges just below the camera-facing surface. In this scenario these small explosive charges are detonated in sequence to simulate the impacts of bullets fired at the wall. Upon detonation the explosives push debris outward, leaving behind a crater which the audience accepts as the impact left by gunfire at the surface.
In this example the effectiveness of the illusion lies in the hidden nature of the explosives, similar to that of explosives (or alternatively hydraulic or pneumatic rigs used to mimic an explosion) intentionally submerged in the ground of a desert recording location. Prior to detonation the wall's surface should be, and indeed is, painted or otherwise treated by artists so that it shows no signs of the embedded explosives and their positions in the wall. This sequence presents many of the same issues for the camera, special effects and stunt departments. It is a fairly common requirement in these kinds of sequences for a stunt person to be asked to run or move past the wall in question while the explosives are detonated in a sequence that narrowly trails his or her path. Range in this situation is significantly reduced, often to within 10 m for all concerned; under these conditions point cloud data may be used as a supplementary source in the calculations that produce the end result.
One scene that was shot using an interior set is shown in Figure 19. This scene is from the film "The Matrix", specifically the lobby shootout scene. If a similar scene were recorded in the future, it may be possible to implement the teachings of the present invention in order to improve safety. In this scene the impacts of bullets on the walls show that the force is outward, as a result of the explosives being embedded below the camera-facing surface. It will also be noticed that there are no visible indications on the wall surface prior to detonation as to the location of said explosive charges. Although a very high density grade of polyurethane has been used, coloured grey to match concrete, it is clear from the way the particulates float and move that it is not concrete.
This entire sequence would have been shot within the 10m range described above. Figure 19 shows the triggering of explosions embedded in the pillars to simulate bullet impacts. The event is triggered once the main character has moved close enough to the pillar.
In similar installations making use of the present invention, the location of a pillar containing an explosive charge may be calculated firstly by the physical attachment of a beacon to the column and the corresponding calculations made through communication between the beacon's internal instrumentation and processor and the viewing device running the application. Secondly, the beacon's relative position may additionally be calculated through the use of image processing techniques available to the viewing device, specifically point cloud calculations which allow the device and its camera to impose a plotted cloud of points representing the world and relative space. This grid of points allows the viewing device to accurately generate and assign a relative positional value to the pillar in question, within the viewing device's frame of view.
Under these circumstances, as well as others described above, it would be particularly advantageous to combine the data from the beacon correspondence and the cloud plotting data in the calculations that produce the final display data provided by the application and ultimately viewed by the user.
The receivers, transmitters, transceivers, or any other communication means described herein may comprise electronic circuitry and/or software/firmware configured to use any suitable wired or wireless communication method. Examples include short range wireless interfaces such as personal area network (PAN) communication, e.g. Bluetooth, and local area network (LAN) communication, such as WiFi. Communication via wide area networks (WANs), e.g. long range (LoRa) and/or low power wide area network (LPWAN) protocols, and telecommunications networks may also be used.
It will be appreciated from the discussion above that the embodiments shown in the Figures are merely exemplary, and include features which may be generalised, removed or replaced as described herein and as set out in the claims. With reference to the drawings in general, it will be appreciated that schematic functional block diagrams are used to indicate functionality of systems and apparatus described herein. For example, the functionality provided by the receiver and the trigger of the device may in whole or in part be provided by a single transceiver. In addition, the process functionality described may also be provided by devices which are supported by the visualisation device. It will be appreciated however that the functionality need not be divided in this way and should not be taken to imply any particular structure of hardware other than that described and claimed below. The function of one or more of the elements shown in the drawings may be further subdivided, and/or distributed throughout apparatus of the disclosure. In some embodiments the function of one or more elements shown in the drawings may be integrated into a single functional unit.
The above embodiments are to be understood as illustrative examples. Further embodiments are envisaged. It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.
In some examples, one or more memory elements can store data and/or program instructions used to implement the operations described herein. Embodiments of the disclosure provide tangible, non-transitory storage media comprising program instructions operable to program a processor to perform any one or more of the methods described and/or claimed herein and/or to provide data processing apparatus as described and/or claimed herein.
The device and beacons (and any of the activities and apparatus outlined herein) and any of their constituent parts may be implemented with fixed logic such as assemblies of logic gates, or programmable logic such as software and/or computer program instructions executed by a processor. Other kinds of programmable logic include programmable processors, programmable digital logic (e.g. a field programmable gate array (FPGA), an erasable programmable read only memory (EPROM), an electrically erasable programmable read only memory (EEPROM)), an application specific integrated circuit (ASIC), or any other kind of digital logic, software, code, electronic instructions, flash memory, optical disks, CD-ROMs, DVD ROMs, magnetic or optical cards, other types of machine-readable media suitable for storing electronic instructions, or any suitable combination thereof.

Claims (25)

1. A device for triggering a filming event at a recording location during image capture of an image sequence of the recording location, the device comprising: a receiver, configured to receive an indication of the position of a first beacon in the recording location; a processor for determining, based on the indication and object location data, whether an object is in a trigger region of the recording location associated with the beacon; and a trigger for triggering a filming event in the recording location, during image capture of the image sequence, in the event that the object is within the trigger region.
2. The device of claim 1, wherein the object location data comprises one of: image data from the image sequence indicating the location of the object; and/or data received from a second beacon co-located with the object within the recording location.
3. The device of claim 1 or claim 2, wherein determining whether an object is in the trigger region comprises: identifying the object in an image of the image sequence; and/or determining the location of the object based on image data from the image sequence; and/or comparing the object data to the position of the first beacon, and determining if the object and the first beacon are less than a set distance apart.
4. The device of any preceding claim, wherein determining whether an object is in a trigger region comprises detecting co-incidence of the object and the trigger region, and triggering the filming event in response to said detection.
5. The device of any preceding claim, wherein the trigger comprises a signal output for providing a control signal to a filming event consumable, for example an explosive.
6. The device of any preceding claim, further comprising a visualisation element to superimpose the position of the first beacon on the image of the recording location.
7. The device of claim 6, wherein the visualisation element is configured to superimpose an indication of the trigger region on the image.
8. The device of any preceding claim, further comprising a transmitter configured to transmit the location of the device to the first beacon.
9. A method of controlling a filming event at a recording location during image capture of an image sequence of the recording location, the method comprising the steps of: receiving a position of a first beacon in the recording location; superimposing the position of the first beacon on an image of the recording location; controlling a filming event in the recording location in response to determining that a selected object is within a trigger region of the recording location associated with the beacon.
10. The method of claim 9, wherein the trigger region comprises a selected distance between the selected object and the first beacon.
11. The method of any of claims 9-10, wherein determining that a selected object is a selected distance from the position of the first beacon comprises: determining that the object has reached a selected distance from the beacon; and/or determining that the beacon has reached a selected distance from the object.
12. The method of any of claims 9-11, wherein superimposing the position of the first beacon comprises providing a visual cue as to the position of the first beacon in the image of the recording location, for example wherein the visual cue comprises a graphical region indicating the trigger region.
13. The method of any of claims 9-12, further comprising, in the event the position of the first beacon is behind an obstruction such that the position of the beacon is not visible in the image of the recording location, superimposing an indication that the position of the first beacon is hidden from view on the image of the recording location.
14. The method of any of claims 9-13, further comprising providing an indication of how far away the position of the first beacon is, and/or providing an indication of how far away the selected object is.
15. The method of claims 9-14, further comprising receiving the position of a second beacon indicating the position of the selected object.
16. The method of any of claims 9-15, wherein the filming event comprises actuating an event apparatus.
17. The method of claim 16, wherein the event apparatus is: an explosive, and wherein actuating the explosive causes an explosion; and/or a pneumatic device comprising compressed gas, and wherein actuating the pneumatic device comprises releasing the compressed gas to simulate the effect of an explosion; and/or a pulley or hydraulic system, and wherein actuating the pulley or hydraulic system causes a force to be applied to a prop, for example to flip a car.
18. The method of any of claims 9-17, further comprising: adjusting the superimposed image to account for movement of the device based on the position of the device.
19. The method of any of claims 9-18, wherein the method is performed by the device of claims 1-8.
20. A computer program product comprising program instructions configured to perform the method of any of claims 9-19.
21. A programmable device, programmed with the computer program product of claim 20, for example wherein the programmable device is a mobile phone, tablet, wearable device or film camera.
22. A method of installing a beacon system at a recording location, such that it is configured to control a filming event during the image capture of an image sequence at a recording location, the method comprising: positioning a beacon at a selected position within the recording location; connecting the beacon with a device, such that the beacon is configured to transmit its position to the device, wherein the device is configured to superimpose the position of the beacon on an image of the recording location.
23. A method of installing a beacon system at a recording location, such that it is configured to control a filming event during the image capture of an image sequence at a recording location, the method comprising: positioning a beacon at a selected position within the recording location; positioning a controlling beacon adjacent to, or in, the recording location; creating a network between the controlling beacon and the beacon, such that the beacon is configured to transmit its location to the controlling beacon; connecting the controlling beacon with a device, such that the controlling beacon is configured to transmit the position of the beacon to the device, wherein the device is configured to superimpose the position of the beacon on an image of the recording location.
24. A network of devices configured to control a filming event during the image capture of an image sequence at a recording location, the network comprising: a first beacon configured to transmit its position; a second beacon configured to indicate the movement of an object to which it is configured to be attached; a visualisation device configured to superimpose the position of the beacon on an image of the recording location, and further configured to superimpose the movement of the object on the image of the recording location.
25. The network of claim 24, further comprising: a controlling beacon configured to act as a central node of the network, the network comprising the controlling beacon, the first beacon and the second beacon, wherein the controlling beacon is configured to receive the position of the beacon, and movement of the object, and pass this information on to the visualisation device; and/or wherein the visualisation device is the device of any of claims 1-8.
GB2013384.9A 2020-08-26 2020-08-26 Device and associated method for triggering an event during capture of an image sequence Pending GB2598556A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2013384.9A GB2598556A (en) 2020-08-26 2020-08-26 Device and associated method for triggering an event during capture of an image sequence

Publications (2)

Publication Number Publication Date
GB202013384D0 GB202013384D0 (en) 2020-10-07
GB2598556A true GB2598556A (en) 2022-03-09

Family

ID=72660692

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2013384.9A Pending GB2598556A (en) 2020-08-26 2020-08-26 Device and associated method for triggering an event during capture of an image sequence

Country Status (1)

Country Link
GB (1) GB2598556A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113721802B (en) * 2021-08-18 2024-09-27 广州南方卫星导航仪器有限公司 Vector capturing method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090051785A1 (en) * 2007-08-23 2009-02-26 Sony Corporation Imaging apparatus and imaging method
US20170019589A1 (en) * 2015-07-14 2017-01-19 Samsung Electronics Co., Ltd. Image capturing apparatus and method
US20180005049A1 (en) * 2016-06-29 2018-01-04 Sony Corporation Determining the position of an object in a scene
WO2019170387A1 (en) * 2018-03-07 2019-09-12 Volkswagen Aktiengesellschaft Overlaying additional information on a display unit
US20190302879A1 (en) * 2018-04-02 2019-10-03 Microsoft Technology Licensing, Llc Virtual reality floor mat activity region


Also Published As

Publication number Publication date
GB202013384D0 (en) 2020-10-07

Similar Documents

Publication Publication Date Title
US11740080B2 (en) Aerial video based point, distance, and velocity real-time measurement system
US20180356492A1 (en) Vision based location estimation system
CN105270399B (en) The device and method thereof of vehicle are controlled using vehicle communication
JP5430882B2 (en) Method and system for relative tracking
KR101936897B1 (en) Method for Surveying and Monitoring Mine Site by using Virtual Reality and Augmented Reality
NL2013724B1 (en) Underwater positioning system.
US8229163B2 (en) 4D GIS based virtual reality for moving target prediction
CN110520692B (en) Image generating device
KR101711602B1 (en) Safety inspection system using unmanned aircraft and method for controlling the same
CN110603463A (en) Non line of sight (NLoS) satellite detection at a vehicle using a camera
CN107918397A (en) The autonomous camera system of unmanned plane mobile image kept with target following and shooting angle
JP6942675B2 (en) Ship navigation system
CN108021145A (en) The autonomous camera system of unmanned plane mobile image kept with target following and shooting angle
US20120158237A1 (en) Unmanned apparatus and method of driving the same
CN103398710A (en) Navigation system for entering and leaving port of ships and warships under night-fog weather situation and construction method thereof
JP2015113100A (en) Information acquisition system and unmanned flight body controller
Azhari et al. A comparison of sensors for underground void mapping by unmanned aerial vehicles
GB2598556A (en) Device and associated method for triggering an event during capture of an image sequence
EP2523062B1 (en) Time phased imagery for an artificial point of view
US9292971B2 (en) Three-dimensional tactical display and method for visualizing data with a probability of uncertainty
CN111213107B (en) Information processing device, imaging control method, program, and recording medium
CN110411443A (en) A kind of rocker arm of coal mining machine inertia/visual combination determines appearance device and method
US11409280B1 (en) Apparatus, method and software for assisting human operator in flying drone using remote controller
JP2020173127A (en) Vehicle position presentation system, onboard device used therefor, vehicle position presentation method, and program for vehicle position presentation
RU2498222C1 (en) System of data exchange of topographic surveying vehicle