EP3353565A1 - Video recording method and apparatus - Google Patents

Video recording method and apparatus

Info

Publication number
EP3353565A1
Authority
EP
European Patent Office
Prior art keywords
wireless message
tag
video content
stored
recording area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP15904688.7A
Other languages
German (de)
French (fr)
Other versions
EP3353565A4 (en)
EP3353565B1 (en)
Inventor
Jukka REUNAMÄKI
Juha Salokannel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Technologies Oy
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Publication of EP3353565A1
Publication of EP3353565A4
Application granted
Publication of EP3353565B1
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00 Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A63B24/0021 Tracking a path or terminating locations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/02 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using radio waves
    • G01S3/14 Systems for determining direction or deviation from predetermined direction
    • G01S3/46 Systems for determining direction or deviation from predetermined direction using antennas spaced apart and measuring phase or time difference between signals therefrom, i.e. path-difference systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10009 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation sensing by radiation using wavelengths larger than 0.1 mm, e.g. radio-waves or microwaves
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102 Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105 Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/11 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/10 Active monitoring, e.g. heartbeat, ping or trace-route
    • H04L43/106 Active monitoring, e.g. heartbeat, ping or trace-route using time related information in packets, e.g. by adding timestamps
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661 Transmitting camera control signals through networks, e.g. control via the Internet
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B37/00 Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
    • G03B37/04 Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe with cameras or projectors providing touching or overlapping fields of view

Definitions

  • the specification relates to a video recording method and apparatus.
  • this specification describes a method comprising capturing, at a video capture apparatus, video content obtained from an array of cameras, wherein each camera is arranged to capture video content from a section of a recording area; receiving a wireless message from a tag situated in the recording area; applying a time stamp to the received wireless message; determining directional information of the tag in the recording area based on analysis of the wireless message; and causing the directional information and the timestamp to be stored.
  • the directional information may comprise angle of arrival information of the received wireless message.
  • the angle of arrival information may comprise an azimuthal angle.
  • the angle of arrival information may comprise an elevational angle.
  • the directional information and the timestamp may be stored at the video capture apparatus.
  • the directional information and the timestamp may be stored at a remote server.
  • the camera array may be a spherical array arranged to capture video content from a spherical recording area.
  • the wireless message may contain additional information input by a user of the tag.
  • the wireless message may contain a tag identifier identifying the tag from which the wireless message is received.
  • the timestamp may correspond to a time value of the video stream.
  • the wireless message may be a BLE advertisement packet.
  • the directional information and timestamp information may be stored in a file separate from the recorded video content.
  • the video content captured by each respective camera may be recorded as part of a composite data file or in an individual file.
  • the array of cameras may comprise a plurality of stereoscopic camera pairs.
  • this specification describes a computer program comprising instructions that, when executed by a computing apparatus, cause the computing apparatus to perform the method of the first aspect.
  • this specification describes an apparatus comprising at least one processor; at least one memory having computer-readable instructions stored thereon, the computer-readable instructions when executed by the at least one processor causing the apparatus at least to capture video content obtained from an array of cameras, wherein each camera is arranged to capture video content from a section of a recording area;
  • this specification describes a computer-readable medium having computer-readable code stored thereon, the computer-readable code, when executed by at least one processor, causing performance of capturing, at a video capture apparatus, video content obtained from an array of cameras, wherein each camera is arranged to capture video content from a section of a recording area; receiving a wireless message from a tag situated in the recording area; applying a time stamp to the received wireless message; determining directional information of the tag in the recording area based on analysis of the wireless message; and causing the directional information and the timestamp to be stored.
  • this specification describes an apparatus comprising means for capturing, at a video capture apparatus, video content obtained from an array of cameras, wherein each camera is arranged to capture video content from a section of a recording area;
  • Figure 1 illustrates a recording environment
  • Figure 2 is a schematic diagram illustrating a recording apparatus and an editing/replay apparatus
  • Figure 3 is a schematic diagram of a controller
  • Figure 4 is a schematic diagram of a camera
  • Figure 5 is a schematic diagram of a mobile tag
  • Figure 6 is a flow chart showing steps taken at a recording apparatus
  • Figure 7 is a flow chart showing steps taken at a replay apparatus;
  • Figure 8 shows a storage medium;
  • Figure 9 shows a camera array according to an alternative embodiment
  • Figure 10 shows a camera array according to an alternative embodiment.
  • Embodiments of the invention provide a system for capturing video and/or audio across an array of recording devices as well as capturing directional data relating to positioning tags that are within the field of view of the recording devices.
  • the directional data may be used during at least one of a subsequent video processing/editing stage so that multiple streams captured by the array of recording devices can be handled in an efficient way.
  • certain views may be highlighted as part of the video based on the relative location measured from the positioning tag's wireless transmissions.
  • Embodiments of the invention involve recording the relative locations of radio-detected objects and associating the locations with an area captured in the video and/or audio data.
  • a record of the directional radio information may be added to a video and/or audio recording file.
  • a separate file may be recorded containing the directional radio information and time-stamped so that it matches with the simultaneously recorded video and/or audio files.
  • Figure 1 shows an example recording environment 1.
  • a recording apparatus 10 comprises a spherical array of video cameras 11 and a chassis. While referred to as a recording apparatus, the apparatus is arranged to capture a live video stream as well as recording video content to be stored at a storage medium. In embodiments where the video content is stored, it can be stored at the recording apparatus itself, at a video editing computer or at a remote server.
  • the video cameras 11 are arranged to provide video coverage across 360 degrees in terms of both elevation and azimuth, i.e. across an entire sphere, which may be termed a video sphere. It should be borne in mind that in alternative embodiments, an array may comprise cameras covering a hemispherical area or indeed only a section of a spherical area.
  • Each of the cameras 11 is arranged to capture a section of the three-dimensional space surrounding the camera array 10.
  • the recording apparatus 10 shown in Figure 1 has six cameras 11a-f. Camera 11f is not shown in Figure 1 but is represented schematically in Figure 2.
  • a positioning tag 20 (for example a Bluetooth Low Energy tag) is shown in Figure 1.
  • the positioning tag 20 is part of a user's watch.
  • the positioning tag 20 may be attached to an animal's collar to help capture video for a wildlife documentary or the positioning tag 20 may be incorporated in a mobile phone or a key fob.
  • the recording apparatus 10 and positioning tags 20 may be configured to operate using any suitable type of wireless transmission/reception technology.
  • Suitable types of technology include, but are not limited to Bluetooth Basic Rate / Enhanced Data Rate (BR/EDR) and Bluetooth Low Energy (BLE).
  • Bluetooth Low Energy (BLE) is a relatively new wireless communication technology published by the Bluetooth SIG as a component of Bluetooth Core Specification Version 4.0.
  • Other types of suitable technology include for example technologies based on IEEE 802.11 and IEEE 802.15.4. The use of BLE may be particularly useful due to its relatively low energy consumption and because most mobile phones and other portable electronic devices will be capable of communicating using BLE technology.
  • Signals/positioning packets transmitted by the positioning tags 20 may be according to the High Accuracy Indoor Positioning (HAIP) solution, for example as described at http://www.in-location-alliance.com.
  • the recording apparatus 10 may be considered the origin of a spherical coordinate system.
  • the positioning tag 20 may thus be defined by an azimuthal angle Θ in the x-y plane shown in Figure 1 and by an elevational angle φ with respect to the z axis.
  • the positioning tag 20 periodically transmits wireless messages which are received by the recording apparatus 10.
  • the recording apparatus 10 determines an azimuthal angle of arrival and an elevational angle of arrival for the received packet.
  • the recording apparatus 10 may also determine a received signal strength indication (RSSI) value for the received packet. This information may then be stored in a suitable format.
  • FIG 2 is a schematic diagram of the recording apparatus 10 and editing and/or replay apparatus 15.
  • the recording apparatus 10 comprises a controller 13.
  • the controller 13 controls the recording apparatus 10.
  • the controller 13 is configured to determine angle-of-arrival information from packets received from the tags 20.
  • the controller 13 is mounted in the centre of the video/audio recording system.
  • the output of the camera recording is saved into one or more files in raw or processed format stored by the controller 13.
  • the output of each individual camera may be stored in a respective file stored by the controller 13.
  • the output of the cameras may be combined into a single, composite file stored by the controller 13.
  • the directional data is saved into one or more files in a raw or processed format.
  • the video and directional data may be stored locally at the recording apparatus 10 or separately, such as in a remote server 170, to be accessed remotely by the replay apparatus 15. Alternatively the recorded media and directional data may be uploaded to the replay apparatus 15 itself.
  • the recording apparatus 10 may comprise a communication module 14.
  • the communication module 14 comprises an RF antenna and RF transceiver to allow wireless communication between the recording apparatus 10 and a remote server 170 or computer 15 having a video-editing capability.
  • the recording apparatus 10 may be configured to communicate via a wireless network such as Wi-Fi.
  • the recording apparatus 10 may have a wired link (not shown) to a computer having a video-editing capability.
  • the recording apparatus 10 may be provided with a user input/output 12.
  • the user input/output 12 may comprise a screen and keyboard which may be integrated into a touchscreen.
  • the user input/output 12 is used to allow the user to control the operation of one or more of the cameras 11 and the playback functionality of the recording apparatus 10.
  • the user input/output 12 allows the user to control playback from selected one or more cameras.
  • a user may select one or more tags 20 from a user interface displayed on the screen of the apparatus 10. Selection of parts of the video content relevant to the tag location may then be performed at the apparatus 10.
  • the screen may be used for viewing live footage from one or more cameras.
  • a user may select one or more tags 20 and view live footage from the cameras that are relevant to the selected tags.
  • the selected video content may be stored at the apparatus 10, the replay apparatus 15 or the remote server 170.
  • the editing and/or replay apparatus 15 may be a computer comprising a processor 150 and a storage device 151.
  • the storage device 151 comprises a volatile memory 152 and a non-volatile memory 153.
  • the non-volatile memory 153 may have an operating system 154 and video editing software 155 stored therein.
  • the non-volatile memory 153 may also store a directional data file 156 in which the directional data received from the controller 13 is stored.
  • a video file 157 may also be stored containing video stream data received from the controller 13.
  • the video file may be stored at the remote server 170.
  • the video stream data may be stored as separate files, whereby each video file contains the output of each respective camera.
  • the directional data may be contained within the video file(s).
  • the editing and/or replay apparatus 15 further comprises an RF transceiver 158 and an RF antenna 159 to enable wireless communication with the recording apparatus 10 and the server 170.
  • the replay apparatus 15 may be a computer having input and output components 160 such as a screen, keyboard and speakers and so forth. A user may view video content from the recording apparatus as a live stream received from the recording apparatus 10.
  • the video content may be stored at the replay apparatus 15 or the remote server 170 for playback.
  • Figure 3 is an example schematic block diagram of the controller 13.
  • the controller 13 comprises a processor 300 and storage device 310.
  • the storage device 310 comprises nonvolatile memory 320 on which computer-readable code 320A is stored.
  • the non-volatile memory is provided with a Bluetooth module 320B.
  • the computer-readable code 320A allows the particular functionality of the controller 13 in embodiments of the present invention to be stored and executed.
  • the Bluetooth module 320B contains the code required so that received Bluetooth messages may be processed in accordance with the Bluetooth standard.
  • the storage device 310 also comprises a volatile memory 330.
  • the processor 300 is arranged to process azimuthal and elevational angle-of-arrival information.
  • the processor 300 may apply directional data to the video feed obtained from one or more cameras.
  • the processor 300 may output the directional data and the video feed obtained from one or more cameras to a remote server.
  • the controller 13 comprises an azimuthal antenna array 340 connected to an RF switch 341, a transceiver 342 and an azimuthal angle-of-arrival (AoA) estimation module 343.
  • the controller 13 also comprises an elevational antenna array 350 connected to an RF switch 351, a transceiver 352 and an elevational angle-of-arrival (AoA) estimation module 353.
  • the estimation of the Angle of Arrival is based on a measured time difference of signal copies (transmitted by the multiple physically separated antennas) in the receiver.
  • the time difference is due to variable propagation channel lengths, and the practical estimation is typically based on secondary effects to the signal, such as the resulting phase difference of the signal copies.
  • Angle of Arrival positioning has been shown to provide positioning accuracy of tens of centimetres, or a direction estimate of about 2 degrees; a minimal numerical sketch of the underlying phase-difference relation is given below.
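  • As an illustrative aside (not part of the claimed method), the phase-difference relation underlying Angle of Arrival estimation can be sketched as follows. The two-element geometry, element spacing and wavelength values are assumptions chosen for the example; a practical estimator such as the one in the controller 13 would use more antenna elements and more robust processing.

        import math

        def angle_of_arrival(phase_diff_rad, element_spacing_m, wavelength_m):
            # For a plane wave hitting two antenna elements spaced d apart,
            # the measured phase difference obeys:
            #     delta_phi = 2 * pi * d * sin(theta) / lambda
            # so the angle of arrival (from broadside) can be recovered as below.
            sin_theta = phase_diff_rad * wavelength_m / (2 * math.pi * element_spacing_m)
            sin_theta = max(-1.0, min(1.0, sin_theta))  # guard against measurement noise
            return math.asin(sin_theta)

        # Example: BLE at ~2.4 GHz (wavelength ~0.125 m), elements 0.05 m apart,
        # measured phase difference of 1.0 radian.
        print(math.degrees(angle_of_arrival(1.0, 0.05, 0.125)))
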
  • the link between the processor 300 and the antennas 340, 350 may be bidirectional so that the transceivers 342, 352 and antennas 340, 350 may also be used for RF transmission.
  • the controller 13 also comprises a clock 360 and timestamping capability.
  • the controller 13 may be configured to measure and record RSSI data of received packets.
  • the controller 13 may store reference values to allow the azimuthal and elevational angles to be monitored uniformly.
  • the storage device 310 may store information defining the bearing of zero degrees in azimuth and elevation. From these reference points, the area covered by a particular camera may be defined.
  • camera 11a may be defined as covering azimuthal angular range 0 degrees to 60 degrees and elevational angular range 0 degrees to 60 degrees; a sector lookup of this kind is sketched below.
  • the cameras 11 and controller 13 may form an integrated recording apparatus 10 wherein the bearing information is stored in the memory of the cameras 11 and/or controller 13.
  • the cameras 11 and/or controller 13 may be provided with a compass to determine direction and/or a gyroscope to determine orientation.
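  • The following minimal sketch illustrates how a measured direction could be mapped to the camera (or cameras) whose stored angular range covers it. The sector table is hypothetical, echoing the 0 to 60 degree example above; it is not specified by this disclosure.

        # Hypothetical sector table: camera id -> (azimuth range, elevation range), in degrees.
        CAMERA_SECTORS = {
            "11a": ((0, 60), (0, 60)),
            "11b": ((60, 120), (0, 60)),
            # ... remaining cameras of the array would be listed similarly.
        }

        def cameras_covering(azimuth_deg, elevation_deg, sectors=CAMERA_SECTORS):
            # Return the ids of cameras whose angular sector contains the given direction.
            hits = []
            for cam_id, ((az_lo, az_hi), (el_lo, el_hi)) in sectors.items():
                if az_lo <= azimuth_deg % 360 < az_hi and el_lo <= elevation_deg < el_hi:
                    hits.append(cam_id)
            return hits

        print(cameras_covering(37.0, 12.0))  # -> ['11a'] with the table above
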
  • FIG. 4 is a schematic block diagram of one of the cameras 11.
  • the camera 11 comprises a camera module 400.
  • the camera module 400 comprises video camera components that are known in the art including, for example a lens, a CCD array and an image processor.
  • Each camera 11 may be controlled by the controller 13.
  • the camera 11 receives instructions from the communication module 14 which, in turn, receives instructions wirelessly from the video-editing computer.
  • each camera 11 may be provided with an RF antenna and transceiver so that each controller receives instructions directly from and is controlled by the video-editing computer.
  • the camera 11 also comprises a processor 402 and storage device 403.
  • the storage device comprises non-volatile memory 404 and volatile memory 405.
  • the non-volatile memory is provided with computer-readable instructions 404A.
  • the camera may also be provided with a clock 405 so that a timeline may be applied to the recorded video content.
  • the clocks of each of the cameras may be synchronised by the controller 13 to ensure that consistent timekeeping is applied.
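  • A minimal sketch of such clock synchronisation is given below. The disclosure does not specify a synchronisation protocol; here it is simply assumed that the controller 13 periodically distributes its reference time and each camera applies the resulting offset to its local timestamps, so that the video timelines and the directional data share one time base.

        import time

        class CameraClock:
            def __init__(self):
                self.offset = 0.0

            def synchronise(self, controller_time_s):
                # Offset between the controller's reference clock and the local clock.
                self.offset = controller_time_s - time.monotonic()

            def now(self):
                # Timestamp on the shared timeline used for video and directional data.
                return time.monotonic() + self.offset
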
  • the camera 11 may also comprise a microphone 406 to capture audio content.
  • the cameras may comprise separate video and audio processors.
  • the video and audio processing capability may be combined in a single multimedia processor or, as shown in Figure 4, the processing functionality of the camera module 400 and microphone 406 may be performed by the processor 402.
  • FIG. 5 is a schematic block diagram of the positioning tag 20.
  • the positioning tag 20 comprises a transceiver 200 for transmitting wireless messages such as BLE advertisement messages.
  • the positioning tag 20 also comprises a processor 210 and a storage device 222.
  • the storage device comprises non-volatile memory 220 and volatile memory 221.
  • the non-volatile memory is provided with a Bluetooth module 220A and programming instructions 220B.
  • the programming instructions 220B allow the particular functionality of the positioning tags 20 in embodiments of the present invention to be stored and executed.
  • the Bluetooth module 220A contains the code required so that messages may be transmitted in accordance with the Bluetooth standard.
  • the positioning tag 20 may form a component of a mobile communication device such as a mobile phone, smart watch, electronic glasses etc.
  • the mobile communication device may comprise an input 230 allowing a user to input additional information to be included in the wireless messages.
  • a user may include their name.
  • where the positioning tag 20 is included as part of a smart watch or mobile phone, a user may record their heartbeat, which is then transmitted as a data field in the wireless message, such as in BLE advertisement messages; an illustrative payload layout is sketched below.
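  • Purely as an illustration of how such a wireless message might carry a tag identifier, a heart-rate value and a user-supplied name, the sketch below builds a BLE advertising payload. The AD-structure framing (length byte, AD type, data) follows the Bluetooth advertising data format; placing the tag id and heart rate inside a manufacturer-specific data field, and the test company id 0xFFFF, are assumptions made for this example only.

        import struct

        def build_adv_payload(tag_id, heart_rate_bpm, user_name=None, company_id=0xFFFF):
            payload = bytearray()
            # Manufacturer-specific data (AD type 0xFF): company id + tag id + heart rate.
            mfg = struct.pack("<HIB", company_id, tag_id, heart_rate_bpm)
            payload += bytes([len(mfg) + 1, 0xFF]) + mfg
            if user_name:
                name = user_name.encode("utf-8")
                # Complete local name (AD type 0x09), e.g. the user's own name.
                payload += bytes([len(name) + 1, 0x09]) + name
            return bytes(payload)

        print(build_adv_payload(tag_id=0x20, heart_rate_bpm=72, user_name="Alice").hex())
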
  • Figure 6 is a flow chart illustrating operations of the recording apparatus 10 as a video is recorded.
  • the controller 13 may optionally collect information from positioning devices 20 in the vicinity of the recording apparatus 10.
  • Positioning devices may send wireless messages containing sensor data advising the recording apparatus 10 of the presence of the positioning devices 20 in the vicinity.
  • the wireless messages may contain sensor information about the positioning devices 20.
  • Sensor information may include, for example, the identity of the user of the device and whether the device is attached to the user (e.g. as a smart watch, electronic glasses, etc.).
  • the collection of the sensor data via connection with the positioning devices 20 may also happen during the recording or after the recording.
  • there may be multiple radio transceivers, or a capability for multiple connections, in the controller 13 so that, for example, heart rate information of a competitor can be recorded while video is being taken.
  • the additional data may be saved into the same file as the directional data or into separate file(s).
  • An advantage of a separate file is faster search. Search may be based, for example, on searching records related with a particular identity only.
  • the instruction to commence recording may be inputted at a computer that is in wireless communication with the recording apparatus 10.
  • the recording apparatus 10 may comprise input capability such as a record button, touchscreen and so forth.
  • Video may be recorded by each of the cameras 11 for the entire sphere shown in Figure 1.
  • the video may be stored in a video file format such as .wmv, .mov or .mpeg or any other suitable format.
  • the audio data may be recorded within the same data file as the video or in separate files. Examples of audio file formats are .wav and .mp3 although any other suitable file format may be used.
  • the recording apparatus 10 receives a wireless message from a positioning tag such as the positioning tag 20 shown in Figure 1.
  • the wireless message may contain an identifier to identify the positioning tag that sent the packet.
  • the wireless message may also contain further fields such as text.
  • a message may contain heartbeat information obtained by the positioning tag 20.
  • the azimuthal and/or elevational Angles of Arrival are determined.
  • Direction estimation of the signal source from the received signal is performed by using multiple antenna elements.
  • the estimation of the azimuthal and elevational Angle of Arrival is based on measured time difference of signal copies (received by the multiple physically separated antenna elements 340, 350 shown in Figure 3) in the controller 13.
  • the time difference is due to variable propagation channel lengths, and the practical estimation is typically based on secondary effects to the signal, such as the resulting phase difference of the signal copies.
  • the positioning tag 20 transmits a wireless message and the controller 13 executes antenna switching during the reception of the packet.
  • the controller 13 scans for the wireless messages and executes amplitude and phase sampling during reception of the packets.
  • the controller 13 may then utilise the amplitude and phase samples, along with its own antenna array information, to estimate the AoA of the packet from the positioning tag 20.
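  • As a simplified, non-authoritative sketch of the amplitude and phase sampling mentioned above, the relative phase between two switched antenna elements can be derived from their IQ samples as follows. A real receiver also compensates for the carrier rotation between switched sampling instants, which is omitted here for brevity.

        import cmath

        def phase_difference_from_iq(iq_antenna_a, iq_antenna_b):
            # Averaging the conjugate product of paired IQ samples gives a
            # noise-tolerant estimate of the relative phase between the elements.
            acc = sum(b * a.conjugate() for a, b in zip(iq_antenna_a, iq_antenna_b))
            return cmath.phase(acc)  # radians, in (-pi, pi]

        # Synthetic samples whose phases differ by ~0.8 rad:
        a = [cmath.exp(1j * 0.1 * n) for n in range(8)]
        b = [s * cmath.exp(1j * 0.8) for s in a]
        print(phase_difference_from_iq(a, b))  # ~0.8; could feed angle_of_arrival() above
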
  • Operation 6.5 is an optional step.
  • the RSSI value of the received packet is determined and recorded.
  • the received packet is time stamped.
  • the data obtained at operations 6.4, 6.5 and 6.6 are stored.
  • the data may be stored as part of the video file. Alternatively, the data may be stored in a separate file. Storing the data separately is advantageous since it is not reliant on video formats that support adding the data to video file metadata. Table 1 shows an example of the contents of a directional data file.
  • the example directional data comprises an identifier of the transmitter of the detected directional signals (i.e. the positioning tag 20), the measured azimuthal and elevational angles and RSSI values. Each angle and corresponding RSSI value is associated with a time stamp.
  • the time stamp applied to the received directional data packet corresponds to the video recording so that the observed tag can be matched with the video stream.
  • Table 1 illustrates embodiments where wireless messages received from tags 20 positioned anywhere within the spherical space surrounding the recording apparatus 10 (i.e. from 0 to 360 degrees in both azimuth and elevation) are recorded in a single file; a sketch of appending such records is given below.
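  • As an illustrative sketch, one plausible way of appending a detection record with the fields listed above (tag identifier, azimuth, elevation, RSSI, time stamp) to a separate, lightweight directional data file is shown below; the CSV layout and file name are assumptions made for the example.

        import csv

        FIELDS = ["timestamp_s", "tag_id", "azimuth_deg", "elevation_deg", "rssi_dbm"]

        def append_directional_record(path, timestamp_s, tag_id,
                                      azimuth_deg, elevation_deg, rssi_dbm):
            # The timestamp is a time value on the video recording's timeline so
            # the detection event can later be matched with the video stream.
            with open(path, "a", newline="") as f:
                writer = csv.DictWriter(f, fieldnames=FIELDS)
                if f.tell() == 0:  # write the header once, when the file is first created
                    writer.writeheader()
                writer.writerow({"timestamp_s": timestamp_s, "tag_id": tag_id,
                                 "azimuth_deg": azimuth_deg,
                                 "elevation_deg": elevation_deg,
                                 "rssi_dbm": rssi_dbm})

        append_directional_record("directional_data.csv", 12.48, "tag-20", 37.0, 12.0, -63)
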
  • a single controller 13 is used in connection with each camera 11.
  • each camera 11 may be connected to a respective controller 13 and a file containing data relating to tag detection events in the sector recorded by the camera may be maintained at each controller.
  • a single file containing data relating to tag detection may be compiled by a central processor (such as the video editing computer) based on files of tag data relating to each sector of the spherical space surrounding the recording apparatus 10.
  • the directional information from the controller and the camera information are recorded so that transmitters can be placed into the combined camera recording (360 degree recording).
  • the controller may collect data from all or a subset of the received packets.
  • the link between the portions of the video/audio data and the directional data may be based on setting a relationship between the directional radio sphere and the video/audio recording sphere to match.
  • Each camera records a certain part of the video sphere - the parts of sphere the cameras cover may be overlapping. In various embodiments, the cameras are arranged so there are no gaps in coverage. The part of the sphere that a camera covers is matched with certain part of the radio/directional sphere.
  • when the desired radio transmitter is detected at certain azimuth and elevation angles, the corresponding camera or cameras can be switched on, or their focus can be adjusted, according to the radio/directional detection.
  • Figure 7 shows the operations performed when a user wishes to replay and edit video content.
  • the video content may be live-streamed on a screen of the recording apparatus 10, in which case the following steps shown in Figure 7 are performed at the recording apparatus 10.
  • the video content may instead be live-streamed on a screen of the replay apparatus 15, in which case the following steps shown in Figure 7 are performed at the replay apparatus 15.
  • the content may be stored locally at the replay apparatus 15, at the recording apparatus 10 or at the remote server 170 for playback subsequent to the recording of the video content.
  • the selected video content may be played back on the replay apparatus 15 or at the recording device 10.
  • the recording apparatus 10 has captured video content across a spherical recording area.
  • the resulting data size of a video stream containing video content from the array of cameras shown in Figure 1 can be very large.
  • a user may choose to watch video based on a direction of the particular tag.
  • the amount of video data that needs to be handled can be reduced.
  • a user selects one or more positioning tags 20 that he or she wants to track during playback of the video content. This may be done in a number of ways. For example, the user may be presented with a user interface in which a list of tags is displayed. The tags available for selection may be obtained from the directional data file 156, which contains the tag identifiers; these are displayed to the user, who then selects at least one tag to track.
  • the replay apparatus 15 or recording apparatus 10 searches the directional data file 156 for the directional data and time stamp data relating to the selected positioning tags 20.
  • the directional data and time stamp data relating to the selected positioning tags 20 are retrieved.
  • the replay apparatus 15 compares the directional data of the selected tag 20 with directional information regarding each of the cameras 11.
  • one or more cameras are identified that have recorded video content from a section of the spherical recording area that corresponds to the location of the selected tag.
  • the part of the video content captured by the identified one or more cameras is selected, and the video content from those one or more cameras 11 is retrieved.
  • the timestamp data may be analysed and compared to the timeline of video content of a particular camera so that video content is only retrieved for the time periods in which the positioning tag is in the area recorded by a particular camera.
  • the parts of the content selected at step 7.5 may then be displayed to a user.
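  • The selection flow described above (culminating in step 7.5) can be sketched as follows: the directional data file is filtered by the chosen tag identifier, each detection is mapped to the camera(s) covering that direction, and consecutive detections are merged into per-camera time intervals so that only those parts of the video need to be retrieved. The CSV field names and the sector-lookup callable reuse the assumptions introduced in the earlier sketches.

        import csv

        def segments_for_tag(directional_csv, tag_id, sector_lookup, gap_s=1.0):
            # sector_lookup(azimuth_deg, elevation_deg) -> iterable of camera ids,
            # e.g. the cameras_covering() sketch given earlier.
            per_camera = {}
            with open(directional_csv, newline="") as f:
                for row in csv.DictReader(f):
                    if row["tag_id"] != tag_id:
                        continue
                    t = float(row["timestamp_s"])
                    for cam in sector_lookup(float(row["azimuth_deg"]),
                                             float(row["elevation_deg"])):
                        spans = per_camera.setdefault(cam, [])
                        if spans and t - spans[-1][1] <= gap_s:
                            spans[-1][1] = t      # extend the current interval
                        else:
                            spans.append([t, t])  # start a new interval
            return per_camera  # e.g. {'11a': [[12.0, 17.5], [40.2, 44.0]]}
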
  • a user may select more than one positioning tag to be displayed. If it is determined that the user selected positioning tags that are covered by separate cameras, the tags may be displayed simultaneously in a split screen format.
  • For example, ice hockey players are wearing wearable positioning tags. A user selects two ice hockey players to be viewed. If it is determined that the two players are not covered by the same camera then the two players may be displayed in a split-screen format.
  • video content data may also be obtained from cameras adjacent to the camera covering the area in which the selected tags are located. This is useful where a tag is highly mobile and may move quickly from one section of the video sphere to an adjacent section.
  • video content data may be retrieved only from the cameras that are covering the areas in which the tag is located over a period of time. As such, video content data from cameras that do not cover areas in which the tag is located does not need to be accessed.
  • Embodiments of the invention allow the video/audio and radio-direction records to be used to enable viewing or downloading of only a portion of the recorded material.
  • the radio-direction information may be based on detection of a transmitter that is attached to an object (e.g. person, animal or device). Therefore, a video/audio record focus area can be chosen based on choosing the desired radio transmitter identity and other data received from the positioning tags 20.
  • this object (and its near surroundings) can be streamed at high quality by, for example, increasing the bit rate for cameras that are covering the tag.
  • Other tags or directions can be streamed with lower quality to save bandwidth and thus improve streaming capability.
  • video streams from cameras not showing the selected tags may not be downloaded or streamed at all, further saving bandwidth.
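  • One simple, hypothetical way of expressing this bandwidth policy is sketched below: cameras covering a selected tag are streamed at a high bit rate, the remaining cameras at a reduced bit rate or not at all. The bit-rate figures and parameter names are illustrative only.

        def plan_stream_bitrates(all_cameras, cameras_with_selected_tags,
                                 high_kbps=20000, low_kbps=2000, stream_others=False):
            plan = {}
            for cam in all_cameras:
                if cam in cameras_with_selected_tags:
                    plan[cam] = high_kbps        # tag (and surroundings) in high quality
                elif stream_others:
                    plan[cam] = low_kbps         # other directions at lower quality
                else:
                    plan[cam] = 0                # feed not streamed/downloaded at all
            return plan

        print(plan_stream_bitrates(["11a", "11b", "11c"], {"11b"}))
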
  • a video record of a particular object at a particular time can be searched and viewed efficiently using the identifier and the directional information related with the object.
  • the searching speed may be increased by using a separate, light-weight file to store the records of the directional information.
  • the directional information may be stored as part of the video file.
  • FIG. 9 shows a recording apparatus 90 according to an alternative embodiment.
  • the recording apparatus 90 comprises a circular array of recording devices.
  • the operation of the recording apparatus 90 is similar to that of the recording apparatus 10.
  • the circular array is arranged in a single plane so that each of the cameras 91 is arranged to capture a section of the three-dimensional space surrounding the camera array 90.
  • An example of a camera array capable of producing video recordings around 360 degrees is the GoPro 360 camera array.
  • Such an arrangement is advantageous in scenarios where the region to be recorded lies within the recording area of a circular array of cameras.
  • single-plane arrays may cover a partial circle, for example a semicircle.
  • Table 2 shows an example file that may be compiled during detection of positioning tags.
  • the controller 92 comprised within the recording apparatus 90 need only perform angle of arrival calculations to determine an azimuthal angle. Therefore, the processing required by the controller is lower than in the case where the elevational angle is also determined.
  • FIG. 10 shows a recording apparatus 1000 according to yet another alternative embodiment.
  • the recording apparatus 1000 comprises an array of camera pairs 1001.
  • Each camera pair 1001 comprises a first camera 1001a and a second camera 1002b.
  • the first camera 1001a is configured to capture a left-eye image
  • the second camera 1002b is configured to capture a right-eye image.
  • the recording apparatus 1000 comprises a controller to combine the left-eye image with the right-eye image to form a stereoscopic image.
  • the array of camera pairs may be spherical or circular. For a spherical array both elevational and azimuthal data may be recorded. For a circular array only azimuthal data may be recorded.
  • the controller 1002 may store reference values to allow the azimuthal angle (and, in the case of a spherical array, elevational angle) to be monitored uniformly.
  • the control module may store information defining the bearing of zero degrees in azimuth and elevation. From these reference points, the area covered by a particular camera pair may be defined. For example, a first camera pair may be defined as covering azimuthal angular range 0 degrees to 60 degrees and elevational angular range 0 degrees to 60 degrees.
  • the camera pairs and controller 1002 may form an integrated recording apparatus wherein the bearing information is stored in the memory of the cameras and/or controller 1002.
  • the cameras and/or controller 1002 may be provided with a compass to determine direction and/or a gyroscope to determine orientation.
  • Advantages of various embodiments include the ability to manage feeds having high resolutions.
  • cameras in the arrays described above may have a resolution of up to approximately 6K.
  • When editing the feeds from the different cameras it may be necessary to switch between several high bandwidth feeds. This is especially true in embodiments having a stereoscopic camera array involving pairs of cameras. Switching between a large number of feeds at a replay apparatus 15 becomes very demanding in terms of bandwidth.
  • if feeds are stored at a remote server and accessed remotely by the replay apparatus 15 over a wireless connection, then switching between a large number of feeds may become difficult in terms of bandwidth management and cost.
  • Embodiments provide an advantage that the number of feeds that need to be retrieved can be kept to a minimum.
  • Computer readable instructions, software and operating systems may be pre-programmed into the apparatuses 11, 13, 14, 15, 20, 92, 1002. Alternatively, the computer readable instructions, software and operating systems may arrive at the apparatuses 11, 13, 14, 15, 20, 92, 1002 via an electromagnetic carrier signal or may be copied from a physical entity 800 (see Figure 8) such as a computer program product, a memory device or a record medium such as a CD-ROM or DVD.
  • the computer readable instructions, software and operating systems may provide the logic and routines that enable the functionality described above to be performed.
  • the term 'memory' when used in this specification is intended to relate primarily to memory comprising both non-volatile memory and volatile memory unless the context implies otherwise, although the term may also cover one or more volatile memories only, one or more non-volatile memories only, or one or more volatile memories and one or more non-volatile memories.
  • Examples of volatile memory include RAM, DRAM, SDRAM, etc.
  • Examples of non-volatile memory include ROM, PROM, EEPROM, flash memory, optical storage, magnetic storage, etc.
  • Embodiments of the present disclosure may be implemented in software, hardware, application logic or a combination of software, hardware and application logic.
  • the software, application logic and/or hardware may reside on memory, or any computer media.
  • the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media.
  • a "computer-readable medium" may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
  • a computer-readable medium may comprise a computer-readable storage medium that may be any tangible media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer as defined previously.
  • the computer program may be implemented in a computer program product comprising a tangible computer-readable medium bearing computer program code embodied therein which can be used with the processor for the implementation of the functions described above.
  • References to a "processor" or "processing circuit" etc. should be understood to encompass not only computers having differing architectures such as single/multi processor architectures and sequencers/parallel architectures, but also specialised circuits such as field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), signal processing devices and other devices.
  • References to computer program, instructions, code etc. should be understood to express software for a programmable processor, or firmware such as the programmable content of a hardware device, whether as instructions for a processor or as configured or configuration settings for a fixed function device, gate array, programmable logic device, etc.
  • Such "computer-readable storage medium” may mean a non-transitory computer-readable storage medium which may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. It should be understood, however, that "computer-readable storage medium” and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media.
  • Disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
  • instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry.
  • accordingly, the term "processor" may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein.
  • the functionality described herein may be provided within dedicated hardware and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements.
  • the different steps discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above- described steps may be optional or may be combined.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Toxicology (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Human Computer Interaction (AREA)
  • Cardiology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

A method and apparatus are disclosed for recording, at a recording apparatus, video content obtained from an array of cameras, wherein each camera records video content from a section of a recording area; receiving a wireless message from a tag situated in the recording area; applying a time stamp to the received wireless message; determining directional information of the tag in the recording area based on analysis of the wireless message; and causing the directional information and the timestamp to be stored.

Description

Video recording method and apparatus.
Field
The specification relates to a video recording method and apparatus.
Background
In the field of audio/video recording and editing it is often necessary to handle files that are relatively large in terms of data size. A particular issue arises where audio/video content is obtained from an array of recording devices leading to even greater quantities of data. This brings new challenges in relation to managing the large quantities of data in a reliable, efficient and user-friendly manner.
Summary
In a first aspect, this specification describes a method comprising capturing, at a video capture apparatus, video content obtained from an array of cameras, wherein each camera is arranged to capture video content from a section of a recording area; receiving a wireless message from a tag situated in the recording area; applying a time stamp to the received wireless message; determining directional information of the tag in the recording area based on analysis of the wireless message; and causing the directional information and the timestamp to be stored.
The directional information may comprise angle of arrival information of the received wireless message. The angle of arrival information may comprise an azimuthal angle.
The angle of arrival information may comprise an elevational angle.
The directional information and the timestamp may be stored at the video capture apparatus.
The directional information and the timestamp may be stored at a remote server.
The camera array may be a spherical array arranged to capture video content from a spherical recording area. The wireless message may contain additional information input by a user of the tag.
The wireless message may contain a tag identifier identifying the tag from which the wireless message is received.
The timestamp may correspond to a time value of the video stream.
The wireless message may be a BLE advertisement packet. The directional information and timestamp information may be stored in a file separate from the recorded video content.
The video content captured by each respective camera may be recorded as part of a composite data file or in an individual file.
The array of cameras may comprise a plurality of stereoscopic camera pairs.
In a second aspect, this specification describes a computer program comprising instructions that, when executed by a computing apparatus, cause the computing apparatus to perform the method of the first aspect.
In a third aspect, this specification describes an apparatus comprising at least one processor; at least one memory having computer-readable instructions stored thereon, the computer-readable instructions when executed by the at least one processor causing the apparatus at least to capture video content obtained from an array of cameras, wherein each camera is arranged to capture video content from a section of a recording area;
receive a wireless message from a tag situated in the recording area; apply a time stamp to the received wireless message; determine directional information of the tag in the recording area based on analysis of the wireless message; and cause the directional information and the timestamp to be stored.
In a fourth aspect, this specification describes a computer-readable medium having computer-readable code stored thereon, the computer-readable code, when executed by at least one processor, causing performance of capturing, at a video capture apparatus, video content obtained from an array of cameras, wherein each camera is arranged to capture video content from a section of a recording area; receiving a wireless message from a tag situated in the recording area; applying a time stamp to the received wireless message; determining directional information of the tag in the recording area based on analysis of the wireless message; and causing the directional information and the timestamp to be stored. In a fifth aspect, this specification describes an apparatus comprising means for capturing, at a video capture apparatus, video content obtained from an array of cameras, wherein each camera is arranged to capture video content from a section of a recording area;
means for receiving a wireless message from a tag situated in the recording area; means for applying a time stamp to the received wireless message; means for determining directional information of the tag in the recording area based on analysis of the wireless message; and means for causing the directional information and the timestamp to be stored.
Brief description of the drawings
For a more complete understanding of the methods, apparatuses and computer-readable instructions described herein, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:
Figure 1 illustrates a recording environment;
Figure 2 is a schematic diagram illustrating a recording apparatus and an editing/replay apparatus;
Figure 3 is a schematic diagram of a controller;
Figure 4 is a schematic diagram of a camera;
Figure 5 is a schematic diagram of a mobile tag; Figure 6 is a flow chart showing steps taken at a recording apparatus;
Figure 7 is a flow chart showing steps taken at a replay apparatus; Figure 8 shows a storage medium;
Figure 9 shows a camera array according to an alternative embodiment; and Figure 10 shows a camera array according to an alternative embodiment.
Detailed description
Embodiments of the invention provide a system for capturing video and/or audio across an array of recording devices as well as capturing directional data relating to positioning tags that are within the field of view of the recording devices. The directional data may be used during at least one of a subsequent video processing/editing stage so that multiple streams captured by the array of recording devices can be handled in an efficient way. Furthermore, when a video is subsequently viewed by a viewer, certain views may be highlighted as part of the video based on the relative location measured from the positioning tag's wireless transmissions.
Embodiments of the invention involve recording the relative locations of radio-detected objects and associating the locations with an area captured in the video and/or audio data. A record of the directional radio information may be added to a video and/or audio recording file. Alternatively, a separate file may be recorded containing the directional radio information and time-stamped so that it matches with the simultaneously recorded video and/or audio files.
Figure 1 shows an example recording environment 1. A recording apparatus 10 comprises a spherical array of video cameras 11 and a chassis. While referred to as a recording apparatus, the apparatus is arranged to capture a live video stream as well as recording video content to be stored at a storage medium. In embodiments where the video content is stored, it can be stored at the recording apparatus itself, at a video editing computer or at a remote server. The video cameras 11 are arranged to provide video coverage across 360 degrees in terms of both elevation and azimuth, i.e. across an entire sphere, which may be termed a video sphere. It should be borne in mind that in alternative embodiments, an array may comprise cameras covering a hemispherical area or indeed only a section of a spherical area.
Each of the cameras 11 is arranged to capture a section of the three-dimensional space surrounding the camera array 10. The recording apparatus 10 shown in Figure 1 has six cameras 11a-f. Camera 11f is not shown in Figure 1 but is represented schematically in Figure 2. A positioning tag 20 (for example a Bluetooth Low Energy tag) is shown in Figure 1. In this example, the positioning tag 20 is part of a user's watch. In other non-limiting examples the positioning tag 20 may be attached to an animal's collar to help capture video for a wildlife documentary or the positioning tag 20 may be incorporated in a mobile phone or a key fob.
The recording apparatus 10 and positioning tags 20 may be configured to operate using any suitable type of wireless transmission/reception technology. Suitable types of technology include, but are not limited to Bluetooth Basic Rate / Enhanced Data Rate (BR/EDR) and Bluetooth Low Energy (BLE). Bluetooth Low Energy (BLE) is a relatively new wireless communication technology published by the Bluetooth SIG as a component of Bluetooth Core Specification Version 4.0. Other types of suitable technology include for example technologies based on IEEE 802.11 and IEEE 802.15.4. The use of BLE may be particularly useful due to its relatively low energy consumption and because most mobile phones and other portable electronic devices will be capable of communicating using BLE technology.
Signals/positioning packets transmitted by the positioning tags 20 may be according to the High Accuracy Indoor Positioning (HAIP) solution for example as described at http: //www.in-location-alliance.com.
Whilst embodiments are described herein using BLE messages and HAIP systems, alternative low-power radio technologies may be used, such as IEEE 802.15.4.

The recording apparatus 10 may be considered the origin of a spherical coordinate system. The positioning tag 20 may thus be defined by an azimuthal angle Θ in the x-y plane shown in Figure 1 and by an elevational angle φ with respect to the z axis. As will be explained in more detail below, the positioning tag 20 periodically transmits wireless messages which are received by the recording apparatus 10. The recording apparatus 10 then determines an azimuthal angle of arrival and an elevational angle of arrival for the received packet. The recording apparatus 10 may also determine a received signal strength indication (RSSI) value for the received packet. This information may then be stored in a suitable format.

Figure 2 is a schematic diagram of the recording apparatus 10 and editing and/or replay apparatus 15. The recording apparatus 10 comprises a controller 13. The controller 13 controls the recording apparatus 10. The controller 13 is configured to determine angle-of-arrival information from packets received from the tags 20.
In the example implementation shown in Figure 2, the controller 13 is mounted in the centre of the video/audio recording system. In this embodiment, there are six cameras, each having one lens, and a single controller 13. The output of the camera recording is saved into one or more files in raw or processed format stored by the controller 13. The output of each individual camera may be stored in a respective file stored by the controller 13. Alternatively, the output of the cameras may be combined into a single, composite file stored by the controller 13. Also, the directional data is saved into one or more files in a raw or processed format. The video and directional data may be stored locally at the recording apparatus 10 or separately, such as in a remote server 170, to be accessed remotely by the replay apparatus 15. Alternatively the recorded media and directional data may be uploaded to the replay apparatus 15 itself.
The recording apparatus 10 may comprise a communication module 14. The communication module 14 comprises an RF antenna and RF transceiver to allow wireless communication between the recording apparatus 10 and a remote server 170 or a computer 15 having a video-editing capability. The recording apparatus 10 may be configured to communicate via a wireless network such as Wi-Fi. Alternatively, the recording apparatus 10 may have a wired link (not shown) to a computer having a video-editing capability.
The recording apparatus 10 may be provided with a user input/output 12. The user input/output 12 may comprise a screen and keyboard which may be integrated into a touchscreen. The user input/output 12 is used to allow the user to control the operation of one or more of the cameras 11 and the playback functionality of the recording apparatus 10. The user input/output 12 allows the user to control playback from one or more selected cameras. As described below with reference to Figure 7, a user may select one or more tags 20 from a user interface displayed on the screen of the apparatus 10. Selection of parts of the video content relevant to the tag location may then be performed at the apparatus 10.
The screen may be used for viewing live footage from one or more cameras. A user may select one or more tags 20 and view live footage from the cameras that are relevant to the selected tags. The selected video content may be stored at the apparatus 10, the replay apparatus 15 or the remote server 170.

The editing and/or replay apparatus 15 may be a computer comprising a processor 150 and a storage device 151. The storage device 151 comprises a volatile memory 152 and a non-volatile memory 153. The non-volatile memory 153 may have an operating system 154 and video editing software 155 stored therein. The non-volatile memory 153 may also store a directional data file 156 in which the directional data received from the controller 13 is stored. A video file 157 may also be stored containing video stream data received from the controller 13. Alternatively, the video file may be stored at the remote server 170. The video stream data may be stored as separate files, whereby each video file contains the output of a respective camera. Alternatively, the directional data may be contained within the video file(s). The editing and/or replay apparatus 15 further comprises an RF transceiver 158 and an RF antenna 159 to enable wireless communication with the recording apparatus 10 and the server 170. The replay apparatus 15 may be a computer having input and output components 160 such as a screen, keyboard, speakers and so forth. A user may view video content from the recording apparatus as a live stream received from the recording apparatus 10. Alternatively, the video content may be stored at the replay apparatus 15 or the remote server 170 for playback.

Figure 3 is an example schematic block diagram of the controller 13. The controller 13 comprises a processor 300 and storage device 310. The storage device 310 comprises non-volatile memory 320 on which computer-readable code 320A is stored. The non-volatile memory is provided with a Bluetooth module 320B. The computer-readable code 320A allows the particular functionality of the controller 13 in embodiments of the present invention to be stored and executed. The Bluetooth module 320B contains the code required so that received Bluetooth messages may be processed in accordance with the Bluetooth standard. The storage device 310 also comprises a volatile memory 330.
The processor 300 is arranged to process azimuthal and elevational angle-of-arrival information. The processor 300 may apply directional data to the video feed obtained from one or more cameras. Alternatively, the processor 300 may output the directional data and the video feed obtained from one or more cameras to a remote server.
The controller 13 comprises an azimuthal antenna array 340 connected to an RF switch 341, a transceiver 342 and an azimuthal angle-of-arrival (AoA) estimation module 343. The controller 13 also comprises an elevational antenna array 350 connected to an RF switch 351, a transceiver 352 and an elevational angle-of-arrival (AoA) estimation module 353.
The estimation of the Angle of Arrival is based on a measured time difference between signal copies (received by the multiple physically separated antennas) in the receiver. The time difference is due to variable propagation channel lengths, and the practical estimation is typically based on secondary effects on the signal, such as the resulting phase difference of the signal copies. Angle of Arrival positioning has been shown to provide positioning accuracy of tens of centimetres, or a direction estimate of approximately 2 degrees.
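As a rough illustration of this relationship (not taken from the patent itself), the sketch below, assuming a two-element receiver with a known element spacing and a hypothetical helper name, converts a measured phase difference into an angle of arrival using the standard plane-wave relation θ = arcsin(Δφ·λ/(2π·d)).

```python
import math

def angle_of_arrival(phase_diff_rad: float, antenna_spacing_m: float,
                     frequency_hz: float = 2.402e9) -> float:
    """Estimate the angle of arrival (degrees from broadside) of a plane wave
    from the phase difference measured between two antenna elements.
    Illustrative sketch only; the helper name and parameters are assumptions."""
    c = 299_792_458.0                      # speed of light (m/s)
    wavelength = c / frequency_hz          # roughly 12.5 cm at 2.4 GHz
    # Path-length difference implied by the measured phase difference.
    path_diff = phase_diff_rad * wavelength / (2 * math.pi)
    ratio = max(-1.0, min(1.0, path_diff / antenna_spacing_m))  # clamp against noise
    return math.degrees(math.asin(ratio))

# Example: 60 degrees of phase difference across a half-wavelength spacing
print(angle_of_arrival(math.radians(60), antenna_spacing_m=0.0624))  # ~19.5 degrees
```

In a practical receiver the same calculation would be repeated for the azimuthal and elevational antenna arrays to obtain both angles.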
The link between the processor 300 and the antennas 340, 350 may be bidirectional so that the transceivers 342, 352 and antennas 340, 350 may also be used for RF communication. The controller 13 also comprises a clock 360 and timestamping capability. The controller 13 may be configured to measure and record RSSI data of received packets.
The controller 13 may store reference values to allow the azimuthal and elevational angles to be monitored uniformly. The storage device 310 may store information defining the bearing of zero degrees in azimuth and elevation. From these reference points, the area covered by a particular camera may be defined. For example, camera 11a may be defined as covering an azimuthal angular range of 0 degrees to 60 degrees and an elevational angular range of 0 degrees to 60 degrees. The cameras 11 and controller 13 may form an integrated recording apparatus 10 wherein the bearing information is stored in the memory of the cameras 11 and/or controller 13. In alternative embodiments, the cameras 11 and/or controller 13 may be provided with a compass to determine direction and/or a gyroscope to determine orientation.
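To make such a mapping concrete, a minimal sketch is given below, assuming purely illustrative sector boundaries and camera identifiers; in practice the ranges would come from the stored reference values described above.

```python
# Hypothetical coverage map: camera identifier -> (azimuth range, elevation range)
# in degrees. The boundaries below are examples only.
COVERAGE = {
    "11a": ((0, 60), (0, 60)),
    "11b": ((60, 120), (0, 60)),
    "11c": ((120, 180), (0, 60)),
    # ... further entries would cover the remainder of the sphere
}

def cameras_covering(azimuth_deg, elevation_deg, coverage=COVERAGE):
    """Return the cameras whose stored angular ranges contain the bearing."""
    hits = []
    for cam, ((az_lo, az_hi), (el_lo, el_hi)) in coverage.items():
        if az_lo <= azimuth_deg % 360 < az_hi and el_lo <= elevation_deg < el_hi:
            hits.append(cam)
    return hits

print(cameras_covering(45.0, 30.0))   # -> ['11a'] with the example map
```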
Figure 4 is a schematic block diagram of one of the cameras 11. The camera 11 comprises a camera module 400. The camera module 400 comprises video camera components that are known in the art including, for example, a lens, a CCD array and an image processor. Each camera 11 may be controlled by the controller 13. In some embodiments, the camera 11 receives instructions from the communication module 14 which, in turn, receives instructions wirelessly from the video-editing computer. Alternatively, each camera 11 may be provided with an RF antenna and transceiver so that each camera receives instructions directly from and is controlled by the video-editing computer. The camera 11 also comprises a processor 402 and storage device 403. The storage device comprises non-volatile memory 404 and volatile memory 405. The non-volatile memory is provided with computer-readable instructions 404A. The camera may also be provided with a clock 405 so that a timeline may be applied to the recorded video content. The clocks of each of the cameras may be synchronised by the controller 13 to ensure that consistent timekeeping is applied. The camera 11 may also comprise a microphone 406 to capture audio content. The cameras may comprise separate video and audio processors.
Alternatively, the video and audio processing capability may be combined in a single multimedia processor or, as shown in Figure 4, the processing functionality of the camera module 400 and microphone 406 may be performed by the processor 402.
Figure 5 is a schematic block diagram of the positioning tag 20. The positioning tag 20 comprises a transceiver 200 for transmitting wireless messages, such as BLE advertisement messages, and an antenna 201. The positioning tag 20 also comprises a processor 210 and a storage device 222. The storage device comprises non-volatile memory 220 and volatile memory 221. The non-volatile memory is provided with a Bluetooth module 220A and programming instructions 220B. The programming instructions 220B allow the particular functionality of the positioning tags 20 in embodiments of the present invention to be stored and executed. The Bluetooth module 220A contains the code required so that messages may be transmitted in accordance with the Bluetooth standard.
In some embodiments the positioning tag 20 may form a component of a mobile communication device such as a mobile phone, smart watch, electronic glasses etc. In this case, the mobile communication device may comprise an input 230 allowing a user to input additional information to be included in the wireless messages. For example, a user may include their name. In another example, where the positioning tag 20 is included as part of a smart watch or mobile phone, a user may record their heartbeat, which is then transmitted as a data field in the wireless messages, such as BLE advertisement messages.
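Purely as an illustration of how such extra fields could travel in an advertisement payload, the sketch below packs a local name plus a small manufacturer-specific field carrying a tag identifier and a heart-rate value; the field layout, placeholder company identifier and function name are assumptions rather than the format actually used by the described apparatus.

```python
import struct

def build_adv_payload(local_name: str, tag_id: int, heart_rate_bpm: int) -> bytes:
    """Assemble an illustrative BLE legacy advertising payload (<= 31 bytes)."""
    name = local_name.encode("utf-8")
    name_ad = bytes([len(name) + 1, 0x09]) + name          # Complete Local Name
    # Manufacturer Specific Data: placeholder company ID, tag ID, heart rate.
    mfg_data = struct.pack("<HIB", 0xFFFF, tag_id, heart_rate_bpm)
    mfg_ad = bytes([len(mfg_data) + 1, 0xFF]) + mfg_data
    payload = name_ad + mfg_ad
    assert len(payload) <= 31, "legacy advertising payload limit exceeded"
    return payload

print(build_adv_payload("Alice", tag_id=0x1234, heart_rate_bpm=72).hex())
```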
Figure 6 is a flow chart illustrating operations of the recording apparatus 10 as a video is recorded.
At operation 6.1, the controller 13 may optionally collect information from positioning devices 20 in the vicinity of the recording apparatus 10. Positioning devices may send wireless messages containing sensor data advising the recording apparatus 10 of the presence of the positioning devices 20 in the vicinity. The wireless messages may contain sensor information about the positioning devices 20. Sensor information may include, for example, the identity of the user of the device and whether the device is attached to the user (e.g. as a smart watch, electronic glasses etc). The collection of the sensor data via connection with the positioning devices 20 may also take place during or after the recording.
There may also be multiple radio transceivers, or the capability for multiple connections, in the controller 13 so that, for example, heart rate information of a competitor can be recorded alongside the video. The additional data may be saved into the same file as the directional data or into separate file(s). An advantage of a separate file is faster searching. A search may be based, for example, on searching records related to a particular identity only.
At operation 6.2, recording is commenced. The instruction to commence recording may be inputted at a computer that is in wireless communication with the recording apparatus 10. Alternatively, the recording apparatus 10 may comprise input capability such as a record button, touchscreen and so forth. Video may be recorded by each of the cameras 11 for the entire sphere shown in Figure 1. The video may be stored in a video file format such as .wmv, .mov or .mpeg or any other suitable format. The audio data may be recorded within the same data file as the video or in separate files. Examples of audio file formats are .wav and .mp3 although any other suitable file format may be used.
At operation 6.3, the recording apparatus 10 receives a wireless message from a positioning tag such as the positioning tag 20 shown in Figure 1. The wireless message may contain an identifier to identify the positioning tag that sent the packet. The wireless message may also contain further fields such as text. For example, a message may contain heartbeat information obtained by the positioning tag 20.
At operation 6.4, the azimuthal and/or elevational Angles of Arrival are determined.
Direction estimation of the signal source from the received signal is performed by using multiple antenna elements. The estimation of the azimuthal and elevational Angles of Arrival is based on the measured time difference of signal copies (received by the multiple physically separated antenna elements 340, 350 shown in Figure 3) in the controller 13. The time difference is due to variable propagation channel lengths, and the practical estimation is typically based on secondary effects on the signal, such as the resulting phase difference of the signal copies. The positioning tag 20 transmits a wireless message and the controller 13 executes antenna switching during the reception of the packet. The controller 13 scans for the wireless messages and executes amplitude and phase sampling during reception of the packets. The controller 13 may then utilise the amplitude and phase samples, along with its own antenna array information, to estimate the AoA of the packet from the positioning tag 20.
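A simplified sketch of this sampling step is shown below, assuming the controller has captured short bursts of complex IQ samples while the RF switch dwelt on two different antenna elements; the sample values are invented for illustration.

```python
import cmath
import math

def mean_phase(iq_samples):
    """Average phase (radians) of a burst of complex IQ samples."""
    return cmath.phase(sum(iq_samples) / len(iq_samples))

# Hypothetical IQ bursts, one per antenna slot of the switching sequence.
slot_a = [cmath.exp(1j * p) for p in (0.10, 0.12, 0.11)]
slot_b = [cmath.exp(1j * p) for p in (1.13, 1.15, 1.14)]

delta = mean_phase(slot_b) - mean_phase(slot_a)
delta = (delta + math.pi) % (2 * math.pi) - math.pi     # wrap to [-pi, pi]
# Together with the known element spacing, the wrapped phase difference can be
# fed to an estimator such as the angle_of_arrival() sketch shown earlier.
print(delta)   # roughly 1.03 rad for these example samples
```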
Operation 6.5 is an optional step. The RSSI value of the received packet is determined and recorded. At operation 6.6, the received packet is time stamped. At operation 6.7, the data obtained at operations 6.4, 6.5 and 6.6 are stored. The data may be stored as part of the video file. Alternatively, the data may be stored in a separate file. Storing the data separately is advantageous since it does not rely on the video format supporting the addition of such data to the video file metadata. Table 1 shows an example of the contents of a directional data file.
The example directional data comprises an identifier of the transmitter of the detected directional signals (i.e. the positioning tag 20), the measured azimuthal and elevational angles and RSSI values. Each angle and corresponding RSSI value is associated with a time stamp. The time stamp applied to the received directional data packet corresponds to the video recording so that the observed tag can be matched with the video stream.
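Although Table 1 itself is not reproduced here, the record structure it describes can be sketched as below; the field names, file name and CSV layout are assumptions for illustration, chosen to keep the directional data in a lightweight file separate from the video.

```python
import csv
import time
from dataclasses import dataclass, asdict

@dataclass
class DirectionalRecord:
    """One row of a hypothetical directional data file (cf. Table 1)."""
    timestamp_s: float      # time stamp matching the video recording timeline
    tag_id: str             # identifier carried in the wireless message
    azimuth_deg: float      # azimuthal angle of arrival
    elevation_deg: float    # elevational angle of arrival
    rssi_dbm: int           # optional received signal strength

def append_record(path: str, record: DirectionalRecord) -> None:
    """Append one detection event to a lightweight CSV file."""
    row = asdict(record)
    with open(path, "a", newline="") as f:
        csv.DictWriter(f, fieldnames=list(row)).writerow(row)

append_record("directional_data.csv",
              DirectionalRecord(time.time(), "tag-20", 42.5, 18.0, -63))
```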
The file contents shown in Table 1 illustrate embodiments where wireless messages received from tags 20 positioned anywhere within the spherical space surrounding the recording apparatus 10 (i.e. from 0 to 360 degrees in both azimuth and elevation) are recorded in a single file. This may be the case where a single controller 13 is used in connection with all of the cameras 11. In other embodiments, each camera 11 may be connected to a respective controller 13 and a file containing data relating to tag detection events in the sector recorded by the camera may be maintained at each controller. Alternatively, a single file containing data relating to tag detection may be compiled by a central processor (such as the video editing computer) based on files of tag data relating to each sector of the spherical space surrounding the recording apparatus 10.
The directional information from the controller and the camera information are recorded so that transmitters can be placed into the combined camera recording (360 degree recording). The controller may collect data from all or a subset of the received packets.
The link between the portions of the video/audio data and the directional data may be based on setting a relationship between the directional radio sphere and the video/audio recording sphere so that they match. Each camera records a certain part of the video sphere; the parts of the sphere that the cameras cover may overlap. In various embodiments, the cameras are arranged so there are no gaps in coverage. The part of the sphere that a camera covers is matched with a certain part of the radio/directional sphere. When the desired radio transmitter is detected at certain azimuth and elevation angles, the corresponding camera or cameras can be switched on or their focus can be adjusted according to the radio/directional detection.
Figure 7 shows the operations performed when a user wishes to replay and edit video content. The video content may be live-streamed on a screen of the recording apparatus 10, in which case the following steps shown in Figure 7 are performed at the recording apparatus 10. Alternatively, the video content may be live-streamed on a screen of the replay apparatus 15, in which case the following steps shown in Figure 7 are performed at the replay apparatus 15.
In alternative embodiments, the content may be stored locally at the replay apparatus 15, at the recording apparatus 10 or at the remote server 170 for playback subsequent to the recording of the video content. The selected video content may be played back on the replay apparatus 15 or at the recording device 10.
In this example, the recording apparatus 10 has captured video content across a spherical recording area. The resulting data size of a video stream containing video content from the array of cameras shown in Figure 1 can be very large. During the replay and editing stages, a user may choose to watch video based on the direction of a particular tag. By matching the video content captured from the various cameras with the directional data relating to the tags of interest, the amount of video data that needs to be handled can be reduced.
At operation 7.1, a user selects one or more positioning tags 20 that he or she wants to track during playback of the video content. This may be done in a number of ways. For example, the user may be presented with a user interface in which a list of tags is displayed. The tags available for selection may be obtained from the directional data file 156, which contains the tag identifiers; these are displayed to the user, who then selects at least one tag to track.
At operation 7.2, the replay apparatus 15 or recording apparatus 10 searches the directional data file 156 for the directional data and time stamp data relating to the selected positioning tags 20. The directional data and time stamp data relating to the selected positioning tags 20 are retrieved.
At operation 7.3, the replay apparatus 15 compares the directional data of the selected tag 20 with directional information regarding each of the cameras 11. At operation 7.4, one or more cameras are identified that have recorded video content from a section of the spherical recording area that corresponds to the location of the selected tag.
At operation 7.5, part of the video content is selected containing the video content from the identified one or more cameras and the video content from the one or more cameras 11 is retrieved. The timestamp data may be analysed and compared to the timeline of video content of a particular camera so that video content is only retrieved for the time periods in which the positioning tag is in the area recorded by a particular camera.
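One possible way to carry out operations 7.2 to 7.5 is sketched below; it reuses the hypothetical record layout and coverage lookup from the earlier sketches and is not the patent's own implementation.

```python
from collections import defaultdict

def select_segments(records, selected_tags, cameras_covering, slack_s=1.0):
    """Return {camera_id: [(start_s, end_s), ...]} time windows to retrieve.

    `records` is an iterable of dicts shaped like the DirectionalRecord rows
    above; `cameras_covering(az, el)` maps a bearing to camera identifiers.
    """
    segments = defaultdict(list)
    for rec in sorted(records, key=lambda r: r["timestamp_s"]):
        if rec["tag_id"] not in selected_tags:      # operation 7.2: filter by tag
            continue
        hit_cams = cameras_covering(rec["azimuth_deg"], rec["elevation_deg"])
        for cam in hit_cams:                        # operations 7.3/7.4: map to camera
            start = rec["timestamp_s"] - slack_s
            end = rec["timestamp_s"] + slack_s
            windows = segments[cam]
            if windows and start <= windows[-1][1]:
                windows[-1] = (windows[-1][0], end) # merge overlapping windows
            else:
                windows.append((start, end))        # operation 7.5: new time window
    return dict(segments)
```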
The parts of the content selected at step 7.5 may then be displayed to a user. In some embodiments, a user may select more than one positioning tag to be displayed. If it is determined that the user-selected positioning tags are covered by separate cameras, the tags may be displayed simultaneously in a split-screen format. In one example, ice hockey players are wearing wearable positioning tags. A user selects two ice hockey players to be viewed. If it is determined that the two players are not covered by the same camera, then the two players may be displayed in a split-screen format.
As well as retrieving video content data from cameras covering the area in which the selected tags are located, video content data may also be obtained from cameras adjacent to the camera covering the area in which the selected tags are located. This is useful where a tag is highly mobile and may move quickly from one section of the video sphere to an adjacent section.
In scenarios where the positioning tag which is selected by the user moves between various sectors of the video sphere, video content data may be retrieved only from the cameras that are covering the areas in which the tag is located over a period of time. As such, video content data from cameras that do not cover areas in which the tag is located does not need to be accessed. Embodiments of the invention allow the video/audio and radio-direction records to be used to enable viewing or downloading of only a portion of the recorded material. The radio-direction information may be based on detection of a transmitter that is attached to an object (e.g. person, animal or device). Therefore, a video/audio record focus area can be chosen based on choosing the desired radio transmitter identity and other data received from the positioning tags 20. As the user decides which of the radio signal objects he or she wants to follow, this object (and its near surroundings) can be streamed at high quality by, for example, increasing the bit rate for cameras that are covering the tag. Other tags or directions can be streamed with lower quality to save bandwidth and thus improve streaming capability. Furthermore, video streams from cameras not showing the selected tags may not be downloaded or streamed at all, further saving bandwidth.
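As a very rough sketch of such a bandwidth split (the figures and function name are invented for illustration), cameras covering the selected tag could be assigned a high bit rate, adjacent cameras a reduced one, and the remaining feeds skipped entirely:

```python
def allocate_bitrates(all_cameras, cameras_with_tag, adjacent_cameras,
                      high_kbps=20000, low_kbps=2000):
    """Hypothetical per-camera bit-rate plan; 0 means the feed is not streamed."""
    plan = {}
    for cam in all_cameras:
        if cam in cameras_with_tag:
            plan[cam] = high_kbps
        elif cam in adjacent_cameras:
            plan[cam] = low_kbps
        else:
            plan[cam] = 0
    return plan

print(allocate_bitrates(["11a", "11b", "11c"], {"11a"}, {"11b"}))
```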
A video record of a particular object at a particular time can be searched and viewed efficiently using the identifier and the directional information related with the object. The searching speed may be increased by using a separate, light-weight file to store the records of the directional information. Alternatively, the directional information may be stored as part of the video file.
Figure 9 shows a recording apparatus 90 according to an alternative embodiment. The recording apparatus 90 comprises a circular array of recording devices. In general, the operation of the recording apparatus 90 is similar to that of the recording apparatus 10. However, the circular array is arranged in a single plane so that each of the cameras 91 is arranged to capture a section of the three-dimensional space surrounding the camera array 90. An example of a camera array capable of producing video recordings around 360 degrees is the GoPro 360 camera array. Such an arrangement is advantageous in scenarios where the region to be recorded lies within the recording area of a circular array of cameras. It should be borne in mind that single-plane arrays may cover a partial circle, for example a semicircle. Table 2 shows an example file that may be compiled during detection of positioning tags.
Since all of the cameras 91 are located in the same plane, there is no requirement to collect elevational angle information. As such, the file has a smaller data size. The controller 92 comprised within the recording apparatus 90 need only perform angle of arrival calculations to determine an azimuthal angle. Therefore, the processing required by the controller is lower than in the case where the elevational angle is also determined.
Figure 10 shows a recording apparatus 1000 according to yet another alternative embodiment. The recording apparatus 1000 comprises an array of camera pairs 1001. Each camera pair 1001 comprises a first camera 1001a and a second camera 1002b. The first camera 1001a is configured to capture a left-eye image and the second camera 1002b is configured to capture a right-eye image. The recording apparatus 1000 comprises a controller to combine the left-eye image with the right-eye image to form a stereoscopic image. It will be appreciated that the processing involved in forming stereoscopic videos is relatively intensive since images must be captured from both cameras of each pair 1001. Subsequent editing of stereoscopic videos is also relatively intensive in terms of processing. The array of camera pairs may be spherical or circular. For a spherical array both elevational and azimuthal data may be recorded. For a circular array only azimuthal data may be recorded.
In this embodiment, storage of directional data is performed in largely the same way as previously described. The controller 1002 may store reference values to allow the azimuthal angle (and, in the case of a spherical array, the elevational angle) to be monitored uniformly. The control module may store information defining the bearing of zero degrees in azimuth and elevation. From these reference points, the area covered by a particular camera pair may be defined. For example, a first camera pair may be defined as covering an azimuthal angular range of 0 degrees to 60 degrees and an elevational angular range of 0 degrees to 60 degrees. The camera pairs and controller 1002 may form an integrated recording apparatus wherein the bearing information is stored in the memory of the cameras and/or controller 1002. In alternative embodiments, the cameras and/or controller 1002 may be provided with a compass to determine direction and/or a gyroscope to determine orientation.
Advantages of various embodiments, especially where the camera array is arranged to record stereoscopic video content, include the ability to manage feeds having high resolutions. For example, cameras in the arrays described above may have a resolution of up to approximately 6K. When editing the feeds from the different cameras it may be necessary to switch between several high-bandwidth feeds. This is especially true in embodiments having a stereoscopic camera array involving pairs of cameras. Switching between a large number of feeds at a replay apparatus 15 becomes very demanding in terms of bandwidth. Furthermore, if feeds are stored at a remote server and accessed remotely by the replay apparatus 15 over a wireless connection then switching between a large number of feeds may become difficult in terms of bandwidth management and cost. Embodiments provide an advantage that the number of feeds that need to be retrieved can be kept to a minimum.
Computer readable instructions, software and operating systems may be pre-programmed into the apparatuses 11, 13, 14, 15, 20, 92, 1002. Alternatively, the computer readable instructions, software and operating systems may arrive at the apparatuses 11, 13, 14, 15, 20, 92, 1002 via an electromagnetic carrier signal or may be copied from a physical entity 800 (see Figure 8) such as a computer program product, a memory device or a record medium such as a CD-ROM or DVD. The computer readable instructions, software and operating systems may provide the logic and routines that enable the devices/apparatuses 11, 13, 14, 15, 20, 92, 1002 to perform the functionality described above.

The term 'memory' when used in this specification is intended to relate primarily to memory comprising both non-volatile memory and volatile memory unless the context implies otherwise, although the term may also cover one or more volatile memories only, one or more non-volatile memories only, or one or more volatile memories and one or more non-volatile memories. Examples of volatile memory include RAM, DRAM, SDRAM etc. Examples of non-volatile memory include ROM, PROM, EEPROM, flash memory, optical storage, magnetic storage, etc.
Embodiments of the present disclosure may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside on memory, or any computer media. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a "computer-readable medium" may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
A computer-readable medium may comprise a computer-readable storage medium that may be any tangible media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer as defined previously.
According to various embodiments of the previous aspect of the present disclosure, the computer program according to any of the above aspects may be implemented in a computer program product comprising a tangible computer-readable medium bearing computer program code embodied therein which can be used with the processor for the implementation of the functions described above.

Reference to "computer-readable storage medium", "computer program product", "tangibly embodied computer program" etc., or a "processor" or "processing circuit" etc. should be understood to encompass not only computers having differing architectures such as single/multi-processor architectures and sequencer/parallel architectures, but also specialised circuits such as field programmable gate arrays (FPGA), application specific circuits (ASIC), signal processing devices and other devices. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor, or firmware such as the programmable content of a hardware device, whether instructions for a processor or configuration settings for a fixed-function device, gate array, programmable logic device, etc.
By way of example, and not limitation, such "computer-readable storage medium" may mean a non-transitory computer-readable storage medium which may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. It should be understood, however, that "computer-readable storage medium" and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Combinations of the above should also be included within the scope of "computer- readable medium".
Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor," as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements.
If desired, the different steps discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above- described steps may be optional or may be combined.
Although various aspects of the present disclosure are set out in the independent claims, other aspects of the present disclosure comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.

Claims
1. A method comprising:
capturing, at a video capture apparatus, video content obtained from an array of cameras, wherein each camera is arranged to capture video content from a section of a recording area;
receiving a wireless message from a tag situated in the recording area;
applying a time stamp to the received wireless message;
determining directional information of the tag in the recording area based on analysis of the wireless message; and
causing the directional information and the timestamp to be stored.
2. The method of claim 1, wherein the directional information comprises angle of arrival information of the received wireless message.
3. The method of claim 2, wherein the angle of arrival information comprises an azimuthal angle.
4. The method of claim 2 or claim 3, wherein the angle of arrival information comprises an elevational angle.
5. The method of any preceding claim, wherein the directional information and the timestamp are stored at the video capture apparatus.
6. The method of any of claims 1-4, wherein the directional information and the timestamp are stored at a remote server.
7. The method of any preceding claim, wherein the camera array is a spherical array arranged to capture video content from a spherical recording area.
8. The method of any preceding claim, wherein the wireless message contains additional information input by a user of the tag.
9. The method of any preceding claim, wherein the wireless message contains a tag identifier identifying the tag from which the wireless message is received.
10. The method of any preceding claim, wherein the timestamp corresponds to a time value of the video stream.
11. The method of any preceding claim, wherein the wireless message is a BLE advertisement packet.
12. The method of any preceding claim, wherein the directional information and timestamp information are stored in a file separate from the captured video content.
13. The method of any preceding claim, wherein the video content captured by each respective camera is recorded as part of a composite data file or in an individual file.
14. The method of any preceding claim, wherein the array of cameras comprises a plurality of stereoscopic camera pairs.
15. A computer program comprising instructions that, when executed by a computing apparatus, cause the computing apparatus to perform the method of any preceding claim.
16. Apparatus comprising:
at least one processor;
at least one memory having computer-readable instructions stored thereon, the computer-readable instructions when executed by the at least one processor causing the apparatus at least to:
capture video content obtained from an array of cameras, wherein each camera is arranged to capture video content from a section of a recording area;
receive a wireless message from a tag situated in the recording area;
apply a time stamp to the received wireless message;
determine directional information of the tag in the recording area based on analysis of the wireless message; and
cause the directional information and the timestamp to be stored.
17. The apparatus of claim 16, wherein the directional information comprises angle of arrival information of the received wireless message.
18. The apparatus of claim 17, wherein the angle of arrival information comprises an azimuthal angle.
19. The apparatus of claim 17 or claim 18, wherein the angle of arrival information comprises an elevational angle.
20. The apparatus of any of claims 16-19, wherein the directional information and the timestamp are stored at the apparatus.
21. The apparatus of any of claims 16-19, wherein the directional information and the timestamp are stored at a remote server.
22. The apparatus of any of claims 16-21, wherein the camera array is a spherical array arranged to capture video content from a spherical recording area.
23. The apparatus of any of claims 16-22, wherein the wireless message contains additional information input by a user of the tag.
24. The apparatus of any of claims 16-23, wherein the wireless message contains a tag identifier identifying the tag from which the wireless message is received.
25. The apparatus of any of claims 16-24, wherein the timestamp corresponds to a time value of the video stream.
26. The apparatus of any of claims 16-25, wherein the wireless message is a BLE advertisement packet.
27. The apparatus of any of claims 16-26, wherein the directional information and timestamp information are stored in a file separate from the captured video content.
28. The apparatus of any of claims 16-27, wherein the video content captured by each respective camera is recorded as part of a composite data file or in an individual file.
29. The apparatus of any of claims 16-28, wherein the array of cameras comprises a plurality of stereoscopic camera pairs.
30. A computer-readable medium having computer-readable code stored thereon, the computer-readable code, when executed by at least one processor, causing performance of: capturing, at a video capture apparatus, video content obtained from an array of cameras, wherein each camera is arranged to capture video content from a section of a recording area;
receiving a wireless message from a tag situated in the recording area;
applying a time stamp to the received wireless message;
determining directional information of the tag in the recording area based on analysis of the wireless message; and
causing the directional information and the timestamp to be stored.
31. Apparatus comprising:
means for capturing, at a video capture apparatus, video content obtained from an array of cameras, wherein each camera is arranged to capture video content from a section of a recording area;
means for receiving a wireless message from a tag situated in the recording area;
means for applying a time stamp to the received wireless message;
means for determining directional information of the tag in the recording area based on analysis of the wireless message; and
means for causing the directional information and the timestamp to be stored.
EP15904688.7A 2015-09-23 2015-09-23 Video recording method and apparatus Active EP3353565B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/FI2015/050634 WO2017051064A1 (en) 2015-09-23 2015-09-23 Video recording method and apparatus

Publications (3)

Publication Number Publication Date
EP3353565A1 true EP3353565A1 (en) 2018-08-01
EP3353565A4 EP3353565A4 (en) 2019-05-08
EP3353565B1 EP3353565B1 (en) 2021-08-04

Family

ID=58386042

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15904688.7A Active EP3353565B1 (en) 2015-09-23 2015-09-23 Video recording method and apparatus

Country Status (4)

Country Link
US (1) US10681335B2 (en)
EP (1) EP3353565B1 (en)
CN (1) CN108450022A (en)
WO (1) WO2017051064A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6557410B2 (en) 2015-09-23 2019-08-07 ノキア テクノロジーズ オーユー Select video content
CN110557569A (en) * 2019-09-09 2019-12-10 郑州联睿电子科技有限公司 Intelligent monitoring system and method integrating ultra-wideband AoA positioning and video tracking
CN111301394A (en) * 2020-03-26 2020-06-19 东风柳州汽车有限公司 Automobile man-machine common driving mode accelerator control system and control method
KR102605070B1 (en) * 2020-07-06 2023-11-24 한국전자통신연구원 Apparatus for Learning Recognition Model, Apparatus for Analyzing Video and Apparatus for Providing Video Searching Service
CN113709542B (en) * 2020-10-09 2023-09-19 天翼数字生活科技有限公司 Method and system for playing interactive panoramic video

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6034716A (en) 1997-12-18 2000-03-07 Whiting; Joshua B. Panoramic digital camera system
US6337683B1 (en) 1998-05-13 2002-01-08 Imove Inc. Panoramic movies which simulate movement through multidimensional space
EP1297634A1 (en) 2000-06-09 2003-04-02 iMove Inc. Streaming panoramic video
US7839926B1 (en) 2000-11-17 2010-11-23 Metzger Raymond R Bandwidth management and control
US7327383B2 (en) * 2003-11-04 2008-02-05 Eastman Kodak Company Correlating captured images and timed 3D event data
WO2007036842A2 (en) 2005-09-30 2007-04-05 Koninklijke Philips Electronics N.V. Method and apparatus for capturing metadata for a content item
US9185361B2 (en) * 2008-07-29 2015-11-10 Gerald Curry Camera-based tracking and position determination for sporting events using event information and intelligence data extracted in real-time from position information
JP5227110B2 (en) * 2008-08-07 2013-07-03 株式会社トプコン Omnidirectional camera with GPS and spatial data collection device
CN102356371B (en) * 2009-03-16 2015-11-25 诺基亚公司 Data processing equipment and the user interface be associated and method
US20130176403A1 (en) 2011-08-16 2013-07-11 Kenneth Varga Heads up display (HUD) sensor system
US9031971B2 (en) 2010-07-23 2015-05-12 Qualcomm Incorporated Flexible data download models for augmented reality
US8702516B2 (en) * 2010-08-26 2014-04-22 Blast Motion Inc. Motion event recognition system and method
JP6282793B2 (en) 2011-11-08 2018-02-21 サターン ライセンシング エルエルシーSaturn Licensing LLC Transmission device, display control device, content transmission method, recording medium, and program
CA2798298C (en) 2011-12-09 2016-08-23 W-Ideas Network Inc. Systems and methods for video processing
US20130300832A1 (en) * 2012-05-14 2013-11-14 Sstatzz Oy System and method for automatic video filming and broadcasting of sports events
ITRM20120329A1 (en) * 2012-07-12 2012-10-11 Virtualmind Di Davide Angelelli 360 ° IMMERSIVE / SPHERICAL VIDEO CAMERA WITH 6-11 OPTICS 5-10 MEGAPIXEL WITH GPS GEOLOCALIZATION
NO336454B1 (en) * 2012-08-31 2015-08-24 Id Tag Technology Group As Device, system and method for identifying objects in a digital image, as well as transponder device
US20140098185A1 (en) * 2012-10-09 2014-04-10 Shahram Davari Interactive user selected video/audio views by real time stitching and selective delivery of multiple video/audio sources
US10721530B2 (en) 2013-07-29 2020-07-21 Koninklijke Kpn N.V. Providing tile video streams to a client
EP2860699A1 (en) * 2013-10-11 2015-04-15 Telefonaktiebolaget L M Ericsson (Publ) Technique for view synthesis
EP2879371B1 (en) * 2013-11-29 2016-12-21 Axis AB System for following an object marked by a tag device with a camera
WO2015107252A1 (en) * 2014-01-15 2015-07-23 Nokia Technologies Oy Determination of a location of a device
US10547825B2 (en) * 2014-09-22 2020-01-28 Samsung Electronics Company, Ltd. Transmission of three-dimensional video

Also Published As

Publication number Publication date
CN108450022A (en) 2018-08-24
EP3353565A4 (en) 2019-05-08
EP3353565B1 (en) 2021-08-04
US20180343440A1 (en) 2018-11-29
WO2017051064A1 (en) 2017-03-30
US10681335B2 (en) 2020-06-09

Similar Documents

Publication Publication Date Title
US10468066B2 (en) Video content selection
US10681335B2 (en) Video recording method and apparatus
US10186299B2 (en) Method and electronic device for generating multiple point of view video
US8773589B2 (en) Audio/video methods and systems
EP3306495A1 (en) Method and system for associating recorded videos with highlight and event tags to facilitate replay services
EP2865192B1 (en) Time-synchronizing a parallel feed of secondary content with primary media content
US10778905B2 (en) Surround video recording
US20180103197A1 (en) Automatic Generation of Video Using Location-Based Metadata Generated from Wireless Beacons
JP2013223235A (en) Radio communication device, memory device, radio communication system, radio communication method and program
US20160323483A1 (en) Automatically generating notes and annotating multimedia content specific to a video production
US20150139601A1 (en) Method, apparatus, and computer program product for automatic remix and summary creation using crowd-sourced intelligence
EP3018845A1 (en) Method of broadcast audio/video content play out handover and corresponding apparatus
WO2014064321A1 (en) Personalized media remix
EP3706360A1 (en) Location based authentication
JP6149400B2 (en) Information processing apparatus, information processing system, control method thereof, and program
US20180184180A1 (en) Media feed synchronisation
TWI535282B (en) Method and electronic device for generating multiple point of view video
US20190163820A1 (en) Provision of playlist information related to a played song
US11394931B2 (en) Multimedia capture and editing using wireless sensors
EP3332564B1 (en) Methods and apparatus for creating an individualized record of an event
GB2556922A (en) Methods and apparatuses relating to location data indicative of a location of a source of an audio component
WO2018222532A1 (en) Rfid based zoom lens tracking of objects of interest
WO2017081356A1 (en) Selecting a recording device or a content stream derived therefrom

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20180315

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
RIC1 Information provided on ipc code assigned before grant

Ipc: G03B 37/04 20060101ALI20170407BHEP

Ipc: H04N 13/02 20060101ALI20170407BHEP

Ipc: H04N 5/232 20060101ALI20170407BHEP

Ipc: H04N 21/2343 20110101ALI20170407BHEP

Ipc: H04N 5/268 20060101ALI20170407BHEP

Ipc: G01S 3/46 20060101AFI20170407BHEP

Ipc: G11B 27/031 20060101ALI20170407BHEP

Ipc: H04N 7/18 20060101ALI20170407BHEP

A4 Supplementary search report drawn up and despatched

Effective date: 20190408

RIC1 Information provided on ipc code assigned before grant

Ipc: H04N 5/247 20060101ALI20190402BHEP

Ipc: G11B 27/10 20060101ALI20190402BHEP

Ipc: A61B 5/00 20060101ALI20190402BHEP

Ipc: G11B 27/11 20060101ALI20190402BHEP

Ipc: H04N 7/18 20060101ALI20190402BHEP

Ipc: A63B 24/00 20060101ALI20190402BHEP

Ipc: H04N 21/2343 20110101ALI20190402BHEP

Ipc: G11B 27/031 20060101ALI20190402BHEP

Ipc: G01S 3/46 20060101AFI20190402BHEP

Ipc: H04N 13/243 20180101ALI20190402BHEP

Ipc: H04N 5/268 20060101ALI20190402BHEP

Ipc: G03B 37/04 20060101ALI20190402BHEP

Ipc: H04N 5/232 20060101ALI20190402BHEP

Ipc: G06K 7/10 20060101ALI20190402BHEP

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: NOKIA TECHNOLOGIES OY

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20200630

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20210309

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1417513

Country of ref document: AT

Kind code of ref document: T

Effective date: 20210815

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602015072083

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20210804

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1417513

Country of ref document: AT

Kind code of ref document: T

Effective date: 20210804

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211104

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211104

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211206

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211105

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602015072083

Country of ref document: DE

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20210930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20220506

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20211104

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210923

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210923

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210930

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20211004

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20211104

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20150923

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20230802

Year of fee payment: 9

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210804