GB2392334A - Pseudo video generated from a series of individual cameras - Google Patents

Pseudo video generated from a series of individual cameras

Info

Publication number
GB2392334A
GB2392334A (application GB0219710A)
Authority
GB
United Kingdom
Prior art keywords
images
cameras
capture
camera
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0219710A
Other versions
GB2392334B (en)
GB0219710D0 (en)
Inventor
John Naylor
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TIME SLICE CAMERAS Ltd
Original Assignee
TIME SLICE CAMERAS Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TIME SLICE CAMERAS Ltd filed Critical TIME SLICE CAMERAS Ltd
Priority to GB0520849A priority Critical patent/GB2416457B/en
Priority to GB0219710A priority patent/GB2392334B/en
Publication of GB0219710D0 publication Critical patent/GB0219710D0/en
Publication of GB2392334A publication Critical patent/GB2392334A/en
Application granted granted Critical
Publication of GB2392334B publication Critical patent/GB2392334B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2625Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect
    • H04N5/2627Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect for providing spin image effect, 3D stop motion effect or temporal freeze effect

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

A series of digital cameras capture either single images or a continuous video stream which is then stored in ring buffers. At this stage the captured images may be processed either for correction, such as colour correction, offsetting or stabilisation, or for conversion, such as colour space conversion or compression. During the processing stage blur may be added to images so that they more realistically simulate the movement of a camera at high speed. Each capture module can output a sequence of images as pseudo video to show either motion with time or different views of the same instant of time.

Description

IMAGE CAPTURE SYSTEM ENABLING SPECIAL EFFECTS
The invention relates to image capture systems, particularly in the field
of special effects.
As a field, special effects can be classified in several ways. One important property of a special effect is whether it can be used "live". The slow motion action replay is a good example of a live special effect because much of its value derives from the immediacy with which an event, such as a goal being scored during a soccer match, can be re-examined via that special effect.
Another way to divide the general field into classes is to consider the
point at which the processing required to produce the effect is situated in the signal chain. Those effects that are produced by using specialized lenses or cameras are often defined as being "in camera". Effects that are produced by treating the output of cameras are generally termed "post-processing".
The final classification that is relevant to the invention relates to the number of cameras required to produce the effect. There are three main cases:
• Zero cameras required, as in computer generated animation.
• One camera required at a time; this is the normal case.
• Simultaneous requirement for more than one camera.
According to this categorization, the present invention seeks to provide live, in-camera, multi-camera special effects. In this context "live" shall mean that broadcast/professional quality video of an event is available within 10s of it happening.
There are instances in the prior art where multiple cameras have been
used to produce an in-camera special effect.
The most common usage at present is to construct an array of still (i.e. not video or movie) cameras that can be directed at a point of interest. The still cameras are then triggered simultaneously or sequentially. The still image from each member of the array is then edited into a sequence such that the first image in the sequence came from the first camera in the array, the second came from the second camera in the array and so on. When the sequence is replayed, the viewer experiences the illusion of moving through or around a scene that has been either frozen in time (simultaneous triggering) or is moving (sequenced triggering). Perhaps the most widely recognised use of this technique is the "bullet time" sequence in The Matrix, a hit movie in 1999.
In 1980, Tim MacMillan developed an array of cameras that were triggered simultaneously to create a special effect. He developed it over a number of years and, in May 1993, a version was featured on the BBC's popular science programme called "Tomorrow's World". MacMillan's innovation was to use small cameras that were placed side-by-side on a 1" pitch in the array. This enabled normal movie film to be threaded as a continuous strip through the multiple cameras. Once the cameras had been triggered, the movie film was removed and developed. It could then be played through a projector with no need to edit the contributions from the individual cameras together.
In 1994 Dayton Taylor filed a patent application in the US, on which United States patents 5,659,323, 6,154,251 and 6,331,871 have issued. In many respects, these patents disclose an identical system to that invented by MacMillan, using still cameras through which normal movie film stock is threaded.
Kewazinga Corp have attempted to develop a system that solves the same problem as the invention. It uses a small number of cameras (fewer than 10) and motion estimation technology to produce a "virtual" camera that can be "flown" around the subject. Motion estimation technology is not yet sufficiently advanced to reach the quality level required for professional or broadcast use. Motion estimation artefacts effectively prohibit the use of Kewazinga's system for live, professional use. The aim - shared by the present invention - is to obtain a high optical density, that is to say a large number of images per unit length of the trajectory of the virtual moving camera. Kewazinga attempt to do this by motion estimation and a "virtual" camera.
In contrast, the invention aims to provide a sufficient number of real cameras to generate the desired optical density without the need for motion estimation. This leads to the problems of communicating with a potentially very large number of cameras (perhaps 500) in a practicable and cost-effective manner, whilst meeting the target of live operation.
More generally, the main limitations of the prior art that this invention overcomes are as follows.
Many of the prior art techniques are film based and, so, cannot be used live. Those that are based on digital cameras clearly do not suffer from the film processing delay but are still not able to produce - for example - a "frozen time" video sequence within a live environment. The processing necessary to collect a still image from every camera in an array, edit them into the correct sequence and convert that sequence into broadcast quality video cannot be done economically within the 10 second deadline using current technology.
Even if faster, cheaper processing were available, the prior art approaches
still fail to provide a practicable cost-effective solution to live operation at high optical density.
The techniques that use a single piece of movie film threaded through multiple still cameras have practical limits imposed on the "flight-path" that the array of cameras can follow without the movie stock snapping.
The systems that use an array of cameras based on normal still cameras suffer from a practical limitation on how close together the cameras can be.
The "optical density" achievable by multiple normal still cameras is around one picture every 6". The movie stock based embodiments can (must) achieve an optical density of one picture every 1".
According to one aspect of the present invention there is provided image capture apparatus comprising an array of cameras arranged in an arbitrary path; and a plurality of capture modules each connected to receive the image output from a set of consecutive cameras in the array; wherein each capture module serves to correct differences in images arising from different image capture characteristics of the respective cameras and to output a sequence of images as pseudo video.
This output sequence of images may be a very short movie (e.g. eight frames in duration); call this a "moviette", or "pseudo video". The moviette may be encoded according to a standardised digital video format such as CCIR Rec 601; Motion JPEG; MPEG (1, 2 or 4); DV; DVCAM; DVC Pro; DVC Pro 50; Betacam SX; Digital Betacam; Digital S or D-VHS.
Preferably, each capture module captures multiple images from each of the cameras in the associated set, the output sequence of images comprising one image from each camera.
Advantageously, there are provided user selectable modes corresponding to different selections of images from the multiple images captured from each camera, one mode corresponding to the selection of images all corresponding to the same instant of time.
Suitably, the cameras are continuously sampled with said images being held in a store which is over-written on capacity.
According to a further aspect of the present invention, there is provided an image capture system capable of special effects, comprising an array of cameras arranged to view a common scene from respective view points; a plurality of capture modules each connected to receive a continuously sampled image output from each of a set of cameras in the array, with each capture module serving to output a sequence of images, one from each camera, as pseudo video according to a standardised digital video format; and a video processor adapted to combine the respective image sequences into a time sequence representing the view of the scene of a virtual camera moving from one location to the next of said actual cameras.
Advantageously, there are provided user selectable modes corresponding to different selections of images from the multiple images captured from each camera, one mode corresponding to the selection of images all corresponding to the same instant of time.
The invention can be regarded as an automated process that can be embodied using standard electronics components and techniques. All the electronic devices necessary to produce an embodiment of the invention are standard parts that are readily available from their manufacturers.
The invention will now be described by way of example with reference to the accompanying drawings, in which:
Figure 1 is a schematic representation of a process according to one embodiment of the present invention;
Figure 2 is a block diagram of a set of capture heads or cameras and one capture segment or module, for use in the process of Figure 1;
Figure 3 is a schematic representation of a control and distribution system, for use in the process of Figure 1; and
Figure 4 is a schematic illustration of the interconnection of multiple capture modules.
Referring initially to Figure 1, the key stages in the process will be described in turn.
Capture - A miniature digital camera called a "capture head" operates in one of these three capture modes:
1. It produces a continuous video stream at a frame rate determined by the global timing reference.
2. It produces a single still frame when triggered by the global timing reference.
3. It produces a single still frame after a programmable delay that is triggered by the global timing reference.
Store - The images from several capture heads are stored into random access memory that is structured as ring buffers such that the oldest image in a ring buffer will be over-written by the most recently captured image. A separate ring buffer is maintained for each capture head.
The size of the ring buffers (i.e. how many images each can hold) depends on the amount of memory available; the size of a single image; and the number of capture heads that are being managed by the storage process.
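As a sketch of this sizing relationship (a hypothetical helper, not part of the patent's disclosure), the per-head buffer depth follows directly from the three factors just listed:

```python
def ring_buffer_depth(total_ram_bytes, image_bytes, num_heads):
    """Images each per-head ring buffer can hold, given the available
    memory, the size of a single image, and the number of capture
    heads managed by the storage process."""
    return (total_ram_bytes // num_heads) // image_bytes

# e.g. 256 MB of RAM shared by 8 heads, 1 MB per image
depth = ring_buffer_depth(256_000_000, 1_000_000, 8)
```

With these example figures, each head's buffer would hold 32 images.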
Eight capture heads are illustrated in the example embodiment but there is no reason this number could not be higher or lower.
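The over-write-on-capacity behaviour can be modelled in a few lines of Python (an illustrative sketch only; the names `RingBuffer` and `frame_at` are invented for this example):

```python
from collections import deque

class RingBuffer:
    """Fixed-capacity image store; the oldest frame is over-written
    once the buffer reaches capacity, as the Store process describes."""
    def __init__(self, capacity):
        self.frames = deque(maxlen=capacity)  # deque drops the oldest automatically

    def push(self, image):
        self.frames.append(image)

    def frame_at(self, index):
        # index 0 = oldest retained frame, -1 = most recent
        return self.frames[index]

# One ring buffer per capture head, mirroring the eight-head example.
buffers = [RingBuffer(capacity=4) for _ in range(8)]
for t in range(6):  # simulate six capture ticks
    for head, buf in enumerate(buffers):
        buf.push(f"head{head}_t{t}")

# A "frozen time" moviette: the same instant (here, the newest frame) from each head.
moviette = [buf.frame_at(-1) for buf in buffers]
```

With a capacity of 4 and six ticks, only the last four frames per head survive, and the moviette contains one frame per head, all from the same instant.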
This process is a key part of the overall invention because it effectively converts a small number of still images into a short sequence of, say, eight images where each image is produced by a different camera. We can call this short sequence pseudo-video or a "moviette".
The RAM store is configured to provide a ring buffer for each capture head in the set of capture heads served by the one capture module. Each ring buffer then holds a continuously over-written time series of images from one capture head. A sequence of images output by the store as a moviette will comprise one image from each buffer and in one example these images will correspond to a single instant and thus be taken from the same location in each ring buffer. Alternatively, images in the moviette can be taken from different locations in the ring buffers and thus correspond with different times.
Colour Correct - The images in a moviette all come from different capture heads and, as a result, are subject to sample variations: no two capture heads will produce exactly the same results, even when used to photograph an identical scene. This phenomenon is due to sample variances between the various components used in the capture head (lens, CMOS imager, etc).
The variances can be minimised by ensuring that all the lenses used in the array have been cut from the same block of glass, and that the imager chips used are from the same batch, but it will not be practicable to eliminate them entirely. The variances can, however, be calibrated a priori and this information is used to colour correct and stabilize the images that have been stored.
The colour correction process simply adjusts the levels of Cyan, Magenta and Yellow (or Red, Green and Blue, depending on the type of imager used) for each pixel in the captured images to correct for the variances that have been measured beforehand and programmed into this process as parameters.
Convert Colour-space - Readily available digital imagers commonly produce their output in one of two colour-spaces: Red, Green and Blue (RGB); or Cyan, Magenta and Yellow (CMY). Unfortunately, neither of these is ideal for use as TV signals, so the colour-space must be converted to the Luminance and Colour Difference (YUV) form used by TV equipment.
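A minimal sketch of the per-pixel RGB-to-YUV step, using the CCIR Rec 601 luminance weights (the exact coefficients, scaling and integer ranges used in any given implementation may differ):

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel (components in the 0..1 range) to Y, U, V.
    Luminance weights follow CCIR Rec 601; U and V are scaled colour
    differences from the blue and red channels respectively."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    u = 0.492 * (b - y)                     # blue colour difference
    v = 0.877 * (r - y)                     # red colour difference
    return y, u, v
```

For pure white the colour differences vanish and only luminance remains, which is a quick sanity check on the weights.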
A related process that may be required and can be combined with this step is called "de-mosaicing". Digital imagers commonly produce mosaiced images. Consider a small image that is 2 pixels high by 2 pixels across. Label the pixels from left to right, top to bottom thus:
| 1A | 1B |
| 2A | 2B |
In a mosaiced image, pixels 1A and 2B will typically only carry information on the yellow channel, pixel 1B will only carry information about the magenta channel and pixel 2A will be restricted to information on the cyan channel. In other words, each pixel only carries one colour channel.
De-mosaicing is the process whereby each pixel is given three colour channels by using colour information from the surrounding pixels. The inputs to this process are mosaiced, non-colour-calibrated images; the outputs are de-mosaiced and colour-calibrated.
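A toy illustration of de-mosaicing on the 2x2 CMY tile described above, using simple borrowing and averaging of neighbouring samples (a real de-mosaic interpolates over a larger neighbourhood; the function name and tile encoding here are invented for the sketch):

```python
def demosaic_2x2(tile):
    """De-mosaic the 2x2 CMY tile from the text: pixels 1A and 2B carry
    yellow samples, 1B carries magenta and 2A carries cyan. Each output
    pixel is given a full (C, M, Y) triple by borrowing the missing
    channels from its neighbours; the two yellow samples are averaged."""
    c = tile["2A"]
    m = tile["1B"]
    y = (tile["1A"] + tile["2B"]) / 2
    # With only one sample per channel in this tiny tile, every pixel
    # receives the same reconstructed triple.
    return {p: (c, m, y) for p in ("1A", "1B", "2A", "2B")}
```
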
Stabilize - The capture heads will be set up to point at the subject being photographed. However, it is possible for them to drift with time or for the mechanical adjustments to be less precise than required. Therefore, a stabilization process is required.
Imagine an array of horizontally mounted cameras all looking at a horizontal bar. In the "frozen time" movie created from those cameras, the bar would appear to jitter up and down due to small errors in the alignment of individual cameras. These errors are calibrated a priori and corrected for by "moving" the digital image in a manner that cancels out the alignment error.
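The correction amounts to translating each image by its calibrated offset. A minimal sketch over a list-of-rows image (here `dx`/`dy` stand in for the a priori calibrated alignment error, and vacated pixels are filled with a constant):

```python
def stabilize(image, dx, dy, fill=0):
    """Shift a 2-D image (list of rows) by (dx, dy) pixels to cancel a
    known alignment error; pixels shifted in from outside get `fill`."""
    h, w = len(image), len(image[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx  # source pixel for this destination
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = image[sy][sx]
    return out
```

A production implementation would use sub-pixel interpolation rather than whole-pixel shifts, but the principle is the same.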
Motion Blur - This is an optional process (not shown in Figure 1) which permits more realistic footage to be produced by the apparatus. The special effect provided by the apparatus simulates a camera that is moving at high speed and frame rate. The images produced by a moving camera would tend to suffer an artefact known as motion blur, whereby the subjects of the photograph are directionally blurred in the opposite sense to the camera movement. In the apparatus illustrated by Figure 4, the change of axis (i.e. the direction a camera points) and position between adjacent cameras in the array is known a priori and is used by this process to determine the direction and degree of a directional blur that is applied to each image. Note that the directional blur parameters can vary between adjacent cameras because the change of axis and position between adjacent cameras is not necessarily constant for the entire array. Applying a directional blur to a digital image is part of the 2D image processing canon and readily implemented by one skilled in the art.
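As an illustration of the principle, a 1-D box blur along the apparent motion direction is the simplest directional blur. This sketch blurs horizontally along one row; a real implementation would blur along an arbitrary 2-D direction derived from the known inter-camera geometry, with the blur length chosen per camera pair:

```python
def directional_blur_row(row, length):
    """Apply a trailing 1-D box blur of `length` taps along a row of
    pixel values - a minimal stand-in for the per-camera directional
    blur described in the text. Near the row start, fewer taps are
    available, so the window shrinks rather than wrapping."""
    out = []
    for i in range(len(row)):
        taps = row[max(0, i - length + 1): i + 1]  # window trailing the motion
        out.append(sum(taps) / len(taps))
    return out
```
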
Compress - At this stage, we have a colour corrected, de-mosaiced, stabilized, optionally motion blurred, moviette. In the example embodiment, each moviette is 8 frames long. For standard definition TV in PAL format, approximately 10MB is needed to accurately represent it. This is too much data to be able to handle quickly enough to make a live system so it is necessary to compress it.
Standard compression formats have existed for almost a decade now and chipsets to support them are into their third generation. One of these is used to compress the 10MB moviette into an 8Mb sequence (i.e. ten megabytes become eight megabits), a compression factor of around ten to one.
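The quoted figures check out under plausible assumptions. Taking a 720x576 PAL frame at 3 bytes per pixel (uncompressed RGB) and decimal mega-units - both assumptions made for this arithmetic, not stated in the text:

```python
# Sizing check for the compression step.
FRAMES = 8
bytes_per_frame = 720 * 576 * 3                  # 720x576 PAL, 3 bytes/pixel
raw_mb = FRAMES * bytes_per_frame / 1_000_000    # total uncompressed size in MB
raw_megabits = raw_mb * 8                        # same quantity in megabits
compressed_megabits = 8                          # the target 8 Mb moviette
ratio = raw_megabits / compressed_megabits       # achieved compression factor
```

This gives roughly 10 MB raw and a ratio close to 10:1, matching the text.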
The output of this process is an eight frame moviette coded as either DVCPro or MPEG2 I-frame only. The key reason for choosing one of these standards is that they permit multiple moviettes to be edited together easily.
They share the feature that the compression is intra-frame, only.
Collect - For a reasonably sized system (say 80 capture heads), the invention proposes several "capture segments" be chained together using a fast serial interconnect such as FireWire or USB2. Once triggered, each capture segment will produce a compressed moviette in parallel with all the other capture segments.
A normal computer workstation is also connected to the fast serial interconnect and it is here that the collection process resides. It simply collects the moviettes from each capture segment as they become ready and stores them in the computer's memory.
Edit - This process is also resident on the computer workstation. It simply edits together the moviettes that have been collected so that they play out in the correct order. The full-length "frozen time" movie can then be saved to disk or played out.
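The Edit step is essentially an ordered concatenation of the per-segment moviettes. A sketch (the dictionary shape and function name are assumptions; each moviette is represented as a list of frames):

```python
def edit_movie(moviettes_by_segment):
    """Concatenate per-segment moviettes in array order - segment 1
    first, segment 2 second, and so on - to form the full-length
    "frozen time" movie, as the Edit step describes."""
    movie = []
    for segment_id in sorted(moviettes_by_segment):
        movie.extend(moviettes_by_segment[segment_id])
    return movie
```

Sorting by segment identifier guarantees the correct play-out order even if segments deliver their moviettes out of order over the serial bus.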
Play - Cards that can convert compressed video in MPEG2 or DVCPro format to CCIR Rec 601 standard serial digital video (SDI) are readily available. One such card is used to implement this final step in the overall process.
Alternatively, the "frozen time" movie could be "played" as a transport stream. It is straightforward to package such a transport stream into a standard container format such as SMPTE 360M and send it to another computer or a video server over a suitable network.
The described example of this invention has a number of significant advantages. The process is fast; for example, the embodiment described can have an eighty frame movie ready to view within one second of being triggered. The modularity of the design approach is important. The embodiment described can be used to construct arrays with as few as eight capture heads or as many as 496 capture heads. The limit is imposed by the fast serial interconnect used. For example, FireWire can support a maximum of 63 devices. Continuous sampling is offered. In this example, the capture heads can sample at 20F/s at full resolution and at 30F/s at VGA resolution.
This enables a number of useful functions such as:
"Pre-roll". The last few images from each camera element could be stored to allow the operator to activate the shutter after the event they wanted to capture has taken place. They then review each movie and pick the one that best captures the event.
"Head & Tail". The first and last camera elements in the array could be set to operate as full frame rate video cameras. When triggered, the camera would splice the lead-in to the "frozen time" capture to the lead-out. This would provide a smooth transition between real-time and frozen time and back again and thus put more flexibility and better continuity at the disposal of the operator.
The sampling is flexible; the sensors can be triggered all at once or be staggered according to a user-programmable pattern. The system need not necessarily freeze time. It can also be used to slow down time with dramatic effect by triggering the camera heads individually in succession. Capture mode 3 is used to achieve this.
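The staggered "slow down time" pattern of capture mode 3 can be sketched as a per-head delay schedule relative to the global timing reference (the linear schedule and the `step_ms` parameter are illustrative assumptions; any user-programmable pattern is possible):

```python
def trigger_delays(num_heads, step_ms):
    """Programmable per-head delays for the staggered 'slow down time'
    mode (capture mode 3): head k fires k * step_ms milliseconds after
    the global timing reference. step_ms = 0 reduces to simultaneous
    triggering, i.e. the 'frozen time' case."""
    return [k * step_ms for k in range(num_heads)]
```
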
Similarly, the output options are flexible. It is possible to use a capture almost immediately and convert it to a desired format that can be used live on-air. This could be uncompressed SDI video, or a compressed format such as MPEG2 video. The camera's interface into production could be via video cables or a high-speed computer link into a video server.
Other advantageous features include programmable gain, so that lighting conditions are not required to be perfect for acceptable image quality, and the avoidance of mechanical shutters and other moving parts.
It should be understood that not every embodiment of this invention will necessarily offer all the above advantages.
Turning now to Figure 2, this shows the main electronic components of the capture segment or module and its connection to a number of capture heads (miniature, digital cameras).
The capture heads comprise a high quality miniature lens that focuses light onto a CMOS or CCD imaging chip such as Kodak's KAC300, which is connected to a port on the capture segment. The port supplies connections for data flow, standard IIC control and triggering as shown in the figure.
The capture segment implements the majority of the sub-processes already described: Store, Colour Correct, Convert Colour-space, Stabilize, Motion Blur, Compress, and Collect. Which part of the capture segment performs which process is illustrated in the following table:

Process              | Implemented by
Store                | Custom logic (FPGA) and large RAM
Colour Correct       | DSP
Convert Colour-space | DSP
Stabilize            | DSP
Motion Blur          | DSP
Compress             | Standard professional video compression chip such as LSI Logic/C-Cube's DVXpress
Collect              | RAM under control of the host CPU, in interaction with the Control and Distribution system over the high speed serial network
Although complex, the implementation of each of these sub-processes is well within the capabilities of one skilled in electronics and software design.
Figure 3 shows the camera's Control and Distribution system. It takes the form of a normal high-end computer workstation that has been augmented by a video card such as the TARGA 3000, with which compressed video in MPEG2 or DVCPro formats can be converted to SMPTE standard uncompressed digital video. It is here that the remaining Edit and Play sub-processes are implemented. Software operating on the computer collects the moviettes from all the capture segments in the array. These are edited together so that the moviette from segment 1 appears first, the moviette from segment 2 appears second, and so on to form a complete movie.
The completed movie is then played out via the video card. It can also be transferred to another computer or video server in its compressed form by using some standard container format such as SMPTE 360M.
The implementation of these processes is well within the capabilities of one skilled in applications software and software driver development.
Figure 4 illustrates the modularity and extensibility of the invention by showing how an eighty camera system would be constructed from ten capture segments each with eight capture heads. The capture segments are connected to each other and to the control system by a high speed serial bus standard such as IEEE 1394.
It should be understood that this invention has been described by way of example and that a wide variety of modifications are possible without departing from the scope of the invention.
For example, the invention as described is restricted to producing video resolution images. With modifications that do not affect the invention in any material aspect, the system could be enhanced to operate at film resolution incorporating video assist. In this embodiment, higher resolution sensors are used in the capture heads to capture images at film resolution.
These are stored in the usual way. An extra process produces video resolution proxies of the film resolution images. These proxies are then fed into the rest of the process as already described so that a video resolution proxy of the event is available within 10s of the "take". If the take was successful, the original film resolution images can be downloaded to the workstation from the capture segments and stored on disk.
It is a straightforward matter to bypass some of the sub-processes described above. For instance, it may be desirable to have the raw images from the capture heads, or to forgo the compression process. The possibilities are illustrated in Figure 1. However, the system may no longer be "live" due to the fact that larger amounts of data must be moved over the serial bus and the captured images will not be in a readily used video format.

Claims (12)

  1. Image capture apparatus comprising an array of cameras arranged in an arbitrary path; and a plurality of capture modules each connected to receive the image output from a set of consecutive cameras in the array; wherein each capture module serves to correct differences in images arising from different image capture characteristics of the respective cameras and to output a sequence of images as pseudo video.
  2. Apparatus according to Claim 1, wherein the sequence of images is compressed.
  3. Apparatus according to Claim 1 or Claim 2, wherein the sequence of images is output according to a standardised digital video format.
  4. Apparatus according to Claim 3, wherein the standardised compressed digital video format is selected from the group consisting of Motion JPEG; MPEG (1, 2 or 4); DV; DVCAM; DVC Pro; DVC Pro 50; Betacam SX; Digital Betacam; Digital S and D-VHS.
  5. Apparatus according to any one of the preceding claims, wherein each capture module captures multiple images from each of the cameras in the associated set, the output sequence of images comprising one image from each camera.
  6. Apparatus according to Claim 5, providing user selectable modes corresponding to different selections of images from the multiple images captured from each camera, one mode corresponding to the selection of images all corresponding to the same instant of time.
  7. Apparatus according to Claim 5 or Claim 6, wherein the cameras are continuously sampled with said images being held in a store which is overwritten on capacity.
  8. Apparatus according to any one of the preceding claims, wherein the differences in images corrected in the capture module include colour differences.
  9. Apparatus according to any one of the preceding claims, wherein the differences in images corrected in the capture module include spatial offsets.
  10. Apparatus according to any one of the preceding claims, wherein the capture module is adapted as an option to apply a degree of directional blur to each image, the directional blur parameters varying between adjacent cameras as appropriate to the difference in orientation between adjacent cameras.
  11. An image capture system capable of special effects, comprising an array of cameras arranged to view a common scene from respective view points; a plurality of capture modules each connected to receive a continuously sampled image output from each of a set of cameras in the array, with each capture module serving to output a sequence of images, one from each camera, as pseudo video according to a standardised digital video format; and a video processor adapted to combine the respective image sequences into a time sequence representing the view of the scene of a virtual camera moving from one location to the next of said actual cameras.
  12. A system according to Claim 11, providing user selectable modes corresponding to different selections of images from the multiple images captured from each camera, one mode corresponding to the selection of images all corresponding to the same instant of time.
GB0219710A 2002-08-23 2002-08-23 Image capture system enabling special effects Expired - Fee Related GB2392334B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0520849A GB2416457B (en) 2002-08-23 2002-08-23 Image capture system enabling special effects
GB0219710A GB2392334B (en) 2002-08-23 2002-08-23 Image capture system enabling special effects

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0219710A GB2392334B (en) 2002-08-23 2002-08-23 Image capture system enabling special effects

Publications (3)

Publication Number Publication Date
GB0219710D0 (en) 2002-10-02
GB2392334A true GB2392334A (en) 2004-02-25
GB2392334B GB2392334B (en) 2006-07-05

Family

ID=9942894

Family Applications (2)

Application Number Title Priority Date Filing Date
GB0520849A Expired - Fee Related GB2416457B (en) 2002-08-23 2002-08-23 Image capture system enabling special effects
GB0219710A Expired - Fee Related GB2392334B (en) 2002-08-23 2002-08-23 Image capture system enabling special effects

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GB0520849A Expired - Fee Related GB2416457B (en) 2002-08-23 2002-08-23 Image capture system enabling special effects

Country Status (1)

Country Link
GB (2) GB2416457B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1615427A1 (en) * 2004-07-07 2006-01-11 Leo Vision Process for obtaining a succession of images with a rotating effect
CN102572230A (en) * 2010-12-30 2012-07-11 深圳华强数码电影有限公司 Space frozen shooting method and system
CN107968900A (en) * 2017-12-08 2018-04-27 上海东方传媒技术有限公司 A 360-degree surround shooting and production system

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN104333349B (en) * 2014-08-21 2017-04-19 中国空气动力研究与发展中心超高速空气动力研究所 Device and method for ultra-high speed sequential control
CN110270078B (en) * 2019-06-06 2020-12-01 泾县协智智能科技有限公司 Football game special effect display system and method and computer device

Citations (4)

Publication number Priority date Publication date Assignee Title
US6266479B1 (en) * 1997-06-09 2001-07-24 Matsushita Electric Industrial Co., Ltd. Video signal recording and reproducing apparatus
US20010028399A1 (en) * 1994-05-31 2001-10-11 Conley Gregory J. Array-camera motion picture device, and methods to produce new visual and aural effects
US20020063775A1 (en) * 1994-12-21 2002-05-30 Taylor Dayton V. System for producing time-independent virtual camera movement in motion pictures and other media
WO2002065761A2 (en) * 2001-02-12 2002-08-22 Carnegie Mellon University System and method for stabilizing rotational images

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP3097045B2 (en) * 1995-10-24 2000-10-10 池上通信機株式会社 Broadcast system
US20020190991A1 (en) * 2001-05-16 2002-12-19 Daniel Efran 3-D instant replay system and method

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US20010028399A1 (en) * 1994-05-31 2001-10-11 Conley Gregory J. Array-camera motion picture device, and methods to produce new visual and aural effects
US20020063775A1 (en) * 1994-12-21 2002-05-30 Taylor Dayton V. System for producing time-independent virtual camera movement in motion pictures and other media
US6266479B1 (en) * 1997-06-09 2001-07-24 Matsushita Electric Industrial Co., Ltd. Video signal recording and reproducing apparatus
WO2002065761A2 (en) * 2001-02-12 2002-08-22 Carnegie Mellon University System and method for stabilizing rotational images

Cited By (5)

Publication number Priority date Publication date Assignee Title
EP1615427A1 (en) * 2004-07-07 2006-01-11 Leo Vision Process for obtaining a succession of images with a rotating effect
FR2872943A1 (en) * 2004-07-07 2006-01-13 Leo Vision Soc Par Actions Sim Method for obtaining a succession of images in the form of a rotating effect
CN102572230A (en) * 2010-12-30 2012-07-11 深圳华强数码电影有限公司 Space frozen shooting method and system
CN102572230B (en) * 2010-12-30 2014-12-24 深圳华强数码电影有限公司 Space frozen shooting method and system
CN107968900A (en) * 2017-12-08 2018-04-27 上海东方传媒技术有限公司 A 360-degree surround shooting and production system

Also Published As

Publication number Publication date
GB2416457B (en) 2006-07-05
GB0520849D0 (en) 2005-11-23
GB2392334B (en) 2006-07-05
GB2416457A (en) 2006-01-25
GB0219710D0 (en) 2002-10-02

Similar Documents

Publication Publication Date Title
US12108018B2 (en) Digital camera system for recording, editing and visualizing images
JP6032919B2 (en) Image processing device
US7675550B1 (en) Camera with high-quality still capture during continuous video capture
US8564685B2 (en) Video signal capturing apparatus, signal processing and control apparatus, and video signal capturing, video signal processing, and transferring system and method
US7630001B2 (en) Imaging apparatus and imaging method having a monitor image frame rate independent of a main line image frame rate
JP2009010821A (en) Imaging device and imaging method, recording medium, and program
WO2002096096A1 (en) 3d instant replay system and method
JP4178634B2 (en) Video signal transmission apparatus, video signal transmission method, video signal imaging apparatus, and video signal processing apparatus
CN102090063A (en) Image processing apparatus and image pickup apparatus using the image processing apparatus
JP6758946B2 (en) Imaging device and playback device
GB2392334A (en) Pseudo video generated from a series of individual cameras
WO1997009818A1 (en) High-speed high-resolution multi-frame real-time digital camera
US20190082127A1 (en) Imaging device and reproducing device
JP6855251B2 (en) Imaging device and playback device
JP2004072148A (en) Av data recording apparatus and method
JP5973766B2 (en) Image processing device
JP2011040801A (en) Electronic camera
US9479710B2 (en) Cinematic image blur in digital cameras based on exposure timing manipulation
Bergeron et al. Television Camera Systems
US20220132096A1 (en) Image-capturing apparatus, information processing method, and program
JPH0350984A (en) Picture recorder
Macmillan et al. Stereo Image Acquisition using Camera Arrays
KR20040000919A (en) Still image compression system of digital camcoder and the method of still image compresssion
JPH09200590A (en) Image pickup device

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20200823