GB2055005A - Multi Planar Image Editor - Google Patents

Multi Planar Image Editor

Info

Publication number
GB2055005A
Authority
GB
United Kingdom
Prior art keywords
focus
image
images
sub
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB8019138A
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rose G M
Original Assignee
Rose G M
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rose G M filed Critical Rose G M
Priority to GB8019138A priority Critical patent/GB2055005A/en
Publication of GB2055005A publication Critical patent/GB2055005A/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N11/00Colour television systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Circuits (AREA)

Abstract

Television/photographic camera arrangement includes an optical arrangement which causes the image light from the camera's main lens system to be focused on two or more planes at different distances from the main lens system, e.g. for three such planes the resulting sub-images are arranged to include in-focus objects located in the far, middle and near distance ranges respectively from the main lens system. These sub-images are then analysed and the in-focus portions thereof selected and used to form a composite image which is substantially in-focus at all points. The sub-images are analysed by television cameras e.g. three colour television cameras, and the resulting video signals (specifically the luminance signals) are analysed e.g. by digital filtering, to determine those regions of each sub-image which are more in-focus than the corresponding regions of the other sub-images. From this in-focus determination a switching control signal is formed for selecting the in-focus video signals for transmission or recording. The in-focus selection may be effected by first recording the sub-image video signals on magnetic tapes and erasing the out-of-focus portions. Cine or still photographic cameras may simultaneously record the sub-images and during the processing of the films those portions not in-focus are suppressed from each frame according to the compared television signals, the films then being combined to form a master film. Also described is an arrangement in which a single image is photographed or televised, and those regions which are determined to be out of focus e.g. by digital filtering, are mathematically enhanced under operator control to improve their sharpness so that they appear to be in focus.

Description

SPECIFICATION
A Multi Planar Image Editor
1. Traditional cameras gather light from a subject by means of a single lens or compound lens system, forming an image at a particular plane, where it may be observed directly by use of a screen, or recorded on a film or by an electronic system in place of that screen.
2. Such a lens or lens system will produce an image of perfect clarity of definition only of a plane object placed at one certain distance before it. Where two or more objects are within the field of view of the lens, only one, if they are at different distances, can be in perfect focus at any one time with any one lens position. (Fig. 1, Sheet 01A.)
3. Those objects that are not in the plane of precise geometrical focus appear blurred to the viewer. The extent of the blur, and its impact on the human eye, varies with the lens system and the observation powers of the viewer. It is possible for the human eye to accept some degree of blur without irritation, and this permits objects with some depth to be captured in complete focus by the camera although the front and back of the object are in different planes.
4. The degree to which a camera can capture a number of distance ranges within the acceptable limits of blur is referred to traditionally as 'depth-of-field' or 'depth-of-focus'. It is related to the characteristics of the lens, the intensity of light falling upon the scene, the speed of exposure and the sensitivity of the recording medium.
5. Objects in planes beyond or in front of the single selected plane and its associated depth of focus are not capable of acceptable definition despite improvements in lens or lighting conditions, etc. Ultimately the photographer is compelled to compose his scene omitting badly blurred sectors, or to accept them and use them to the best artistic advantage possible.
6. The purpose of the inventions herein is to enable all objects within the field of view of the lens of a camera to be brought into sharp focus irrespective of their distance from the camera.
7. The inventions described enable a camera to be directed at a view containing all manner of objects at many varying distances, as they occur in practice in a haphazard fashion, and not as a set piece arranged by the photographer.
8. A camera for the production of television pictures is shown on Sheet 01B. The conventional lens system L is directed at the scene to be recorded, and the light so gathered is split into two or more parts by the Prism P1. Shown here is a set-up for splitting the light into three parts.
9. The split light is caused to fall upon three separate monochrome or colour television cameras, placed at three different distances from Prism P1. Shown here, each part falls upon a three-tube television camera with its own prism system, P+, P0 and P-, corresponding to S+, S0 and S-. (Fig. 2, 01A.)
10. Prism P0, in conjunction with a system of colour filters and mirrors FR, FG and FB, produces in-focus images of objects in the middle distance from the camera, in red, green and blue on television tubes T3, T1 and T2 respectively, i.e. S0.
11. Similarly Prism P+ produces in-focus images of objects in the far distance from the camera, in red, green and blue on television tubes T7, T9 and T8 respectively, i.e. S+.
12. Prism P- produces in-focus images of objects in the near distance from the camera, in red, green and blue on television tubes T6, T4 and T5 respectively, i.e. S-.
13. All three sets of red, green and blue signals are then fed via delay circuits to a switching network that switches inputs from the channel in best focus to the final single output channel for conventional transmission or recording.
14. During the delay period the following selection process takes place (Sheet 02). Associated with each of the three input channels is a luminance input Y+, Y0 and Y-. This may be provided as a separate input from the RGB signals or derived from them. The Y signals are then digitised and passed through a filter and selector.
15. Sheet 03 shows the video luminance waveforms obtained from two cameras scanning an image of black and white bars set at different focus. The signal risetime for a focussed camera is faster than for an unfocussed camera. The filter/selector compares the risetime (spatial frequency) of the three Y channels and switches the one with the most extended frequency range onto the final output circuit. The comparator may be implemented by either analogue or digital circuitry.
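By way of illustration only, the comparison described in paragraph 15 can be modelled in software. The sketch below is a modern reconstruction, not part of the specification: the channel names, the box-blur used to simulate defocus and the per-sample gradient test are assumptions standing in for the analogue or digital comparator.

```python
import numpy as np

def select_sharpest_per_sample(y_plus, y_zero, y_minus):
    """For each sample along a scan line, pick the luminance channel whose
    signal changes fastest (steepest risetime, i.e. best focus)."""
    channels = np.stack([y_plus, y_zero, y_minus])      # shape (3, N)
    gradients = np.abs(np.gradient(channels, axis=1))   # local rate of change
    best = np.argmax(gradients, axis=0)                 # 0 = Y+, 1 = Y0, 2 = Y-
    composite = np.take_along_axis(channels, best[None, :], axis=0)[0]
    return best, composite

# Example: the same black-and-white bars scanned at three focus settings,
# simulated here by blurring an ideal waveform by three different amounts.
bars = np.repeat([0.0, 1.0, 0.0, 1.0, 0.0], 16)
blur = lambda signal, width: np.convolve(signal, np.ones(width) / width, mode="same")
best, composite = select_sharpest_per_sample(blur(bars, 3), blur(bars, 9), blur(bars, 15))
```

In this reading the switching control signal of paragraph 13 is simply the array `best`, delayed long enough for the comparison to complete.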
16. The digital filter may operate on any chosen number of 'points' and on a single line or two or more lines, as follows. A frame of a television picture consists of horizontal lines of light, separated by lines of darkness. The lines of light vary in shade across the frame, but the dark separating lines are of constant intensity. There are various numbers of lines in common use, 525 and 625 lines per frame for example, and this invention applies to all.
17. Each line of light in its turn may be considered as being further divided across the width of the frame into, say, a further 625 points. Thus, in conjunction with a basic 625-line system, 390,625 points per frame will be established. Completed once every 1/50th of a second, this gives a point sampling rate of 19,531,250 per second, i.e. approximately 20 MHz.
18. Each point may have its position on the screen located by a particular number between, say, 000001 at the top left-hand corner and 390625 at the bottom right-hand corner. The characteristics of each point may then be described on a scale varying from 1 to 9, or on a five-digit scale varying from 00001 to 99999, or on an eight-digit scale varying from 00000001 to 99999999, depending upon the sensitivity of the system.
19. Part of one frame of information is shown on Sheet 04. A possible set of one-digit characteristics for each of the points shown, for each of the three channels S+, S0 and S-, is shown (Sheet 04A).
20. A single-line analysis of point 003004, for example (04B), would compare the rate of change between points 003003 and 003005 in each channel, and since in this case channel S+ has the greatest rate of change, channel S+ would be in best focus and therefore channel S+ would be transmitted.
21. An improved selection process would be to compare the rate of change over points 003001 and 003007, showing again that point 003004 taken from channel S+ should be transmitted.
22. An even more selective method of analysis is to compare the nine points
002003 002004 002005
003003 003004 003005
004003 004004 004005
for each channel S+, S0 and S- on the following basis. If the above points are represented on a one-digit basis then they could be set down thus for each channel separately:
a b c
d p e
f g h
and the spatial frequency would be given by the expression 8p - (a+b+c+d+e+f+g+h). The channel with the highest resulting number would be the one to be transmitted.
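A minimal sketch of this nine-point measure follows, for illustration only; the array layout (line by point), the channel ordering and the test data are assumptions rather than part of the specification.

```python
import numpy as np

def spatial_frequency(image, line, point):
    """Nine-point focus measure of paragraph 22: 8p - (a+b+c+d+e+f+g+h),
    where p is the centre value and a..h are its eight neighbours."""
    block = image[line - 1:line + 2, point - 1:point + 2]
    centre = image[line, point]
    return 8 * centre - (block.sum() - centre)

def best_channel(sub_images, line, point):
    """Index (0 = S+, 1 = S0, 2 = S-) of the sub-image with the highest
    spatial frequency at the given point, i.e. the channel to transmit."""
    return int(np.argmax([spatial_frequency(img, line, point) for img in sub_images]))

# Example on three small hypothetical sub-images of one-digit values.
rng = np.random.default_rng(0)
s_plus, s_zero, s_minus = (rng.integers(0, 10, size=(8, 8)) for _ in range(3))
print(best_channel([s_plus, s_zero, s_minus], line=3, point=4))
```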
23. Where a motion picture film record is required the final composite television signal may be converted into a film record by conventional tape-to-film methods, or the following may be used.
24. Sheet 05 shows the light gathered from lens system L split into two parts. One part falls upon a set of monochrome television tubes, or colour tubes, producing signals T+, T0 and T- that are processed to select in-focus areas as above, and the other part falls onto cine film cameras C+, C0 and C-.
25. During the normal processing of the three films C+, C0 and C-, those parts of each film that are not in focus are obliterated from each frame of each film according to the signals from the compared television channels. The three films, with the blurred sections washed out of each, are then combined to form a single master film, in much the same way as traditional animation techniques.
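Numerically, the washing-out and recombination of paragraphs 24 and 25 amounts to masking each sub-image by the points it wins and summing the results. The sketch below is an illustration under the assumption that the three sub-images and their per-point focus scores are available as arrays; it is not the photographic process itself.

```python
import numpy as np

def combine_sub_images(sub_images, focus_scores):
    """Blank ('wash out') every point of a sub-image that is not the best
    focused of the three, then sum the partial images into a master frame."""
    stack = np.stack(sub_images, axis=0)        # shape (3, H, W)
    scores = np.stack(focus_scores, axis=0)     # shape (3, H, W)
    winner = np.argmax(scores, axis=0)          # best-focused channel per point
    masks = winner[None, :, :] == np.arange(3)[:, None, None]
    return (stack * masks).sum(axis=0)          # the single master frame
```

The same picture applies to the video tape procedure of paragraphs 31 to 38: tapes F+, F0 and F- are the three masked partial images, and playing them simultaneously performs the final sum.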
26. Sheet 06 shows a camera head for use in the television camera system. Three identical colour television cameras, with identical lenses, are locked together in a light proof casing with an open front. The focal planes of each camera are capable of being adjusted so that each camera may be brought to focus upon a different distance from the front of the composite camera casing.
27. Light from the scene viewed enters the camera through a transparent cover and strikes two-way mirror B1. Light reflected by B1 passes to mirror C and thus to camera C. Light passing through B1 moves to two-way mirror B2. Light reflected by B2 moves to camera A by way of mirror A, whilst light passing through B2 moves to camera B.
28. Prisms and two-way prisms may be substituted for mirrors and two-way mirrors in all cases.
29. For colour film operation the setting out of the three colour film cameras is as in Sheet 06, but an additional video camera T is placed in such a position (Sheet 07) that an additional two-way mirror or prism will reflect part of the original image into camera T. Camera T contains three tubes with focal planes set to correspond to film cameras A, B and C.
30. Cameras A, B and C may be in fixed positions relative to the outer casing and each other, or may be moved during the course of a motion picture by the manual or automatic control of the photographer.
31. Video tape recorder techniques may be used to replace the delay circuits described in 13, as follows. The three groups of RGB signals are combined to produce three conventional colour television signals such that R+ + G+ + B+ = C+, R0 + G0 + B0 = C0 and R- + G- + B- = C-.
32. Signals C+, C0 and C- are then recorded on three separate video tape systems, synchronised as to time. Duplicate tapes CA+, CA0 and CA- are produced from C+, C0 and C- respectively. Tapes CA+ and CA0 are then played back together through a switching and selecting system as described in 14-22, such that focussed sections of CA+ erase blurred sections of CA0 and vice versa. Similarly CA+ and CA- are played together and processed, and finally CA0 and CA- are played together and processed.
33. After the procedure described in 32, three tapes will remain, F+, F0 and F-, in which only the focussed sections of tapes CA+, CA0 and CA- will remain, the remainder of each frame being blank.
34. For the final transmission and viewing, tapes F+, F0 and F- shall be played simultaneously.
35. For cine film production each of the tapes F+, F0 and F- is used to produce black and white cine films H+, H0 and H-.
36. The focussed sections of film H+ are wholly transparent whilst the blurred sections of film H+ are wholly black. Similarly with films H0 and H-, i.e. the three H films are silhouettes.
37. Applying films H+, H0 and H- to films C+, C0 and C- in para. 24 above will produce three further films that, when combined, will have all sections in focus.
38. Alternatively tapes F+, F0 and F- may be transformed into film by conventional processes, and the three partial films so produced be combined to form a single film all parts of which are in focus.
39. Sheet 08 shows a two-plane camera for taking single photographs. A third plane may be incorporated above plane F1 to produce a triple-plane camera, and any number of additional planes may be introduced to form a multiplane camera.
40. Lens, diaphragm D and film support plane F1 are built into a rigid camera body as in a conventional camera except that film support plane F1 is contrived to be at right-angles to the lens, by means of a mirror or prism M.
41. Behind mirror M, and hidden from the lens in the initial position of the camera operation, is a second film support plane F2. A film F2 is mounted in a separate section of the camera body, such that film F2 is free to move relative to the lens position.
42. The operation of the camera is as follows:
a. The camera is aimed at a scene containing background and foreground objects of equal interest. Film F1 is supported at film plane F1 such that it will record a sharp image of all those objects contained in a 'depth of focus' between infinity and a depth-of-focus distance closer to the camera.
b. The position of film support plane F2 is adjusted by the photographer so that film F2 will record a sharp image of those objects that are close to the camera, but too close to be sharply defined on film F1.
c. Aperture of diaphragm and shutter speed will be set as with a conventional camera.
d. The camera will be operated, taking a conventional picture on film F1. Immediately following this, however, and as part of the same operation, the following will occur.
e. Mirror M, hinged at H, swings down to cover film F1.
f. The shutter acts for a second time, taking a separate picture on film F2.
43. Both films are then processed conventionally, and the out-of-focus sections of each erased by electronic selection as described earlier. The focussed sections of each film remaining are then combined into one final, completely in-focus picture for printing or projection in conventional ways.
44. The shutter mechanism may be replaced or supplemented by a shutter at plane F1 and plane F2. The diaphragm may be replaced or supplemented by a diaphragm at plane F1 or plane F2. Film F2 may be mounted in an adjustable sliding lightproof system using blinds or bellows. Film F1 may be adjustable in position in the same manner as film F2.
45. Either or both films may be mounted so as to move in all other planes relative to the lens, or each other, in addition to the backward/forward movement described above.
46. Rangefinding for either F1 or F2 or both may be by direct measurement, parallax methods or reflex or electronic methods as with conventional cameras.
47. The digital filter mechanism as described in 16-22 above may be used to modify one or more of the three multiplane images or to enhance a picture taken with a conventional single-plane, single-lens camera; the following describes the process for enhancing a single photograph or a single frame of a motion picture or television signal.
48. The image is scanned and divided into, say, 390,625 points as before. Groups of points, in line or in sets of lines, are then considered, for example:
002007 002008 002009 002010
003007 003008 003009 003010
004007 004008 004009 004010
The numerical values of each point in any given scene could be represented by
a b c k
d e f l
g h j m
and the spatial frequency of point 003008 would be e' = 8e - (a+b+c+d+f+g+h+j), and the spatial frequency of point 003009 would be f' = 8f - (b+c+k+e+l+h+j+m).
49. Sheet 09 shows an image in which object A is in focus (Af), object B is out of focus (Bf+) and object C is out of focus in a different way from B (Cf-).
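Applied over a whole frame, the expressions of paragraph 48 yield a per-point spatial-frequency map. The following sketch is an illustrative reconstruction; the vectorised slicing and the restriction to interior points are choices made for the sketch, not taken from the specification.

```python
import numpy as np

def spatial_frequency_map(image):
    """For every interior point: eight times the point value minus the sum
    of its eight neighbours, as in the expressions for e' and f' above."""
    img = np.asarray(image, dtype=float)
    rows, cols = img.shape
    centre = img[1:-1, 1:-1]
    neighbours = sum(
        img[1 + dy:rows - 1 + dy, 1 + dx:cols - 1 + dx]
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return 8 * centre - neighbours
```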
50. An editing machine consisting of a screen for viewing the whole original scene as photographed, plus two or more monitor screens and a final image screen, is shown in Sheet 09.
51. All picture points with a spatial frequency value of 9 or more (on a one-digit basis) are switched to appear on Monitor 1. All points with values between 4 and 8 are switched to appear on Monitor 2. All points with values less than 4 are switched to appear on Monitor 3.
52. Using suitable delay circuits, each monitor will show parts of the scene with three separate grades of focus. Object A will appear on Monitor 1, surrounded by a dark area, Object B will appear on Monitor 2, surrounded by a dark area, and Object C will appear on Monitor 3.
53. If the operator of the editing device wishes to present a final fully focussed scene on transmission screen S, he will select Monitor 1 for direct transmission and Object A will appear at S in full focus.
54. The operator will then consider the numerical value of the spatial frequency of each point making up the blurred image on Monitor 2 and, by subjective decision, multiply those numbers by a suitable mathematical constant such that the picture then appearing on Monitor 2 is correctly in focus. Where the characteristics of the original camera are known, the operator's subjective opinion may be replaced with constants derived from the camera characteristics.
55. Similarly, but by the application of different multipliers, the image on Monitor 3 may be brought to acceptable focus.
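The editing step of paragraphs 51 to 55 could be simulated roughly as follows. This is a hedged sketch: the thresholds 9 and 4 come from paragraph 51, the spatial-frequency map is assumed to have been computed as in the sketch after paragraph 48, and adding back the scaled spatial frequency is only one reading of "multiplying those numbers by a suitable mathematical constant"; the constants themselves stand in for the operator's subjective choice.

```python
import numpy as np

def edit_frame(interior, spatial_freq, k_monitor2=0.5, k_monitor3=1.5):
    """Split the interior points into three grades of focus (Monitors 1-3)
    and boost the two blurred grades by adding back their spatial frequency
    scaled by operator-chosen constants."""
    output = np.asarray(interior, dtype=float).copy()
    magnitude = np.abs(spatial_freq)
    monitor2 = (magnitude >= 4) & (magnitude < 9)   # moderately blurred points
    monitor3 = magnitude < 4                        # most blurred points
    output[monitor2] += k_monitor2 * spatial_freq[monitor2]
    output[monitor3] += k_monitor3 * spatial_freq[monitor3]
    return output                                   # composite for screen S
```

Points already sharp enough for Monitor 1 pass to screen S unchanged, while larger constants give stronger artificial sharpening of the blurred grades, mirroring the operator control described in paragraph 57.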
56. Passing the signals from Monitor 1, together with those from Monitors 2 and 3 after mathematical enhancement, will produce a picture in full focus on screen S, for transmission or recording in conventional ways. Whilst three monitors are shown here, nine monitors would give higher definition; similarly, whilst one-digit numbers have been used to illustrate the principle, nine digits or more would be suitable for full colour television or cine film projection.
57. The use of the editor gives the operator control as to the degree of blur in any section of the image, i.e. blurred areas may be made sharp and sharp areas may be made blurred. The delay circuits may be varied from a time suitable for viewing one frame (say 1/50 second) to many minutes, hours or days according to the time needed for the operator's decision processes.

Claims (5)

Claims
1. A mechanism for selecting those parts of a photographic image that are out of focus and replacing them with in-focus images to build up a new composite whole frame image, in-focus at all points.
2. A means of dividing the image formed by a conventional lens system in cameras or other optical devices, into two or more sub-images, each sub-image being focussed on planes at different distances from the lens system.
3. A mechanism that equalises the geometry of the horizontal and vertical scales of the sub-images, and allots separate numerical or other values to each individual spatial co-ordinate of each sub-image.
4. A mechanism that forms mathematical equations from groups of two or more of the values in each of the sub-images in Claim 3, compares the curves of these equations and selects that curve that most closely coincides with a part of the main image that is in-focus.
5. A mechanism similar to Claim 4 but that selects that curve by subjective intervention of a human operator.
6. A switch mechanism triggered by the selected result of claim 4 or 5 that suppresses those areas of sub-images that are out-of-focus.
7. A mechanism for using the output of the above mechanisms to reform a single composite whole image for video or film reproduction.
8. A mechanism for monitoring the focussed and out-of-focus sections of an image, separating them and storing them for later consideration by a human operator, permitting the operator to modify mathematically and add to or detract from the original, such that out-of-focus sections of the original scene may be shown in-focus, or vice versa, to any degree chosen by the operator.
9. Means for producing all the above modified images on film, in monochrome or colour, x-ray, infra-red, ultra-violet and all other frequencies above and below the visible light range.
10. Means for producing all the above modified images as still or motion pictures, for recording on any material or transmission by electronic, wireless radio, cable or other means.
The camera including image processing circuitry is claimed in Application No. 7834670 from which the present application is divided.
Claims
1. An editor in which one or more video signals can be received from images stored on film or any material or transmitted by electronic, wireless radio, cable or other means, in colour, monochrome, x-ray, infra-red, ultra-violet and all other frequencies above and below the visible light range.
2. An editor as in Claim 1 in which the said one or more video signals can be displayed and the in-focus and out-of-focus portions of any of the said images may be identified.
3. An editor as in Claims 1 and 2 in which the video signal portions corresponding to the said identified out-of-focus image areas can be enhanced so that when they are further displayed they appear to be in-focus.
4. An editor as in any of the preceding claims in which the enhanced out-of-focus image areas resulting from Claim 3 can be displayed with any other in-focus or enhanced image area and reformed into a single whole image video signal.
5. A method of forming an image of a scene which is substantially in-focus at all points, comprising imaging the scene at a focal plane, generating a video signal of said image, identifying those portions of the video signal corresponding to the in-focus and out-of-focus image areas of said image, processing the video signal portions corresponding to the said identified out-of-focus image areas to enhance the sharpness thereof so that when displayed they appear to be in-focus, and displaying as a composite image the video signal corresponding to the in-focus image areas with the processed video signal portions corresponding to the out-of-focus image areas.
GB8019138A 1978-08-25 1978-08-25 Multi Planar Image Editor Withdrawn GB2055005A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB8019138A GB2055005A (en) 1978-08-25 1978-08-25 Multi Planar Image Editor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB8019138A GB2055005A (en) 1978-08-25 1978-08-25 Multi Planar Image Editor

Publications (1)

Publication Number Publication Date
GB2055005A true GB2055005A (en) 1981-02-18

Family

ID=10513983

Family Applications (1)

Application Number Title Priority Date Filing Date
GB8019138A Withdrawn GB2055005A (en) 1978-08-25 1978-08-25 Multi Planar Image Editor

Country Status (1)

Country Link
GB (1) GB2055005A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0441117A2 (en) * 1990-01-05 1991-08-14 Nikon Corporation Camera exposure calculation device
EP0441117A3 (en) * 1990-01-05 1992-02-05 Nikon Corporation Camera exposure calculation device
EP0438116A2 (en) * 1990-01-16 1991-07-24 Canon Kabushiki Kaisha Focus detection apparatus
EP0438116A3 (en) * 1990-01-16 1992-03-04 Canon Kabushiki Kaisha Focus detection apparatus
US5189465A (en) * 1990-01-16 1993-02-23 Canon Kabushiki Kaisha Focus detection apparatus detecting focus to a plurality of areas
US5311241A (en) * 1990-01-16 1994-05-10 Canon Kabushiki Kaisha Focus detecting apparatus detecting focus to a plurality of areas

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)