AU2008283765A1 - Method and software for transforming images - Google Patents

Method and software for transforming images

Info

Publication number
AU2008283765A1
Authority
AU
Australia
Prior art keywords
image
point
fixation
distance
disorder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2008283765A
Inventor
Andy Baker
Peter Hanik
David Hoskins
John Jupe
Simon Parish
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Atelier Vision Ltd
Original Assignee
Atelier Vision Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Atelier Vision Ltd filed Critical Atelier Vision Ltd
Publication of AU2008283765A1
Current legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20: Indexing scheme for editing of 3D models
    • G06T2219/2012: Colour editing, changing, or manipulating; Use of colour codes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20: Indexing scheme for editing of 3D models
    • G06T2219/2024: Style variation

Description

METHOD AND SOFTWARE FOR TRANSFORMING IMAGES

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] The present application is a continuation-in-part application which claims priority to provisional application No. 60/963,052, filed on August 2, 2007, and to application No. 10/551,290, filed on September 29, 2005, which is a US national stage application of international application No. PCT/GB2004/001,262, filed on March 25, 2004, which in turn claims priority to GB 0307307.9, filed on March 29, 2003, and GB 0328839.6, filed on December 12, 2003, each of which is incorporated by reference herein in its entirety.

FIELD OF INVENTION

[0002] This invention generally relates to the field of image processing and, more particularly but not by way of limitation, to techniques for enhancing the immersive qualities of representational media.

BACKGROUND OF THE INVENTION

[0003] The immersive qualities of an image may be influenced by the perception of depth within the image, the orientation of the observer with respect to the depiction of space within the image or proximity cues, the observer's awareness of the spatial relationships existing between objects forming part of the depicted scene, and the overall perception of the scene.

[0004] When three-dimensional (3D) scenes are displayed or depicted using conventional two-dimensional (2D) techniques, such as by printing the scene on to paper, displaying it on a monitor, and other methods, there may be occasions when the perception of depth and form in the displayed image is not particularly good, even though the brain perceives the displayed images as being to some extent three-dimensional in nature. This could be caused by the absence of sufficient monocular or perceptually created cues in the image to allow the brain to interpret the displayed image meaningfully. Approaches such as stereoscopic imaging, ray casting, or ray tracing may be utilized to improve depth perception in images. However, these approaches may require significant computational resources, special equipment, or the like.

[0005] Thus the invention may produce images that more accurately incorporate monocular capabilities based on correctly rendering the structure of central and peripheral vision. These improvements may take representational media closer to the structure of the phenomenon of vision and reflect perceptual structure.

SUMMARY OF THE INVENTION

[0006] One aspect of the disclosure provides a method for processing an image comprising the steps of selecting a fixation point in an image, wherein the fixation point is a focal point of the image, and selecting a fixation region in the image, wherein the fixation region comprises a volume around the fixation point. Further, the image is disordered outside the fixation region as a function of a distance.

[0007] Yet another aspect of the disclosure provides a computer-readable medium having computer-executable instructions for performing a method for processing an image. The method comprises the steps of selecting a fixation point in an image, wherein the fixation point is a focal point of the image, and selecting a fixation region in the image, wherein the fixation region comprises a volume around the fixation point. Further, the image is disordered outside the fixation region as a function of a distance.
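[Editor's illustration] The method summarized above can be sketched in a few lines of Python with NumPy and SciPy. This is a minimal sketch under our own assumptions (a circular fixation region of radius r_fix, a manually chosen fixation point, and a random pixel rearrangement whose magnitude grows with radial distance), not the patented implementation; all names and parameters are illustrative.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def disorder_outside_fixation(image, fixation, r_fix, r_max, max_shift=6.0, seed=0):
        # image    : (H, W) or (H, W, C) float array in picture space
        # fixation : (y, x) fixation point in pixels
        # r_fix    : radius of the ordered fixation region
        # r_max    : radius at which disorder reaches its maximum
        h, w = image.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        dist = np.hypot(yy - fixation[0], xx - fixation[1])

        # Disorder weight: 0 inside the fixation region, ramping to 1 at r_max
        # and clamped there, so objects beyond r_max show maximum disorder.
        weight = np.clip((dist - r_fix) / max(r_max - r_fix, 1e-6), 0.0, 1.0)

        # Rearrange pixels with a random (Gaussian) displacement scaled by the weight.
        rng = np.random.default_rng(seed)
        dy = rng.standard_normal((h, w)) * weight * max_shift
        dx = rng.standard_normal((h, w)) * weight * max_shift
        coords = [yy + dy, xx + dx]

        if image.ndim == 2:
            return map_coordinates(image, coords, order=1, mode='nearest')
        return np.stack([map_coordinates(image[..., c], coords, order=1, mode='nearest')
                         for c in range(image.shape[-1])], axis=-1)

A flat 2D Euclidean distance stands in here for the radial depth map of FIG. 3; the sketches accompanying the detailed description below refine the individual stages.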
BRIEF DESCRIPTION OF THE DRAWINGS

[0008] For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:

[0009] FIG. 1 provides an illustrative implementation of a picture to be enhanced;
[0010] FIG. 2 provides an illustrative implementation of a fixation point and fixation volume of a picture;
[0011] FIG. 3 provides an illustrative implementation of a radial depth map of a picture as a gray scale image;
[0012] FIG. 4 provides an illustrative implementation of a transformed image in vision space;
[0013] FIG. 5 provides an illustrative implementation of a penumbra around a fixation volume;
[0014] FIG. 6 provides an illustrative implementation of an image stretched in the X direction;
[0015] FIG. 7 provides an illustrative implementation of a gray scale image with a small maximum radius for disorder;
[0016] FIG. 8 provides an illustrative implementation of a gray scale image with a large maximum radius for disorder;
[0017] FIG. 9 provides an illustrative implementation of an area of low disorder;
[0018] FIG. 10 provides an illustrative implementation of a gray scale image with an area of high disorder;
[0019] FIG. 11 provides an illustrative implementation of a random disorder pattern;
[0020] FIG. 12 provides an illustrative implementation of a swim disorder pattern;
[0021] FIG. 13 provides an illustrative implementation of a blur disorder pattern;
[0022] FIG. 14 provides an illustrative implementation of lines of occlusion and occlusion distance;
[0023] FIG. 15 provides an illustrative implementation of an image stretched in the Y direction;
[0024] FIG. 16 provides an illustrative implementation of an image rotated around the fixation point; and
[0025] FIG. 17 provides an illustrative implementation of a final enhanced image.

DETAILED DESCRIPTION OF THE INVENTION

[0026] In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. However, it will be obvious to those skilled in the art that the present invention may be practiced without such specific details. In other instances, well known components have been shown in a simplified form in order not to obscure the present invention in unnecessary detail. Some details may be omitted inasmuch as such details are not necessary to obtain a complete understanding of the present invention and are within the skills of persons of ordinary skill in the relevant art.

[0027] Vision space, as distinguished from picture space, may seek to mimic perceptual structure, or the structure of the actual phenomenon of vision. Vision space may acknowledge the individual nature and specialization of peripheral vision and central vision. Vision space may also recognize that brain functions 'create' important facets of visual perception by making 'relativistic judgments' between the two. One aspect of novelty of this approach may be the realization that these new cues rely on as yet unappreciated monocular cues and the composition of monocular projection. These monocular cues may distinguish this method of achieving proximity judgment or saliency of the image from other techniques that rely on stereo cues deriving from effects based on binocular disparity.
In addition, these techniques may impart important orientation cues that may be utilized to factor the observer into the depiction of reality presented in the 2D media. The inclusion of this range of visual cues can be used to improve proximity judgments and to increase the immersiveness of all forms of representational media.

[0028] By correctly representing and aligning the two presentations used in monocular vision by the brain to compose more meaningful images (the projection known as vision), it is possible to induce the brain to replicate depth and form perceptions that could otherwise only be created by directly observing the 'real setting' from which the pictorial information derived. Image enhancement may be achieved by selecting a fixation point and disordering the image, centering the disordering operation around the fixation point. In addition, the original image may be stretched vertically and/or horizontally, rotated, blurred, and modified in a variety of ways. The modified image may replicate the flow of visual information from the eye, including peripheral and central vision respectively, and may also mimic the final presentation of vision created from information received by the eye.

[0029] Vision space can be used in computer generated (CG) media, in virtual reality (VR) media working in 3D to output vision space media directly, in post-production transformation of 2D media, in real time as media is captured, and the like. For example, for post-production work on 2D media, information representing central vision may be isolated and processed separately from information representing peripheral vision. Once the image processing is performed, the information representing central and peripheral vision may be combined. When the two sets of information are combined, corresponding information from the disordered set of information forming a representation of peripheral vision may be removed, modulated, or juxtaposed. This process ensures that unwanted artifacts may be removed from the media when the final combined representation is created, or that related elements of the image may be co-presented. Further, it may be necessary to place a third layer of information from the 'X' stretched data behind the 'Y' stretched data to ensure that any void data areas are replenished with correct information from the scene. This method for converting information to vision space is further discussed herein with reference to the drawings.

[0030] In order for an image provided in picture space to be converted into vision space, image processing, such as disordering, rotating, stretching, or the like, may be performed. In one implementation of the invention, the image processing discussed herein may be performed by a processor or the like in accordance with a program provided on a computer-readable medium. FIG. 1 provides an illustrative implementation of a picture to be enhanced. The image of a butterfly and other surrounding items may be a graphic design depicted in traditional picture space. The image may provide limited relative spatial information and saliency based on visual cues. For example, the chair appears to be in front of the cabinet and the wall because the chair obscures part of these objects. Similarly, visual cues may indicate that the cereal box is in front of the wall. However, the butterfly may have very few visual cues contained in the representation. It might appear that the butterfly is in front of the wall, but it is possible that the butterfly is a drawing on one of the tiles in the wall. From the image it is difficult to tell with certainty whether the butterfly is in front of the wall and, if so, by how much. The butterfly could be above or below the table and it could be in front of or behind the cabinet. In some situations, picture space may not provide sufficient spatial information, or correctly segment information, to facilitate these judgments.
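[Editor's illustration] The two-data-set merge described in paragraph [0029] can be pictured as a simple masked composite. This sketch assumes the central and peripheral layers have already been processed separately and that a soft mask (1 inside the fixation volume) decides which layer survives at each pixel; the helper name and mask convention are our own.

    import numpy as np

    def combine_central_peripheral(central, peripheral, mask):
        # central, peripheral : (H, W, C) float arrays processed separately
        # mask                : (H, W) floats in [0, 1]; 1 = fixation volume.
        # Where the mask is 1 the central layer wins; elsewhere the disordered
        # peripheral layer shows through, limiting overlap artifacts.
        alpha = mask[..., None]
        return alpha * central + (1.0 - alpha) * peripheral

Feathering the mask (for example with a Gaussian blur) gives the penumbra-style transition discussed later with FIG. 5, and a third, X-stretched layer could be composited underneath to backfill void areas, as paragraph [0029] suggests.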
[0031] FIG. 2 provides an illustrative implementation of a fixation point and fixation volume of a picture. At the beginning of the enhancement process to transform the butterfly in picture space to an image in vision space, a fixation point 1 and fixation volume 2 may be selected. For instance, a fixation point 1 may be manually selected by an observer, automatically selected utilizing a perceptive user interface (PUI) (e.g. an eye tracking device), or selected utilizing any other suitable method. The butterfly may be selected as the object of fixation and a fixation point 1 may be defined on the butterfly. A region including one or more points, areas, or a volume surrounding the butterfly may be selected as the fixation volume 2. The fixation volume 2 may include all objects that are to be represented in central vision. This region may be as small or as large as desired. No minimum size is required and the fixation volume 2 may be identical with the fixation point 1. For instance, the fixation volume 2 may be coincident with the fixation point 1, a two dimensional (2D) area around the fixation point 1, or a three dimensional (3D) volume of space surrounding the fixation point 1. This process may help to segment an object from the space in which it sits.

[0032] FIG. 3 provides an illustrative implementation of a radial depth map of a picture as a gray scale image. A normal depth map may provide depth data from the camera plane (front) to the furthest object in the scene (back). A radial depth map may recalculate the depth data to propagate out from a designated location within the image. When converting an image from picture space to vision space, the degree of disorder applied to the image may be increased as radial distance from the fixation point increases. A radial depth map of an image may be utilized to determine distance from a designated location within the image. Darker areas 3 in the gray scale image may be further away from the designated location (e.g. a fixation point) and may represent areas where a higher level of disorder should be applied. Lighter areas 4 in the image may be closer to the fixation point and represent areas where a lower level of disorder should be applied. This variation in disorder may simulate the disorder that naturally occurs in human vision at distances away from a fixation point. Additionally, there may be other factors that affect the fall-off in the radial disorder field in addition to distance from the fixation, such as a variable self-similar fractal pattern utilized to incrementally disorder the image. The degree to which the variable fall-off is deployed may also be dependent on the distance an individual is from the presentation screen, the size of the presentation screen, and the angle of camera shot used in the representation. It may be noted that the radial disorder discussed herein is distinguished from the decrease in sharpness outside the depth of field in an image or film, which is a purely optical effect resulting from the fixed focus of a camera lens.
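[Editor's illustration] Paragraph [0032] recalculates camera-plane depth as radial distance from the fixation point. A minimal sketch under a pinhole-camera assumption (principal point at the image centre, focal length in pixels; both are our assumptions, not stated in the patent):

    import numpy as np

    def radial_depth_map(depth, fixation, focal_px):
        # depth    : (H, W) camera-plane depth (front-to-back, as in a normal depth map)
        # fixation : (y, x) pixel location of the fixation point
        # focal_px : assumed pinhole focal length in pixels
        h, w = depth.shape
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0

        # Back-project every pixel to a 3D point, then measure straight-line
        # distance from the 3D point under the fixation pixel.
        X = (xx - cx) * depth / focal_px
        Y = (yy - cy) * depth / focal_px
        fy, fx = int(fixation[0]), int(fixation[1])
        Xf = (fx - cx) * depth[fy, fx] / focal_px
        Yf = (fy - cy) * depth[fy, fx] / focal_px
        radial = np.sqrt((X - Xf) ** 2 + (Y - Yf) ** 2 + (depth - depth[fy, fx]) ** 2)
        return radial / radial.max()  # 0 at the fixation point, 1 furthest away

Mapped to gray scale with white at 0 and black at 1, this reproduces the lighter areas 4 near the fixation point and darker areas 3 further away described above.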
[0033] FIG. 4 provides an illustrative implementation of a transformed image in vision space. The vision space image illustrates the butterfly and surrounding objects after certain transformations discussed herein have been performed. Notice that the observer may now be capable of recognizing that the butterfly is in front of the wall, and in front of the cabinet as well. Further, the observer may notice that the butterfly can be identified as above the table. This additional spatial and orientation information about the scene may allow the eye and brain to make a new range of spatial judgments and may create a more accurate perception of the butterfly and its surroundings. In addition, in moving images it may become possible in vision space media to anticipate the flight of the butterfly in spatial terms. In picture space, these spatial judgments may merely be guesses made from secondary information such as occlusion cues, direction of travel, cast shadows, and other such cues.

[0034] As shown in FIG. 2, the fixation point and fixation volume may be defined and clearly delineated. However, human vision typically does not provide a clear delineation between central and peripheral vision. Instead, there may be a transition or transformation between central and peripheral vision. FIG. 5 provides an illustrative implementation of a penumbra around a fixation volume. This visual phenomenon may be simulated by providing a penumbra 5 as a transition volume around the fixation volume. The same procedure, but in inverse, can be engineered in the peripheral data set for the remaining area outside of the fixation volume by providing a two-way merge between the data sets.

[0035] FIG. 6 provides an illustrative implementation of an image stretched in the X direction. The image may be stretched 6 in the X direction outside the portion of the image outlined by the fixation volume. This may be done to further distinguish and segment the region of central vision, which is unstretched 7, from the peripheral region, or the region outside of central vision.

[0036] FIG. 7 provides an illustrative implementation of a gray scale image. The fixation point may be white and the objects may become darker as the distance of objects from the fixation point increases. Further, the degree of disorder incorporated in the final vision space image increases as the gray scale image becomes darker. At the distance where the gray scale image becomes black, maximum disorder may occur, and objects beyond this distance may be shown with maximum disorder. The distance of maximum disorder may be adjusted to achieve the desired effect in the final vision space image. FIG. 7 shows a relatively small radius defining the distance of maximum disorder 8. FIG. 8 provides an illustrative implementation of a gray scale image with a large maximum radius for disorder 9. Notice that the cabinet and the bottle, which were black in FIG. 7, may be significantly lighter in the gray scale image when the radius defining the distance of maximum disorder increases.
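[Editor's illustration] The gray-scale control images of FIGS. 7 and 8 amount to clamping the radial distance at an adjustable maximum-disorder radius. A sketch, continuing the illustrative helpers above:

    import numpy as np

    def disorder_control_map(radial, r_max):
        # radial : radial distance map (e.g. from radial_depth_map above)
        # r_max  : distance of maximum disorder; beyond it everything is black.
        # Returns 1.0 (white) at the fixation point, falling to 0.0 (black) at r_max.
        return 1.0 - np.clip(radial / r_max, 0.0, 1.0)

A small r_max (FIG. 7) pushes the cabinet and bottle to black, i.e. maximum disorder; enlarging r_max (FIG. 8) leaves them lighter and less disordered.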
[0037] Similarly, the degree of maximum disorder within any region in peripheral space can be adjusted to achieve the desired effect in the final vision space image. Disorder may be created by perturbing an image or distorting spatial information. In picture space, disorder may be created by rearranging pixels in a specific manner within certain constraints, such as by moving pixels around utilizing random Gaussian fields, to form a swim disorder pattern, to form a blur disorder pattern, or the like. FIG. 9 provides an illustrative implementation of an area of low disorder. FIG. 10 provides an illustrative implementation of a gray scale image with an area of high disorder. Different types of disorder may also be employed to achieve the desired effect in the final vision space image. FIG. 11 provides an illustrative implementation of a random disorder pattern. FIG. 12 provides an illustrative implementation of a swim disorder pattern. FIG. 13 provides an illustrative implementation of a blur disorder pattern. Any disorder/noise or stylized texture pattern may be selected to achieve the desired or preferred visual effect. Different individuals may prefer or respond differently to different disorder patterns. It is possible to apply the disorder through a vector field that is larger than or the same size as the representation and dependent on the degree of camera movement, through a 3D or environmental vector field, or, if using a form of random noise or textures, directly (i.e. there may be no need for the fall-off vector fields). The disorder can be organized to modulate between frames, such as a frame-by-frame reset of the disorder pattern irrespective of movement of the camera or the object held in fixation, or to be static between frames, such that changes appear in the disorder only if either the camera or the object of fixation moves. There may also be a mixture of the two functions to provide a rendering of the disorder pattern over time that is sympathetic, pleasing, or unobtrusive to the viewer. While several potential methods of creating disorder are discussed herein, the scope of the claims is in no way limited to the specific methods discussed. Any suitable method of creating disorder known to one of ordinary skill in the art may be utilized unless the claims specifically limit the disorder methods.

[0038] A further application of the techniques may be formulated for use with real-time applications. Here the principles of the invention can be organized in such a way that output from the real-time engine may be directly formatted as enhanced media in vision space. Real-time engines employing the techniques of the invention may include applications in virtual reality (VR), simulators, video games, and the like, and the techniques are not limited to media that may be subjected to post-production activities (e.g. film, animation, video programming, etc.).

[0039] While disordering an image in peripheral vision, a selected area for disorder may include the edge of one or more objects located separately in space, or between an object and a background surface. If this area is disordered without giving consideration to the spatial location of all objects, the result may be misleading disorder levels at these edges with respect to background surfaces. In human vision, an observer may be capable of perceiving well defined, spatially adjusted disorder levels for object boundaries and edges. To make this compensation in the final vision space image, such that the relative sharpness of the edges may be visible on objects, lines of occlusion may be defined. FIG. 14 provides an illustrative implementation of lines of occlusion and occlusion distance. In a gray scale depth map image, the area between the lines may indicate the extent of space within the depiction where disorder at occluded edges of objects within the area may be treated as a group and may affect one another. If an object appears outside this demarcated area, it may be subjected to occlusion control, where its spatial proximity becomes relevant and influences the degree of disorder appearing at perimeter boundaries. The level of disorder associated with a further object may be prevented from influencing the degree of disorder apparent on the edge of the closer object. The distance 11 between the lines of occlusion 10 can be extended or reduced to control the sensitivity of the occlusion control. Alternative methods for controlling this facet of the invention in 2D images could be developed. The occlusion adjustment may be made one-way, where disorder is applied to edges dependent on the distance from the fixation point, or two-way, where all edges remain sharp within the demarcated area.
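[Editor's illustration] Of the disorder patterns in paragraph [0037], the 'swim' pattern can be approximated by displacing pixels along a smoothed random vector field. This sketch is one plausible reading, with the smoothing scale and amplitude as our own knobs; re-seeding per frame gives the frame-by-frame reset behaviour, while a fixed seed gives the static-between-frames behaviour described above.

    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def swim_disorder(image, weight, amplitude=8.0, smoothness=15.0, seed=0):
        # weight     : (H, W) disorder strength in [0, 1] (0 = fixation region)
        # amplitude  : peak displacement in pixels where weight == 1
        # smoothness : Gaussian sigma of the field; larger gives broader 'swim'
        h, w = image.shape[:2]
        rng = np.random.default_rng(seed)
        dy = gaussian_filter(rng.standard_normal((h, w)), smoothness)
        dx = gaussian_filter(rng.standard_normal((h, w)), smoothness)
        # Normalise so displacement peaks at `amplitude` pixels.
        norm = max(np.abs(dy).max(), np.abs(dx).max(), 1e-6)
        dy = dy / norm * amplitude * weight
        dx = dx / norm * amplitude * weight

        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        coords = [yy + dy, xx + dx]
        if image.ndim == 2:
            return map_coordinates(image, coords, order=1, mode='nearest')
        return np.stack([map_coordinates(image[..., c], coords, order=1, mode='nearest')
                         for c in range(image.shape[-1])], axis=-1)

A blur disorder pattern would instead vary a Gaussian blur radius with the same weight map, and the occlusion control of paragraph [0039] would additionally mask the displacement so a distant object's disorder cannot bleed across a nearer object's edge.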
[0040] FIG. 15 provides an illustrative implementation of an image stretched in the Y direction. In particular, the peripheral space of the image may be stretched 12 in the Y direction. This stretching 12 may help to further distinguish objects in the volume of central vision included in the fixation volume from the objects outside the fixation volume in peripheral space. In contrast, a region including the fixation volume remains unstretched 13 in FIG. 15.

[0041] FIG. 16 provides an illustrative implementation of an image rotated around the fixation point. Objects outside of the fixation volume may be rotated 14. For instance, objects may be rotated clockwise as in the implementation shown. This rotation 14 may be a further example of the creative segmentation of central vision and peripheral vision and can be applied to representational media. The rotation 14 could simulate the visual effect that results from the dominance of the right eye or left eye in humans. Since the majority of people are right eye dominant, most people may prefer a clockwise rotation of the peripheral space in the final vision space image. However, left eye dominant people may prefer a counterclockwise rotation. In human vision, the rotation may change as an observer blinks. This modulation of the rotation can be replicated in moving image media.

[0042] In human vision, an observer does not see a frame around the space viewed. However, in picture space, the viewing area of the image may be delineated by a frame or edge. The presence of this frame may negatively affect the vision space effect by disrupting the increasing disorder pattern further away from the fixation point. In order to minimize this negative effect on vision space, an area of disorder may be provided around the border of the image which transitions from the solid color of the frame to the disordered area in the peripheral space. The color of the frame can be changed to reflect the overall color scheme of the image and create a smoother transition. FIG. 17 provides an illustrative implementation of a final enhanced image in vision space. The enhanced image produced may further increase the saliency and perceived reality of the original image. In one implementation, the disorder apparent at the border 15 of the image may be detected and may dictate the transition to solid color, which would further obviate the influence of a frame in representational media.
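[Editor's illustration] The peripheral rotation of FIG. 16 is an affine resampling about the fixation point rather than the frame centre. A sketch follows; the sign convention for 'clockwise' and the idea of blending the result back through the fixation mask are our assumptions.

    import numpy as np
    from scipy.ndimage import affine_transform

    def rotate_about_fixation(image, fixation, degrees):
        # fixation : (y, x) pivot of the rotation
        # degrees  : small angle; positive assumed clockwise on screen here
        t = np.deg2rad(degrees)
        # Inverse map: each output pixel o samples the input at R @ (o - p) + p.
        R = np.array([[np.cos(t), -np.sin(t)],
                      [np.sin(t),  np.cos(t)]])
        p = np.asarray(fixation, dtype=float)
        offset = p - R @ p
        if image.ndim == 2:
            return affine_transform(image, R, offset=offset, order=1, mode='nearest')
        return np.stack([affine_transform(image[..., c], R, offset=offset,
                                          order=1, mode='nearest')
                         for c in range(image.shape[-1])], axis=-1)

Compositing the rotated periphery (or a Y-stretched one, using a diagonal scaling matrix in place of R) back over the untouched central layer through the fixation mask keeps the fixation volume itself unrotated, as the paragraph above requires.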
[0043] In one implementation, a single data set separated into central and peripheral regions may be utilized. However, in other implementations, specialized configurations using two or more data sets at any one time across the media can be implemented. Each region may be transformed to simulate the differentiation between human central and peripheral vision. By utilizing two data sets, where central vision is included in one data set and peripheral vision is included in a second data set, the two data sets may be merged/combined in various compositions. The technique of using two data sets may be convenient for use in computer programs because the two data sets can be independently streamed and transformed. However, when transformations are made using two data sets that are later combined, artifacts can be present at the point where the two data sets overlap. These artifacts may appear as a double image and may be discordant to some individuals, especially in moving images created in vision space. A further enhancement of the two data set technique can be achieved by cutting out an area outside the fixation volume in data set 1 (i.e. central vision) or cutting out an area in the fixation volume in data set 2 (i.e. peripheral vision) and then merging the two data sets. When the two images are combined, there may be a limited useful interface and hence less adverse double referencing.

[0044] In other implementations, it may also be possible to set out a disorder field from the individual observer or camera position instead of from a fixated object appearing in the field of view. The 3D disorder field can be warped or centered on any point on or outside the field of view to achieve variable spatial effects.

[0045] In addition, it may be possible to generate special effects by 'misusing' the parameters identified as representing 'normal' viewing conditions. For example, it may be possible to cut out forms from the media and to attribute to these forms an 'unnatural' disorder reference with respect to the selected fixation point. This can have the effect of an observer misunderstanding the spatial position of said forms within the representation. Careful manipulation of the disorder field can cause other cues, such as occlusion cues, to be overridden in preference for disorder references indicating spatial proximity.

[0046] Video can be transformed into vision space by transforming the sequence of individual images that make up the moving picture. In this case it may be necessary to track the fixation point as each frame in the video is advanced. This may be accomplished by defining a group of pixels on an object as a tracking point. The fixation point may be moved by interpolation between the fixation point in an early image in the sequence and a later image in the sequence. Film editors may be given the flexibility to define the fixation point manually by using a touch screen, mouse, or similar positioning device to track the fixation point while the video is played back. Another technique involves the use of eye tracing/tracking techniques to determine where a viewer's vision is fixating while viewing the original video. The eye tracking technique may be used to define the fixation point for every frame in the video. While this could have applications in post-production media, the application of eye-tracking technology may be best suited to real-time media. Other data/information pertaining to camera position (and camera movement) could be useful in managing/controlling the application of the disorder field.
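[Editor's illustration] For the video workflow of paragraph [0046], interpolating the fixation point between editor-defined keyframes is straightforward. This sketch assumes simple linear interpolation; a pixel-group tracker or eye-tracking data could supply the per-frame points instead.

    import numpy as np

    def interpolate_fixation(keyframes, n_frames):
        # keyframes : dict {frame_index: (y, x)} of manually chosen fixation points
        # n_frames  : total frame count; returns an (n_frames, 2) array of (y, x)
        frames = sorted(keyframes)
        ys = [keyframes[f][0] for f in frames]
        xs = [keyframes[f][1] for f in frames]
        t = np.arange(n_frames)
        return np.stack([np.interp(t, frames, ys), np.interp(t, frames, xs)], axis=1)

Each interpolated point then drives the per-frame disorder field, with the seed policy from the swim sketch above controlling whether the pattern resets or stays static between frames.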
[0047] All patents and publications referenced herein are hereby incorporated by reference. It will be understood that certain of the above-described structures, functions, and operations of the above-described implementations are not necessary to practice the present invention and are included in the description simply for completeness of exemplary or additional implementations. In addition, it will be understood that specific structures, functions, and operations set forth in the above-described referenced patents and publications can be practiced in conjunction with the present invention, but they are not essential to its practice. It is therefore to be understood that the invention may be practiced otherwise than as specifically described without actually departing from the spirit and scope of the present invention as defined by the appended claims.

Claims (19)

1. A method for processing an image comprising the steps of: selecting a fixation point in an image, wherein the fixation point is a focal point of the image; selecting a fixation region in the image, wherein the fixation region comprises a volume around the fixation point; and disordering the image outside the fixation region as a function of a distance.
2. The method of claim 1, wherein the fixation region is a point coincident with the fixation point, a two dimensional area around said fixation point, or a three dimensional volume around said fixation point.
3. The method of claim 1, wherein a radius of maximum disorder indicates a maximum disorder distance at which maximum disorder in the image occurs.
4. The method of claim 1, wherein the distance is the distance from the fixation point to a point in the image.
5. The method of claim 1, wherein the distance is the distance from an observation point representing the point of view of an observer to a point in the image.
6. The method of claim 1 further comprising the step of: blurring, altering contrast, altering color saturation, altering brightness, or a combination thereof in the image as a function of the distance.
7. The method of claim 1, wherein disordering the image comprises a rearrangement of pixels in the image.
8. The method of claim 7, wherein the rearrangement of the pixels is performed utilizing a self-similar fractal pattern.
9. The method of claim 1, wherein the fixation point is defined by a small group of pixels.
10. The method according to claim 1, wherein the image is a moving image comprising a plurality of images and the processing is performed on each of the plurality of images.
11. A computer-readable medium having computer-executable instructions for performing a method for processing an image, the method comprising: selecting a fixation point in an image, wherein the fixation point is a focal point of the image; selecting a fixation region in the image, wherein the fixation region comprises a volume around the fixation point; and disordering the image outside the fixation region as a function of a distance.
12. The computer-readable medium of claim 11, wherein the fixation region is a point coincident with the fixation point, a two dimensional area around said fixation point, or a three dimensional volume around said fixation point.
13. The computer-readable medium of claim 11, wherein the distance is the distance from the fixation point to a point in the image.
14. The computer-readable medium of claim 11, wherein the distance is the distance from an observation point representing the point of view of an observer to a point in the image.
15. The computer-readable medium of claim 11, wherein a radius of maximum disorder indicates a maximum disorder distance at which maximum disorder in the image occurs.
16. The computer-readable medium of claim 11 further comprising the step of: blurring, altering contrast, altering color saturation, altering brightness, or a combination thereof in the image as a function of the distance.
17. The computer-readable medium of claim 11, wherein disordering the image comprises a rearrangement of pixels in the image.
18. The computer-readable medium of claim 17, wherein the rearrangement of the pixels is performed utilizing a self-similar fractal pattern.
19. The computer-readable medium of claim 11, wherein the fixation point is defined by a small group of pixels.
20. The computer-readable medium according to claim 11, wherein the image is a moving image comprising a plurality of images and the processing is performed on each of the plurality of images.
AU2008283765A 2007-08-02 2008-08-01 Method and software for transforming images Abandoned AU2008283765A1 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US96305207P 2007-08-02 2007-08-02
US60/963,052 2007-08-02
US55129007A 2007-09-29 2007-09-29
US10/551,290 2007-09-29
PCT/US2008/072041 WO2009018557A1 (en) 2007-08-02 2008-08-01 Method and software for transforming images

Publications (1)

Publication Number Publication Date
AU2008283765A1 (en)

Family

ID=40304929

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2008283765A Abandoned AU2008283765A1 (en) 2007-08-02 2008-08-01 Method and software for transforming images

Country Status (3)

Country Link
AU (1) AU2008283765A1 (en)
DE (1) DE112008002083T5 (en)
WO (1) WO2009018557A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2566280B (en) 2017-09-06 2020-07-29 Fovo Tech Limited A method of modifying an image on a computational device
GB2566276B (en) * 2017-09-06 2020-07-29 Fovo Tech Limited A method of modifying an image on a computational device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB307307A (en) 1928-03-02 1930-06-04 Ig Farbenindustrie Ag Process for the manufacture of amino-substituted tertiary alcohols
GB328839A (en) 1929-04-05 1930-05-08 Oerlikon Maschf Surge protection arrangement for electric plants of low operating voltage
US5973700A (en) * 1992-09-16 1999-10-26 Eastman Kodak Company Method and apparatus for optimizing the resolution of images which have an apparent depth
US20030076413A1 (en) * 2001-10-23 2003-04-24 Takeo Kanade System and method for obtaining video of multiple moving fixation points within a dynamic scene
US7163192B2 (en) 2002-06-20 2007-01-16 Kitz Corporation Actuator for valve

Also Published As

Publication number Publication date
DE112008002083T5 (en) 2010-10-28
WO2009018557A1 (en) 2009-02-05

Similar Documents

Publication Publication Date Title
JP6873096B2 (en) Improvements in and on image formation
US8711204B2 (en) Stereoscopic editing for video production, post-production and display adaptation
EP3035681B1 (en) Image processing method and apparatus
US9445072B2 (en) Synthesizing views based on image domain warping
JP4766877B2 (en) Method for generating an image using a computer, computer-readable memory, and image generation system
CN107079079B (en) For shaking the both-end metadata of visual control
CN105894567B (en) Scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene
US11659158B1 (en) Frustum change in projection stereo rendering
US9681122B2 (en) Modifying displayed images in the coupled zone of a stereoscopic display based on user comfort
Berning et al. A study of depth perception in hand-held augmented reality using autostereoscopic displays
WO2014121108A1 (en) Methods for converting two-dimensional images into three-dimensional images
Zhong et al. Reproducing reality with a high-dynamic-range multi-focal stereo display
US10210654B2 (en) Stereo 3D navigation apparatus and saliency-guided camera parameter control method thereof
EA013779B1 (en) Enhancement of visual perception
US9897806B2 (en) Generation of three-dimensional imagery to supplement existing content
DE102015223003A1 (en) Device and method for superimposing at least a part of an object with a virtual surface
JP2011529285A (en) Synthetic structures, mechanisms and processes for the inclusion of binocular stereo information in reproducible media
AU2008283765A1 (en) Method and software for transforming images
JP2017163373A (en) Device, projection device, display device, image creation device, methods and programs for these, and data structure
Ardouin et al. Design and evaluation of methods to prevent frame cancellation in real-time stereoscopic rendering
AU2004226624B2 (en) Image processing
GB2548080A (en) A method for image transformation
Dąbała et al. Simulated holography based on stereoscopy and face tracking
Berning et al. Improving Depth Perception for Hand-held Augmented Reality using Autostereoscopic Displays

Legal Events

Date Code Title Description
MK1 Application lapsed section 142(2)(a) - no request for examination in relevant period