WO2009018557A1 - Method and software for transforming images - Google Patents

Method and software for transforming images

Info

Publication number
WO2009018557A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
fixation
point
distance
disorder
Prior art date
Application number
PCT/US2008/072041
Other languages
English (en)
French (fr)
Inventor
Andy Baker
Peter Hanik
David Hoskins
John Jupe
Simon Parish
Original Assignee
Atelier Vision Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2007-08-02
Filing date: 2008-08-01
Publication date: 2009-02-05
Application filed by Atelier Vision Limited
Priority to AU2008283765A (published as AU2008283765A1)
Priority to DE112008002083T (published as DE112008002083T5)
Publication of WO2009018557A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2012 Colour editing, changing, or manipulating; Use of colour codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2024 Style variation

Definitions

  • This invention generally relates to the field of image processing and, more particularly but not by way of limitation, to techniques for enhancing the immersive qualities of representational media.
  • The immersive qualities of an image may be influenced by the perception of depth within the image, the orientation of the observer with respect to the depiction of space within the image, proximity cues, the observer's awareness of the spatial relationships existing between objects forming part of the depicted scene, and the observer's overall perception of the scene.
  • the invention may produce images that more accurately incorporate monocular capabilities based on correctly rendering the structure of central and peripheral vision. These improvements may take representational media closer to the structure of the phenomenon of vision and reflect perceptual structure.
  • One aspect of the disclosure provides a method for processing an image comprising the steps of selecting a fixation point in the image, wherein the fixation point is a focal point of the image, and selecting a fixation region in the image, wherein the fixation region comprises a volume around the fixation point. Further, the image is disordered outside the fixation region as a function of distance.
  • Yet another aspect of the disclosure provides a computer-readable medium having computer-executable instructions for performing a method for processing an image.
  • The method for processing an image comprises the steps of selecting a fixation point in the image, wherein the fixation point is a focal point of the image, and selecting a fixation region in the image, wherein the fixation region comprises a volume around the fixation point. Further, the image is disordered outside the fixation region as a function of distance.
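  • For purposes of illustration only, the following is a minimal sketch (in Python with NumPy/SciPy, which are not part of this disclosure) of the basic steps described above: a fixation point is selected, the fixation region is approximated by a circular radius around that point, and the image is disordered outside the region as a function of distance. A distance-dependent blur stands in for the disorder pattern here; the function name, parameters, and values are illustrative assumptions rather than features of the disclosure.

```python
# Minimal sketch: disorder grows with distance from the fixation point.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def disorder_outside_fixation(image, fixation_xy, fixation_radius,
                              max_disorder_dist, max_sigma=8.0, levels=4):
    """image            -- float array (H, W, 3), values in [0, 1]
    fixation_xy      -- (x, y) pixel coordinates of the fixation point
    fixation_radius  -- radius (pixels) of the fixation region left undisordered
    max_disorder_dist-- distance (pixels) at which disorder reaches its maximum"""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - fixation_xy[0], ys - fixation_xy[1])

    # Disorder weight: 0 inside the fixation region, rising to 1 at max_disorder_dist.
    weight = np.clip((dist - fixation_radius) /
                     (max_disorder_dist - fixation_radius), 0.0, 1.0)

    # Precompute progressively blurred copies and blend between adjacent levels
    # per pixel (a blur disorder pattern standing in for any disorder pattern).
    sigmas = np.linspace(0.0, max_sigma, levels)
    blurred = [image] + [gaussian_filter(image, sigma=(s, s, 0)) for s in sigmas[1:]]

    idx = weight * (levels - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, levels - 1)
    frac = (idx - lo)[..., None]

    stack = np.stack(blurred)  # (levels, H, W, 3)
    return (1.0 - frac) * stack[lo, ys, xs] + frac * stack[hi, ys, xs]
```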
  • FIG. 1 provides an illustrative implementation of a picture to be enhanced
  • FIG. 2 provides an illustrative implementation of a fixation point and fixation volume of a picture
  • FIG. 3 provides an illustrative implementation of a radial depth map of a picture as a gray scale image
  • FIG. 4 provides an illustrative implementation of a transformed image in vision space
  • FIG. 5 provides an illustrative implementation of a penumbra around a fixation volume
  • FIG. 6 provides an illustrative implementation of an image stretched in the X direction
  • FIG. 7 provides an illustrative implementation of a gray scale image with a small maximum radius for disorder
  • FIG. 8 provides an illustrative implementation of a gray scale image with a large maximum radius for disorder
  • FIG. 9 provides an illustrative implementation of an area of low disorder
  • FIG. 10 provides an illustrative implementation of a gray scale image with an area of high disorder
  • FIG. 11 provides an illustrative implementation of random disorder pattern
  • FIG. 12 provides an illustrative implementation of swim disorder pattern
  • FIG. 13 provides an illustrative implementation of a blur disorder pattern
  • FIG. 14 provides an illustrative implementation of lines of occlusion and occlusion distance
  • FIG. 15 provides an illustrative implementation of an image stretched in the Y direction
  • FIG. 16 provides an illustrative implementation of an image rotated around the fixation point
  • FIG. 17 provides an illustrative implementation of a final enhanced image.
  • Vision space may seek to mimic perceptual structure or the structure of the actual phenomenon of vision. Vision space may acknowledge the individual nature and specialization of peripheral vision and central vision. Vision space may also recognize that brain functions 'create' important facets of visual perception by making 'relativistic judgments' between the two.
  • One aspect of novelty of this approach may be the realization that these new cues rely on as yet unappreciated monocular cues and the composition of monocular projection. These monocular cues may distinguish this method of achieving proximity judgment or saliency of the image from other techniques that rely on stereo cues deriving from effects based on binocular disparity.
  • these techniques may impart important orientation cues that may be utilized to factor the observer into the depiction of reality presented in the 2D media.
  • the inclusion of this range of visual cues can be used to improve proximity judgments and to increase the immersiveness of all forms of representational media.
  • Image enhancement may be achieved by selecting a fixation point and disordering the image, with the disordering operation centered around the fixation point.
  • The original image may be stretched vertically and/or horizontally, rotated, blurred, and modified in a variety of other ways.
  • The modified image may replicate the flow of visual information from the eye, including both peripheral and central vision, and may also mimic the final presentation of vision created from information received by the eye.
  • Vision space can be used in computer generated (CG) media, virtual reality (VR), and similar forms of representational media.
  • information representing central vision may be isolated and processed separately from information representing peripheral vision. Once the image processing is performed, the information representing central and peripheral vision may be combined. When the two sets of information are combined, corresponding information from the disordered set of information forming a representation of peripheral vision may be removed, modulated, or juxtaposed. This process ensures that unwanted artifacts may be removed from the media when the final combined representation is created or that related elements of the image may be co-presented.
  • FIG. 1 provides an illustrative implementation of a picture to be enhanced.
  • the image of a butterfly and other surrounding items may be a graphic design depicted in traditional picture space.
  • the image may provide limited relative spatial information and saliency based on visual cues. For example, the chair appears to be in front of the cabinet and the wall because the chair obscures part of these objects. Similarly, visual cues may indicate that the cereal box is in front of the wall.
  • the butterfly may have very few visual cues contained in the representation. It might appear that the butterfly is in front of the wall, but it is possible that the butterfly is a drawing on one of the tiles in the wall. From the image it is difficult to be able to tell with certainty if the butterfly is in front of the wall and if so, by how much.
  • the butterfly could be above or below the table and it could be in front of or behind the cabinet. In some situations, picture space may not provide sufficient spatial information or correctly segment information to facilitate these judgments.
  • FIG. 2 provides an illustrative implementation of a fixation point and fixation volume of a picture.
  • a fixation point 1 and fixation volume 2 may be selected.
  • a fixation point 1 may be manually selected by an observer, automatically selected utilizing a perceptive user interface (PUI) (e.g. an eye tracking device), or selected utilizing any other suitable method.
  • the butterfly may be selected as the object of fixation and a fixation point 1 may be defined on the butterfly.
  • a region including one or more points, areas, or a volume surrounding the butterfly may be selected as the fixation volume 2.
  • the fixation volume 2 may include all objects that are to be represented in central vision.
  • fixation volume 2 may be identical with the fixation point 1.
  • the fixation volume 2 may be coincident with the fixation point, a two dimensional (2D) area around the fixation point 1, or a three dimensional (3D) volume of space surrounding the fixation point 1. This process may help to segment an object from the space in which it sits.
  • FIG. 3 provides an illustrative implementation of a radial depth map of a picture as a gray scale image.
  • a normal depth map may provide depth data from the camera plane (front) to the furthest object in the scene (back).
  • a radial depth map may recalculate the depth data to propagate out from a designated location within the image.
  • the degree of disorder applied to the image may be increased as radial distance from the fixation point increases.
  • a radial depth map of an image may be utilized to determine distance from a designated location within the image. Darker areas 3 in the gray scale image may be further away from the designated location (e.g. a fixation point) and may represent areas where a higher level of disorder should be applied.
  • Lighter areas 4 in the image may be closer to the fixation point and represent areas where a lower level of disorder should be applied. This variation in disorder may simulate the disorder that naturally occurs in human vision at distances away from a fixation point. Additionally, there may be other factors that affect the fall-off in the radial disorder field in addition to distance from the fixation point. These factors may include a variable self-similar fractal pattern utilized to incrementally disorder the image. The degree to which the variable fall-off is deployed may also depend on the distance of an individual from the presentation screen, the size of the presentation screen, and the angle of the camera shot used in the representation. It may be noted that the radial disorder discussed herein is distinguished from the decrease in sharpness outside the depth of field in an image or film, which is a purely optical effect resulting from the fixed focus of a camera lens.
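  • For illustration only, the sketch below shows one way a conventional camera-plane depth map might be recalculated into a radial depth map measured from the fixation point, assuming a simple pinhole camera model. The focal length, principal point, and maximum-disorder radius are illustrative assumptions; lighter values indicate proximity to the fixation point and darker values indicate regions where more disorder would be applied.

```python
# Sketch: radial depth map (distance from the fixation point in 3D) derived
# from a camera-plane depth map, under an assumed pinhole camera model.
import numpy as np

def radial_depth_map(depth, fixation_xy, focal_length_px, max_radius):
    """Return a gray scale map in [0, 1]: 1 (light) at the fixation point,
    falling to 0 (dark) at max_radius, so darker = more disorder."""
    h, w = depth.shape
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0

    ys, xs = np.mgrid[0:h, 0:w]
    # Back-project every pixel to camera-space coordinates.
    X = (xs - cx) * depth / focal_length_px
    Y = (ys - cy) * depth / focal_length_px
    Z = depth

    fx, fy = fixation_xy
    fz = depth[int(round(fy)), int(round(fx))]
    FX = (fx - cx) * fz / focal_length_px
    FY = (fy - cy) * fz / focal_length_px

    radial = np.sqrt((X - FX) ** 2 + (Y - FY) ** 2 + (Z - fz) ** 2)

    # Light near the fixation point, black at (and beyond) the maximum-disorder radius,
    # which can be made small (FIG. 7) or large (FIG. 8) to tune the effect.
    return 1.0 - np.clip(radial / max_radius, 0.0, 1.0)
```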
  • FIG. 4 provides an illustrative implementation of a transformed image in vision space.
  • The vision space image illustrates the butterfly and surrounding objects after certain transformations discussed herein were performed. Notice that the observer may now be capable of recognizing that the butterfly is in front of the wall and in front of the cabinet as well. Further, the butterfly can now be identified as being above the table.
  • This additional spatial and orientation information about the scene may allow the eye and brain to make a new range of spatial judgments and may create a more accurate perception of the butterfly and its surroundings.
  • these spatial judgments may merely be guesses made from secondary information such as occlusion cues, direction of travel, cast shadows and other such cues.
  • FIG. 5 provides an illustrative implementation of a penumbra around a fixation volume. This visual phenomenon may be simulated by providing a penumbra 5 as a transition volume around the fixation volume. The same procedure, but in inverse, can be engineered in the peripheral data-set for the remaining area outside of the fixation volume by providing a two way merge between the data-sets.
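  • As an illustrative sketch only, a penumbra can be approximated by feathering the fixation-volume mask and blending the central and peripheral data sets across the resulting band, giving the two-way merge described above. The mask construction and penumbra width are assumptions, not features of the disclosure.

```python
# Sketch: feathered (penumbra) merge between the central and peripheral data sets.
import numpy as np
from scipy.ndimage import gaussian_filter

def merge_with_penumbra(central, peripheral, fixation_mask, penumbra_sigma=15.0):
    """central, peripheral -- (H, W, 3) images processed separately
    fixation_mask        -- (H, W) binary mask, 1 inside the fixation volume
    penumbra_sigma       -- width (pixels) of the soft transition band"""
    # Soften the hard mask so the two data sets merge both ways across the band.
    soft = np.clip(gaussian_filter(fixation_mask.astype(float), penumbra_sigma), 0.0, 1.0)
    soft = soft[..., None]
    return soft * central + (1.0 - soft) * peripheral
```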
  • FIG. 6 provides an illustrative implementation of an image stretched in the X direction.
  • The image may be stretched 6 in the X direction outside the portion of the image outlined by the fixation volume. This may be done to further distinguish and segment the region of central vision, which is unstretched 7, from the peripheral region or the region outside of central vision.
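  • The sketch below illustrates one possible implementation of the X-direction stretch: the image is resampled about the fixation point's horizontal position and the unstretched central region is composited back using the fixation-volume mask. The stretch factor and mask are illustrative assumptions.

```python
# Sketch: stretch the peripheral region in X about the fixation point,
# keeping the central (fixation) region unstretched.
import numpy as np
from scipy.ndimage import map_coordinates

def stretch_x_outside_fixation(image, fixation_mask, fixation_x, stretch=1.15):
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)

    # Each output pixel samples a source column pulled toward the fixation column,
    # so the rendered periphery appears stretched away from the fixation point.
    src_x = fixation_x + (xs - fixation_x) / stretch

    stretched = np.stack(
        [map_coordinates(image[..., c], [ys, src_x], order=1, mode='nearest')
         for c in range(image.shape[2])], axis=-1)

    # Keep central vision unstretched by compositing the original back in.
    m = fixation_mask.astype(float)[..., None]
    return m * image + (1.0 - m) * stretched
```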
  • FIG. 7 provides an illustrative implementation of a gray scale image.
  • the fixation point may be white and the objects may become darker as the distance of objects from the fixation point increases. Further, the degree of disorder incorporated in the final vision space image increases as the gray scale image becomes darker. At the distance where the gray scale image becomes black, maximum disorder may occur and objects beyond this distance may be shown with maximum disorder. The distance of maximum disorder may be adjusted to achieve the desired effect in the final vision space image.
  • FIG. 7 shows a relatively small radius defining the distance of maximum disorder 8.
  • FIG. 8 provides an illustrative implementation of a gray scale image with a large maximum radius for disorder 9. Notice that the cabinet and the bottle, which were black in FIG. 7, may be significantly lighter in the gray scale image when the radius defining the distance of maximum disorder increases.
  • the degree of maximum disorder within any region in peripheral space can be adjusted to achieve the desired effect in the final vision space image.
  • Disorder may be created by perturbing an image or distorting spatial information.
  • Disorder may be created by rearranging pixels in a specific manner within certain constraints, such as by moving pixels around utilizing random Gaussian fields, to form a swim disorder pattern, to form a blur disorder pattern, or the like.
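  • The following sketch illustrates one plausible reading of displacement-based disorder: smooth random (Gaussian-filtered) vector fields move pixels by an amount scaled by the gray scale disorder map, so disorder grows with distance from the fixation point. The maximum shift, field smoothness, and seed are illustrative assumptions; unsmoothed noise gives a more random pattern, while a smoother field gives a more coherent, swim-like pattern.

```python
# Sketch: displacement-based disorder driven by random Gaussian fields,
# modulated by a gray scale disorder map (1 = maximum disorder).
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_field_disorder(image, disorder_map, max_shift=6.0, smoothness=4.0, seed=0):
    h, w = image.shape[:2]
    rng = np.random.default_rng(seed)

    # Smooth random vector fields; smoothing controls how coherent the pattern is.
    dx = gaussian_filter(rng.standard_normal((h, w)), smoothness)
    dy = gaussian_filter(rng.standard_normal((h, w)), smoothness)
    dx *= max_shift / (np.abs(dx).max() + 1e-8)
    dy *= max_shift / (np.abs(dy).max() + 1e-8)

    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    src_y = ys + dy * disorder_map
    src_x = xs + dx * disorder_map

    return np.stack(
        [map_coordinates(image[..., c], [src_y, src_x], order=1, mode='reflect')
         for c in range(image.shape[2])], axis=-1)
```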
  • FIG. 9 provides an illustrative implementation of an area of low disorder.
  • FIG. 10 provides an illustrative implementation of a gray scale image with an area of high disorder. Different types of disorder may also be employed to achieve the desired effect in the final vision space image.
  • FIG. 11 provides an illustrative implementation of random disorder pattern.
  • FIG. 12 provides an illustrative implementation of swim disorder pattern.
  • FIG. 13 provides an illustrative implementation of a blur disorder pattern. Any disorder/noise or stylized texture pattern may be selected to achieve the desired or preferred visual effect. Different individuals may prefer or respond differently to different disorder patterns. It is possible to apply the disorder through a vector field that is larger than or the same size as the representation and dependent on the degree of camera movement, through a 3D or environmental vector field, or, if using a form of random noise or textures, directly (i.e. there may be no need for the fall-off vector fields).
  • The disorder can be organized to modulate between frames, such as a frame by frame reset of the disorder pattern irrespective of movement of the camera or the object held in fixation, or to be static between frames, such that changes appear in the disorder only if either the camera or the object of fixation moves. There may also be a mixture of the two functions to provide a rendering of the disorder pattern over time that is sympathetic/pleasing/unobtrusive to the viewer. While several potential methods of creating disorder are discussed herein, the scope of the claims is in no way limited to the specific methods discussed. Any suitable method of creating disorder known to one of ordinary skill in the art may be utilized unless the claims specifically limit the disorder methods.
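  • As a sketch of the temporal behaviour only, the disorder field for each frame could be formed as a blend between a static field (unchanged between frames) and a field that is reset every frame; the blend factor and seeds below are illustrative assumptions, not features of the disclosure.

```python
# Sketch: blend a static disorder field with a per-frame field to modulate
# the disorder pattern over time.
import numpy as np
from scipy.ndimage import gaussian_filter

def temporal_disorder_field(shape, frame_index, modulation=0.3, smoothness=4.0):
    """Return (dy, dx) displacement fields for one frame.
    modulation = 0 -> static between frames; 1 -> fully reset each frame."""
    def field(seed):
        rng = np.random.default_rng(seed)
        return (gaussian_filter(rng.standard_normal(shape), smoothness),
                gaussian_filter(rng.standard_normal(shape), smoothness))

    static_dy, static_dx = field(seed=0)                 # fixed seed: static pattern
    frame_dy, frame_dx = field(seed=1000 + frame_index)  # new pattern every frame

    dy = (1.0 - modulation) * static_dy + modulation * frame_dy
    dx = (1.0 - modulation) * static_dx + modulation * frame_dx
    return dy, dx
```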
  • a further application of the techniques may be formulated for use with realtime applications.
  • the principles of the invention can be organized in such a way that output from the real-time engine may be directly formatted as enhanced media in vision space.
  • Real-time engines employing the techniques of the invention may include applications in virtual reality (VR), simulators, video games, and the like and the techniques are not limited to media that may be subjected to post-production activities (e.g. film, animation, video programming, etc.).
  • a selected area for disorder may include the edge of one or more objects located separately in space or between an object and background surface. If this area is disordered without giving consideration to the spatial location of all objects, the result may lead to misleading disorder levels at these edges with respect to background surfaces.
  • an observer may be capable of perceiving well defined spatially adjusted disorder levels for object boundaries and edges. To make this compensation in the final vision space image such that the relative sharpness of the edges may be visible on objects, lines of occlusion may be defined.
  • FIG. 14 provides an illustrative implementation of lines of occlusion and occlusion distance.
  • the area between the lines may indicate the extent of space within the depiction where disorder at occluded edges of objects within the area may be treated as a group and may affect one another. If an object appears outside this demarcated area it may be subjected to occlusion control where its spatial proximity becomes relevant and influences the degree of disorder appearing at perimeter boundaries. The level of disorder associated with a further object may be prevented from influencing the degree of disorder apparent on the edge of the closer object.
  • the distance 11 between the lines of occlusion 10 can be extended or reduced to control the sensitivity of the occlusion control. Alternative methods for controlling this facet of the invention in 2D images could be developed.
  • The occlusion adjustment may be made one-way, where disorder is applied to edges dependent on the distance from the fixation point, or two-way, where all edges remain sharp within the demarcated area.
  • FIG. 15 provides an illustrative implementation of an image stretched in the Y direction.
  • the peripheral space of the image may be stretched 12 in the Y direction. This stretching 12 may help to additionally distinguish objects in the volume of central vision included in the fixation volume from the objects outside the fixation volume in peripheral space.
  • a region including the fixation volume remains unstretched 13 in FIG. 15.
  • FIG. 16 provides an illustrative implementation of an image rotated around the fixation point.
  • Objects outside of the fixation volume may be rotated 14.
  • objects may be rotated clockwise as in the implementation shown.
  • This rotation 14 may be a further example of the creative segmentation of central vision and peripheral vision and can be applied to representational media.
  • the rotation 14 could simulate the visual effect that results from the dominance of the right eye or left eye in humans. Since the majority of people are right eye dominant, most people may prefer a clockwise rotation of the peripheral space in the final vision space image. However, left eye dominant people may prefer a counterclockwise rotation. In human vision, the rotation may change as an observer blinks. This modulation of the rotation can be replicated in moving image media.
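  • The sketch below illustrates one way the peripheral rotation might be applied: the image is resampled through a small rotation about the fixation point and the unrotated central region is composited back over it. The angle and mask are illustrative assumptions, and the sign convention determines whether the result appears clockwise or counterclockwise on screen.

```python
# Sketch: rotate the peripheral space about the fixation point while leaving
# the central (fixation) region unrotated.
import numpy as np
from scipy.ndimage import map_coordinates

def rotate_periphery(image, fixation_mask, fixation_xy, angle_deg=2.0):
    h, w = image.shape[:2]
    fx, fy = fixation_xy
    theta = np.deg2rad(angle_deg)
    cos_t, sin_t = np.cos(theta), np.sin(theta)

    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Inverse mapping: sample the source at coordinates rotated about the fixation point.
    src_x = fx + cos_t * (xs - fx) - sin_t * (ys - fy)
    src_y = fy + sin_t * (xs - fx) + cos_t * (ys - fy)

    rotated = np.stack(
        [map_coordinates(image[..., c], [src_y, src_x], order=1, mode='nearest')
         for c in range(image.shape[2])], axis=-1)

    m = fixation_mask.astype(float)[..., None]
    return m * image + (1.0 - m) * rotated
```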
  • the viewing area of the image may be delineated by a frame or edge.
  • the presence of this frame may negatively affect the vision space effect by disrupting the increasing disorder pattern as we move further away from the fixation point.
  • an area of disorder may be provided around the border of the image which transitions from the solid color of the frame to the disordered area in the peripheral space.
  • the color of the frame can be changed to reflect the overall color scheme of the image and create a smoother transition.
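  • For illustration, the border transition could be approximated by blending the image toward a solid frame color near the edges of the viewing area. In the sketch below the image's mean color stands in for a frame color chosen to reflect the overall color scheme, and the border width is an illustrative assumption.

```python
# Sketch: soften the frame by blending toward a solid frame color near the border.
import numpy as np

def blend_border_to_frame(image, border_width=40):
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Distance (in pixels) from the nearest image edge.
    edge_dist = np.minimum(np.minimum(xs, w - 1 - xs), np.minimum(ys, h - 1 - ys))
    t = np.clip(edge_dist / float(border_width), 0.0, 1.0)[..., None]

    # Mean image color stands in for a frame color reflecting the color scheme.
    frame_color = image.reshape(-1, image.shape[2]).mean(axis=0)
    return t * image + (1.0 - t) * frame_color
```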
  • FIG. 17 provides an illustrative implementation of a final enhanced image in vision space.
  • the enhanced image produced may further increase the saliency and perceived reality of the original image.
  • the disorder apparent at the border 15 of the image may be detected and may dictate the transition to solid color, which would further obviate the influence of a frame in representational media.
  • a single data set separated into central and peripheral regions may be utilized.
  • specialized configurations using two or more data sets at any one time across the media can be implemented.
  • Each region may be transformed to simulate the differentiation between human central and peripheral vision.
  • the two data sets may be merged/combined in various compositions.
  • the technique of using two data sets may be convenient for use in computer programs because the two data sets can be independently streamed and transformed. However, when transformations are made using two data sets that are later combined, artifacts can be present at the point where the two data sets overlap.
  • a further enhancement of the two data set technique can be achieved by cutting out an area outside the fixation volume in data set 1 (i.e. central vision) or cutting out an area in the fixation volume in data set 2 (i.e. peripheral vision) and then merging the two data sets.
  • When the two images are combined, there may be a limited useful interface and hence less adverse double referencing.
  • A disorder field may also be applied from the individual observer or camera position instead of from a fixated object appearing in the field of view.
  • the 3D disorder field can be warped or centered on any point on or outside the field of view to achieve variable spatial effects.
  • Video can be transformed into vision space by transforming the sequence of individual images that make up the moving picture.
  • The fixation point may be moved by interpolating between the fixation point in an earlier image in the sequence and the fixation point in a later image in the sequence.
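  • A minimal sketch of fixation-point interpolation for video is shown below: fixation points defined at keyframes (manually, or via eye tracking as discussed below) are linearly interpolated to every frame in the sequence. The keyframe values and frame counts are illustrative.

```python
# Sketch: linear interpolation of the fixation point across a video sequence.
import numpy as np

def interpolate_fixation(keyframes, num_frames):
    """keyframes -- dict mapping frame index -> (x, y) fixation point
    Returns an (num_frames, 2) array of per-frame fixation points."""
    frames = np.array(sorted(keyframes))
    points = np.array([keyframes[f] for f in frames], dtype=float)

    all_frames = np.arange(num_frames)
    x = np.interp(all_frames, frames, points[:, 0])
    y = np.interp(all_frames, frames, points[:, 1])
    return np.stack([x, y], axis=-1)

# Example: fixation defined on frames 0 and 48, interpolated in between.
fixations = interpolate_fixation({0: (320, 240), 48: (400, 210)}, num_frames=49)
```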
  • Film editors may be given the flexibility to define the fixation point manually by using a touch screen, mouse or similar positioning device to track the fixation point while the video is played back.
  • Another technique involves the use of eye tracing/tracking techniques to determine where a viewer's vision is fixating while viewing the original video.
  • the eye tracking technique may be used to define the fixation point for every frame in the video. While this could have applications in post-production media, the application of eye-tracking technology may be best suited to real-time media. Other data/information pertaining to camera position (and camera movement) could be useful in managing/controlling the application of the disorder field.

Landscapes

  • Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
PCT/US2008/072041 2007-08-02 2008-08-01 Method and software for transforming images WO2009018557A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2008283765A AU2008283765A1 (en) 2007-08-02 2008-08-01 Method and software for transforming images
DE112008002083T DE112008002083T5 (de) 2007-08-02 2008-08-01 Verfahren und Software für Bildtransformation

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US96305207P 2007-08-02 2007-08-02
US60/963,052 2007-08-02
US55129007A 2007-09-29 2007-09-29
US10/551,290 2007-09-29

Publications (1)

Publication Number Publication Date
WO2009018557A1 (en) 2009-02-05

Family

ID=40304929

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/072041 WO2009018557A1 (en) 2007-08-02 2008-08-01 Method and software for transforming images

Country Status (3)

Country Link
AU (1) AU2008283765A1 (de)
DE (1) DE112008002083T5 (de)
WO (1) WO2009018557A1 (de)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5973700A (en) * 1992-09-16 1999-10-26 Eastman Kodak Company Method and apparatus for optimizing the resolution of images which have an apparent depth
US20030076413A1 (en) * 2001-10-23 2003-04-24 Takeo Kanade System and method for obtaining video of multiple moving fixation points within a dynamic scene

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019048819A1 (en) * 2017-09-06 2019-03-14 Fovo Technology Limited METHOD FOR MODIFYING AN IMAGE ON A COMPUTER DEVICE
CN111164542A (zh) * 2017-09-06 2020-05-15 Method of modifying an image on a computing device
US11212502B2 (en) 2017-09-06 2021-12-28 Fovo Technology Limited Method of modifying an image on a computational device
US11353953B2 (en) 2017-09-06 2022-06-07 Fovo Technology Limted Method of modifying an image on a computational device

Also Published As

Publication number Publication date
AU2008283765A1 (en) 2009-02-05
DE112008002083T5 (de) 2010-10-28

Similar Documents

Publication Publication Date Title
JP6873096B2 (ja) Improvements in and relating to image formation
US8711204B2 (en) Stereoscopic editing for video production, post-production and display adaptation
EP3035681B1 (de) Image processing method and device
US9445072B2 (en) Synthesizing views based on image domain warping
JP4766877B2 (ja) Method for generating an image using a computer, computer-readable memory, and image generation system
DE202017105894U1 (de) Headset removal in virtual, augmented and mixed reality using a gaze database
CN105894567B (zh) Scaling pixel depth values of a user-controlled virtual object in a three-dimensional scene
Blum et al. The effect of out-of-focus blur on visual discomfort when using stereo displays
US11659158B1 (en) Frustum change in projection stereo rendering
US9681122B2 (en) Modifying displayed images in the coupled zone of a stereoscopic display based on user comfort
Berning et al. A study of depth perception in hand-held augmented reality using autostereoscopic displays
WO2014121108A1 (en) Methods for converting two-dimensional images into three-dimensional images
Zhong et al. Reproducing reality with a high-dynamic-range multi-focal stereo display
US10210654B2 (en) Stereo 3D navigation apparatus and saliency-guided camera parameter control method thereof
EA013779B1 (ru) Method for enhancing visual perception, and system therefor
JP2011529285A (ja) Composition structure, mechanism and process for the inclusion of binocular stereo information into representational media
WO2009018557A1 (en) Method and software for transforming images
JP2017163373A (ja) Apparatus, projection apparatus, display apparatus, image generation apparatus, methods therefor, program, and data structure
Ardouin et al. Design and evaluation of methods to prevent frame cancellation in real-time stereoscopic rendering
JP2003521857A (ja) Software out-of-focus 3D method, and system and apparatus therefor
GB2548080A (en) A method for image transformation
AU2004226624B2 (en) Image processing
Berning et al. Improving Depth Perception for Hand-held Augmented Reality using Autostereoscopic Displays

Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 08797078; Country of ref document: EP; Kind code of ref document: A1)

WWE Wipo information: entry into national phase (Ref document number: 2008283765; Country of ref document: AU)

WWE Wipo information: entry into national phase (Ref document number: 470/KOLNP/2010; Country of ref document: IN)

ENP Entry into the national phase (Ref document number: 2008283765; Country of ref document: AU; Date of ref document: 20080801; Kind code of ref document: A)

122 Ep: pct application non-entry in european phase (Ref document number: 08797078; Country of ref document: EP; Kind code of ref document: A1)

RET De translation (de og part 6b) (Ref document number: 112008002083; Country of ref document: DE; Date of ref document: 20101028; Kind code of ref document: P)

REG Reference to national code (Ref country code: DE; Ref legal event code: 8607)