WO2014182555A1 - A log-space linear time algorithm to compute a 3d hologram from successive data frames - Google Patents


Info

Publication number
WO2014182555A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
pixels
operators
computer implemented
data
Prior art date
Application number
PCT/US2014/036521
Other languages
French (fr)
Inventor
Michael Manthey
Original Assignee
Michael Manthey
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Michael Manthey filed Critical Michael Manthey
Publication of WO2014182555A1 publication Critical patent/WO2014182555A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/06 Recognition of objects for industrial automation

Definitions

  • the method further includes mapping the "tauquernion operators" into a "super-spinor" (4D-vector spinor) by combining (for example, multiplying) various tauquernions I, J, K (formed based on various pairs of pixel readings).
  • the orientation of a 4D-vector spinor (just like the orientation of a 2D-vector spinor) yields one bit of information about the constituents of the tauquernion components of the 4D-vector spinor.
  • the so-formed 4D-vector spinors (super-spinors) are further "extended" as if they were 1D-vectors (by analogy with the formation of the i, j, k quaternions).
  • a first polynomial is formed that represents the sum of all these spinors and is considered to be the hierarchy at this point. The above steps are repeated until no further improvement in a given image frame can result.
  • this process is repeated for the second frame, third frame, and so on to obtain the respective second, third, etc. polynomials representing the views of the imaged scene at different angles.
  • the summation of such polynomials represents a superposition of views of the scene at various angles. Elements of the polynomials that are cancelled as a result of summation are inconsistent with one another.
  • the summation result serves the user as a descriptor containing information from which the user can extract an image of the scene, by determining a projection of such description onto a chosen coordinate system, resulting in a transverse distribution of irradiance across the field of view (for example, the distribution of irradiance across the imaging plane of the detector or retina), and phase information representative of the depth-of-field (for example, along the z-axis that is substantially coincident with an optical axis of the employed imaging system).
  • the "saccadic" movement of an eye or a tremor / vibration of the optical detector / retina(s) is substituted with data processing of such multiple image frames (or sequential image snap-shots), the results from which are compared to extract the depth component of the image.
  • a goal of visual recognition is extraction of data representing three-dimensional (3D) information about the object, which includes the "depth of field" associated with the object.
  • the extraction of such data becomes particularly complicated when no a priori knowledge exists about the background.
  • any type of information - not only visual information - can be processed according to embodiments of the invention.
  • the present invention provides a solution to a problem of visual recognition of an object on a background that obscures or conceals the object (such as an object that is camouflaged by the background), with an object of optical imaging being the scene that contains both the background and the object, by (1) creating a logarithmically compacted (hierarchical or generational, as discussed below) version of the visual input in the form of a software-level hologram of such input; and (2) comparing successive hierarchical versions, corresponding to successive data frames, to identify coherent, highly correlated objects in the successive scenes represented by such data frames.
  • the pixels of a 2D retina provide outputs (pixel or sensor values) forming an image frame (an image) when light from the scene being imaged is acquired by the retina.
  • Pixels a, b, c, d, ... are, therefore, 1-bit sensors, expressed mathematically as unit vectors.
  • for an RGB or color scene one should use three such bits, each with its own intensity-multiplier; for grey levels, one such bit.
  • These vectors are considered to be the generators of a geometric (Clifford) algebra.
  • a vision algorithm must extract 3D information indirectly via multiple (simultaneous or sequential) retinal images to establish the relative positions of, for example, Object1 relative to Object2 in 3D (three dimensions).
  • each frame is encoded as a polynomial expressing various wave dimensionalities, intensities, and phases.
  • the tauquernion encoding of retinal information establishes these relative positions by summing the hierarchical (polynomial) encodings of multiple frames, which embodies the physical fact that all of the various frames are simultaneously valid from a universal point of view.
  • the algebra's logical inner consistency ensures that this summing will accurately reflect the combined and compressed visual scene represented by the polynomials.
  • Embodiments of the present invention provide a method to process data using a "log-time computer vision algorithm".
  • the algorithm combines multiple iterations (generations) of operators that are defined based on readings from pixels of the optical detector of the system.
  • the more levels or generations in a hierarchy, the better the resolution of the identified object and/or the overall scene being imaged.
  • the number of generations is limited only by the number of pixels in an input image frame, specifically, by the number of possible permutations of pixels (the readings from which are organized in pairs, according to an embodiment of the algorithm of the invention), and represents a logarithmic reduction in duration of computational iterations in comparison with algorithms of related art.
  • the actual number of generations in an algorithm, when used, is defined by the required precision of the outcome and may be smaller than the theoretical number of generations defined by the pixels of the image frame.
  • the proposed algorithm creates, in a computer memory, a hierarchical structure that corresponds to a holographic representation of the set of data frames provided as input.
  • a "frame” is a set of data, all elements of which are, conceptually, simultaneously valid.
  • the input to the algorithm is provided, for example, by 2D image frames to be converted, as a result of the process represented by the algorithm, into a 3D visual hologram representing the combined points-of-view of the input image frames.
  • the algorithm, therefore, is described in reference to a retina or artificial retina (be it an optical detector or an eye of a human).
  • the proposed algorithm is structured to process, generally, data of any kind and dimensionality that is presented to it as a succession of frames.
  • a contemporary implementation is assumed, with inbuilt parallel processing of pixels, so what happens to one pixel happens simultaneously to all pixels, resulting in the "linear time" processing.
  • the presented linear-time data-processing is built on a mathematical representation that provides a 3D mathematical coordinate space that simultaneously possesses/expresses the wave-like nature of a hologram.
  • the coordinate space is defined as "extended quaternion" space (a 3D tauquernion space built from triplets as discussed above) that is novel and superior to the conventional quaternion space.
  • tauquernions are superior to quaternions because (1) their encoding of a scene better captures the actual correlations between pixels than a quaternion encoding, and (2) their mathematical structure yields an efficient method for hierarchically condensing and processing the visual information in a scene.
  • the algorithm first compares the new input frame to the preceding frame, marking all pixels that have changed (topologically discarding all pixels that didn't change). Using this set of changed pixels, the algorithm (a) pairs pixels (in one specific case, neighboring pixels), then (b) combines these pairs into overlapping tauquernions (thus encoding the spatial dimensionality and relationship of the pixels paired at the previous step), then (c) creates a new, higher-level picture element from each of the tauquernions formed in step (b). The algorithm then (d) compares the "pixel" in this new higher-level frame with the corresponding level/pixels of the preceding frame, and repeats steps (a), (b), (c) until the input frame is completely processed. Each such iteration increases the information in the hologram that was created by the previous iteration.
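The per-frame loop of steps (a)-(d) can be sketched in miniature as follows. This is an illustrative Python sketch, not the disclosure's own code: the fixed left-to-right pairing strategy and the function names are assumptions, and the tauquernion combination is reduced to its orientation (sign) arithmetic only.

```python
# Pixels are +1/-1; a "spinor" for a pair (x, y) is +1 if they differ, -1 if equal.

def changed_pixels(frame, prev):
    # (pre-step) keep only the indices of pixels whose state changed between frames
    return [i for i, (p, q) in enumerate(zip(frame, prev)) if p != q]

def spinor(x, y):
    # orientation of the pair: +1 when the two pixels differ, -1 when they agree
    return 1 if x != y else -1

def condense(frame):
    """One hierarchy level: pair pixels -> 2D-vector spinors -> combine disjoint
    spinor pairs (tauquernion-style) -> a new, 4x smaller 'super-pixel' frame."""
    spinors = [spinor(frame[i], frame[i + 1]) for i in range(0, len(frame) - 1, 2)]
    # combine disjoint spinor pairs; the product sign becomes the super-pixel value
    return [spinors[j] * spinors[j + 1] for j in range(0, len(spinors) - 1, 2)]

frame = [1, -1, -1, 1, 1, 1, -1, -1]   # 8 one-bit pixels
level1 = condense(frame)               # 2 super-pixels
```

Each call to `condense` shrinks the frame by a factor of four (two pixels per spinor, two disjoint spinors per super-pixel), which is what produces the logarithmic hierarchy depth.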
  • Fig. 1 presents a flow-chart of an embodiment of the algorithm of processing frame data according to the present invention.
  • declarations characteristic of the imaging plane (the artificial retina) are chosen.
  • the pixels (expressed as 1D vectors) of the imaging plane are initialized according to a default value assignment strategy (in one embodiment, as discussed in the Example below), forming an initial image frame.
  • the pixels of the imaging plane (artificial retina) are denoted as "a, b, c, d, e, ...", and the notations "w, x, y, z" are used as place-holders for these pixels, with unspecified signs.
  • the construction of a given hierarchical level of representation of the scene observed by the artificial retina includes steps F14, F18, F22, F26, and F30.
  • a frame of the input is formed by associating with or assigning to each of the 1D vectors of the imaging plane new values that correspond to a view of the scene being imaged onto the imaging plane, thereby forming an updated image frame.
  • a set of data is created indicating, for each pixel, whether the status of such pixel changed between the updated frame and the frame that preceded it.
  • two individual pixels of the image plane are associated in pairs to produce a 2D-vector spinor (or quaternion).
  • Each 2D-vector spinor or quaternion combines the two single-bit data outputs from a given disjoint pair of pixels x and y into a single one-bit output datum representing the local spatial orientation of these pixels in the scene being imaged.
  • mapping of individual pixels to a 2-vector spinor is accomplished according to the following prescription: for x-y or -x+y (i.e., when one pixel is black AND one pixel is white), xy is assigned +1; for x+y or -x-y (i.e., when both pixels are black XOR both pixels are white), xy is assigned -1.
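For pixels encoded as +1/-1, the four cases of this prescription collapse to a single product rule, xy = -x*y. The following one-function check is illustrative only (the function name is an assumption):

```python
def spinor_orientation(x, y):
    """Map a pair of 1-bit pixels (each +1 or -1) to the 2D-vector spinor's
    orientation: +1 when the pixels differ (one black AND one white),
    -1 when they agree (both black XOR both white)."""
    assert x in (+1, -1) and y in (+1, -1)
    return -x * y

# the four cases of the prescription
print(spinor_orientation(+1, -1))  # x - y   -> 1
print(spinor_orientation(-1, +1))  # -x + y  -> 1
print(spinor_orientation(+1, +1))  # x + y   -> -1
print(spinor_orientation(-1, -1))  # -x - y  -> -1
```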
  • When operating in an RGB color space, each pixel includes three sub-pixels R, G, and B, which represent (respectively) the red, green, and blue spectral elements.
  • R, G, B sub-pixels may be accompanied by a multiplier indicating the irradiance associated with the corresponding element, and this multiplier is drawn from {-1, 0, +1} in the present explanation.
  • the algebra of the I, J, K tauquernions works well for any number system. For example, one could write 3I+4J+2K and the algebra works just as well, as does the processing described here.
  • the state of a given 2D-vector spinor indicates whether the two inputs forming such spinor (i.e., in this example, x and y) are the same or different.
  • a 2D-vector spinor formed on the basis of outputs from two pixels of the detector indicates whether optical inputs at these two pixels are the same.
  • quaternions xy, yz, zx are examples of such oriented planes and are commonly used to express symmetries of 3D-space that are not visible from the point of view of the pixels x, y, z.
  • a corresponding 2D-vector spinor or quaternion is formed for each pair of imaging plane pixels.
  • the representation of the scene by 2D-vector spinors does not yet provide sufficient spatial resolution because the quaternions are closely correlated or coupled with one another, which causes blurring.
  • any two pixels x and y from the set of pixels in the imaging plane can be associated.
  • x and y are chosen to be spatially adjacent, neighboring pixels.
  • disjoint 2D-vector spinors are paired according to at least one of various strategies, depending on the characteristics of the input; in principle they can be paired randomly.
  • the fundamental criterion is that the paired spinors be disjoint, i.e. ab+cd is okay (i.e. the pairs form a tauquernion) whereas ab+ac is not.
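The disjointness criterion is simply that the two pixel pairs share no pixel. An illustrative helper (the function name is an assumption) makes the ab+cd versus ab+ac distinction concrete:

```python
def is_disjoint(pair1, pair2):
    """True when two pixel pairs share no pixel, i.e. the corresponding
    2D-vector spinors may be combined into a tauquernion
    (ab+cd: allowed; ab+ac: not allowed)."""
    return not (set(pair1) & set(pair2))

print(is_disjoint(('a', 'b'), ('c', 'd')))  # True: ab+cd forms a tauquernion
print(is_disjoint(('a', 'b'), ('a', 'c')))  # False: ab+ac shares pixel 'a'
```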
  • the tauquernions I, J, K are similar to quaternions in that they have the same multiplication table.
  • the purpose of forming the tauquernion(s) is to improve the retinal image (based on the realization that a pair of disjoint 2D-vector spinors contains more information in comparison with a single 2D-vector spinor), while otherwise simultaneously emulating the uniquely useful spatial-coordinate basis provided by ordinary quaternions. In addition, every tauquernion operator can itself be mapped into a higher-level object, thereby providing an opportunity for hierarchical condensation and consequent substantial efficiency improvements.
  • the user identifies that blurring of the tauquernion-imaged object is reduced as compared to a quaternion-imaged object.
  • the 2D-vector spinor wx is not directly coupled to the 2D-vector spinor yz because they share no common factors (while the spinor wx, for example, is directly coupled with wy or xy, since they share at least one factor or variable).
  • for I=ab-cd, J=ac+bd, and K=ad-bc, the conjugate triplet of tauquernions is defined as -ab-cd, -ac-bd, and ad+bc.
  • Step F30 denotes the next step of the analysis, at which the spinor pairs (tauquernions) are combined to produce a new meta-sensor by multiplying tauquernion elements together: wx+yz -> wxyz.
  • the resulting 4D-vector spinor with orientation sign is a "super-spinor" representing an Object in the scene.
  • This Object is treated, conceptually, at this point as a new 1-vector (as follows) - a kind of "super-pixel" - thus effecting a hierarchical condensation of the retinal image.
  • Such newly mapped 1D vectors represent the new (updated) image frame for use in building the next hierarchical level of image representation of the scene being imaged, followed by comparison of the new hierarchical level with the previous corresponding hierarchical level.
  • The build-up of the hierarchy of a given image frame continues according to reiteration of steps F14, F18, F22, F26, F30, and F34 until the threshold condition has been satisfied, step F38. Specifically, this "build a new level of hierarchy, then compare with corresponding old level of hierarchy" iteration continues until either the hierarchical aggregation process exhausts the top-most set of 1D Object vectors, or the informational logarithmic limit (the number log4(# pixels in imaging plane)) has been reached. It is appreciated that, for a 1-megapixel artificial retina (imaging plane), the process of building the hierarchy of a given image frame will take at most 10 iterations.
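Since each hierarchy level condenses four pixels (two disjoint pairs) into one super-pixel, the depth bound is the base-4 logarithm of the pixel count. A quick illustrative check of the 1-megapixel figure:

```python
import math

def max_generations(n_pixels):
    # each hierarchy level condenses 4 pixels into 1 super-pixel,
    # so at most ceil(log4(n_pixels)) levels are possible
    return math.ceil(math.log(n_pixels, 4))

print(max_generations(10**6))  # a 1-megapixel retina needs at most 10 iterations
```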
  • the Frame data representing such frame is stored on a tangible, non-transient computer-readable medium to form an updatable set of History data, at step F42, according to
  • History = -(History + Frame)
  • the leading minus sign preserves the signs of the polynomial's terms, but allows additive cancellation to zero.
  • This cancellation indicates that the cancelling elements exclude each other's simultaneous presence in the scene.
  • the History is the cumulative hologram of all the frames presented up to a given point. Thus, when performing the operation of (History + Frame), data corresponding to sensors (and consequents) that changed their signs from plus to minus or from minus to plus all cancel out, and the remainder remains unchanged.
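The cancellation mechanics of History = -(History + Frame) can be modelled with signed term coefficients. In this illustrative sketch the polynomial terms are represented as plain strings and the helper name is an assumption:

```python
from collections import Counter

def combine(history, frame):
    """History <- -(History + Frame): sum coefficients term by term,
    drop terms that cancel to zero, and flip the overall sign."""
    total = Counter(history)
    total.update(frame)   # adds coefficients of matching terms
    return {term: -coef for term, coef in total.items() if coef != 0}

history = {'a': 1, 'bf': -1, 'cd': 1}
frame   = {'a': -1, 'bf': -1, 'no': 1}   # sensor 'a' flipped sign -> cancels out
print(combine(history, frame))           # {'bf': 2, 'cd': -1, 'no': -1}
```

The term `'a'` vanishes because its coefficients sum to zero, mirroring the statement that elements which cancel exclude each other's simultaneous presence in the scene.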
  • recognition or tracking of an object that is moving through the scene being imaged can be effectuated by determining a difference between the History and a given image frame:
  • an x,y,z frame is defined with respect to the imaging plane a,b,c,..., for example in the form of a set of extended quaternion relationships that connect x,y,z and a,b,c,..., for example {xy-ab, xa+yb, xb-ay}.
  • Denoting this set XYZ, the desired projection, for a single frame, is then defined as the inner product.
  • Object is a (hierarchically-accumulated) polynomial drawn from some
  • Example 1. In the example provided below and in reference to the Python v.
  • the size of the artificial retina (or image plane), on which the scene of interest is imaged, is considered to be 4x4 pixels. In reference to Fig. 2, the 3D scene 200 being imaged onto the image plane corresponds to the 4x4x4 pixel space and includes 4 objects.
  • Object 1 is a 1x1(wide)x2(high) block positioned standing vertically on the floor surface corresponding to the xz-plane at the front of the scene;
  • Object 2 is a 1x1x2 beam positioned horizontally, separated by 1 pixel from Object 1, and stretching in the -z direction;
  • Object 3 is a 1x1x1 block positioned at the back of the scene adjacent to Object 2; and
  • Object 4 is a 1x1x1 block positioned on the floor surface under Object 3, from which it is separated by 1 pixel.
  • Figs. 3 A, 3B, 3C, and 3D illustrate the views of the scene 200 of Fig. 2.
  • Fig. 3A presents the view from the front (in -z direction)
  • Fig. 3B presents the view from the right (in -x direction)
  • Fig. 3C presents the view from the left (in +x direction)
  • Fig. 3D presents the view from the top (in -y direction).
  • Pixel-based representation of the above-mentioned views of the scene 200 can be expressed as follows:
  • OXXO 0140 (seeing the side facet of Object 1 and another facet of Object 4)
  • the algebra is formed with a given set of 1-dimensional vectors {a,b,c,...}.
  • the vector inner product is denoted by "|" and defined via a | B = aB.
  • the pixels of the artificial retina are respectively associated with 1D-vectors (denoted with lower case letters): a, b, c, d, ...
  • the image plane is represented as the Retina = a+b+c+...+n+o+p.
  • each of the 1D vectors is assigned a value of -1, for example:
  • an orientation is assigned to the spinor pairs (as calculated in the algorithm) as specified by the right-most column of the tables below. In one implementation, the orientation of wxyz (as the co-boundary of the three extant spinor pairs) is assigned the sign obtained by multiplying the three pair orientations together, (+-1)x(+-1)x(+-1). This assigns four plus and four minus orientations equably. (A different assignment can be used based on another combining rule in a related implementation.) The corresponding inputs and outputs are expressed according to the first pair orientation:
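This combining rule - the co-boundary sign as the product of the three pair orientations - can be checked exhaustively over the eight sign combinations (illustrative Python; the function name is an assumption):

```python
from itertools import product

def coboundary_sign(s1, s2, s3):
    # orientation of wxyz = product of the three spinor-pair orientations
    return s1 * s2 * s3

# over all eight (+-1, +-1, +-1) inputs the rule assigns
# four '+' and four '-' orientations, i.e. an equable split
signs = [coboundary_sign(*s) for s in product((+1, -1), repeat=3)]
print(signs.count(+1), signs.count(-1))  # 4 4
```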
  • an algorithm substantially similar to the algorithm described above can be used to create holograms in other than 3D space, e.g. octonion space; and, of course, it can as well operate with other mathematical isomorphs of the quaternions and their like.
  • the algorithm typically delivers its result in the form of structures of higher dimensionality than that desired in the application. For example, ordinary video wants n dimensions projected down to 2D screen frames, and 3D projection systems will want the n dimensions projected down to three spatial dimensions, and perhaps all three of these are different from the dimensions used for 3D frames. For this reason - that the projection from hologram space to user space is application-dependent - it is not specified and is considered to be "back-end" functionality. [0052] Therefore, embodiments of the present invention provide a computer implemented method for defining a collection of data representing a scene. The method includes the steps of (i) receiving N-dimensional (ND) image frames that represent the scene (where N > 1, for example, 2D image frames); (ii) based on a pixel-by-pixel comparison, with a programmed computer processor, of data between a chosen image frame from said 2D image frames and a reference image frame, forming a hierarchy of data representing the chosen image frame, wherein said hierarchy of data includes an operator formed from a disjoint pair of quaternions defined by said pixel-by-pixel comparison; and (iii) in a computer process, multiplying these operators.
  • the forming a hierarchy of data may include forming a hierarchy of data containing a triplet of above-defined operators, each of which has been formed from a pair of quaternions defined by said pixel-by- pixel comparison, and wherein the triplet represents a basis or a coordinate in a three- dimensional (3D) space.
  • the forming of a hierarchy includes a) mapping pairs of pixels, from the chosen image frame, to a set of signed 2D-vector logical operators, where each of the signed 2D-vector logical operators is formed from a pair of the pixels, and where each of the signed 2D-vector logical operators represents whether first and second input visual data corresponding to the pair of said pixels are the same or not.
  • the forming of a hierarchy may additionally include b) combining first and second of the 2D-vector logical operators to define tauquernion operators I, J, K such that the tauquernion operators and quaternions have the same multiplication table.
  • a method of the invention may further include mapping a product of tauquernion operators I, J, K to new 1D-vectors to form an updated chosen image frame, pixels of which are represented by such new 1D-vectors.
  • the method may further comprise repeating the steps of forming a hierarchy and multiplying said triplets until a threshold condition is satisfied, said threshold condition defined by an occurrence of either of (ai) the use of all pairs of pixels in the mapping at step a); and (bi) a number of iterations reaching a value of log4(number of pixels in image frame).
  • the method may additionally include a step of forming a polynomial representing the chosen image frame in multiple dimensions and/or generating an output initiating an external action based on such polynomial.
  • a summation of polynomials representing a difference between chosen image frames and the 2D image frames can be performed to form a representation of the scene containing a superposition of views of the scene at different angles.
  • the method may additionally include a step of extracting data representing the scene from multiple angles of view by defining an inner product of the representation of the scene with a set of operators associated with the multiple angles of view, and/or additionally include extracting data representing a 3D image of said scene by defining an inner product of the representation of the scene with itself.
  • Embodiments of the implementation of the algorithm have been described as including a processor controlled by instructions stored in a memory.
  • the memory may be random access memory (RAM), read-only memory (ROM), flash memory or any other memory, or combination thereof, suitable for storing control software or other instructions and data.
  • instructions or programs defining the functions of the present invention may be delivered to a processor in many forms, including, but not limited to, information permanently stored on non-writable storage media (e.g. read-only memory devices within a computer, such as ROM, or devices readable by a computer I/O attachment, such as CD-ROM or DVD disks), information alterably stored on writable storage media (e.g. floppy disks, removable flash memory and hard drives) or information conveyed to a computer through communication media, including wired or wireless computer networks.
  • the functions necessary to implement the invention may optionally or alternatively be embodied in part or in whole using firmware and/or hardware components, such as combinatorial logic, Application Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs) or other hardware or some combination of hardware, software and/or firmware components.
  • RetinaFront = RetinaInit - Front;  # Front view's pixels
  • RetinaRight = RetinaInit - Right; RetinaRight  # Right view's pixels
  • RetinaTop = RetinaInit - Top;  # Top view's pixels
  • TauqCobs = TauqCobs + sign(wx, yz)*abs(wx*yz)
  • RetinaInit = -a - b - c - d - e - f - g - h - i - j - k - l - m - n - o - p;
  • RetinaFront = RetinaInit - Front
  • RetinaRight = RetinaInit - Right
  • RetinaTop = RetinaInit - Top;  # Top view's pixels
  • Tauquernions = [b*f+j*n, b*f+n*o, f*j+n*o]
  • History1 = NewRetina + Q + Q2 + Objects1
  • NewRetina = a+b+c-d-e-f-g+h+i-j-k-l+m-n-o+p ... b*f*n*o - f*j*n*o
  • Objects2_1 = QuatsToTauqCobs(Q)  # Convert Q's quaternions to tauquernions.
  • History2 = History1 + NewRetina + Q + Objects2;
  • History2 = d + e + f + g + j + k + l + n + o + a*c - a*f + b*f - b*j - b*o - c*f - c*h - f*h - f*i - h*j - i*j + i*m - i*n - j*m + j*o - m*n - o*p - a*c*f*i - a*c*f*j - a*c*h*j + a*f*h*j + a*f*i*j - a*f*i*m + a*f*i*n + a*f*j*m ...
  • History2_1 = History2  # "2_1" because there's going to be Level "2_2" too.
  • Retina2_2 = -a*c*f*i - a*c*f*j - a*c*h*j + a*f*h*j + a*f*i*j - a*f*i*m + a*f*i*n - a*f*j*m - a*f*j*n + a*f*j*o - b*f*j*n - b*f*n*o + c*f*h*i - c*f*h*j + c*f*i*j - c*f*i*m + c*f*i*n + c*f*j*m + c*f*j*n + c*f*j*o - c*h*i*j - c*h*j*m - c*h*j*n - c*h ...
  • Q2_2 = PixelsToQuaternions(Changed2);
  • Objects2_2 = QuatsToTauqCobs(Q2_2); Objects2_2
  • Objects1 | History2_1 = -1 - fn + jn + no - bfj + bfo - bjn - bno - fjn + fjo - fno + jno
  • Objects1 | History2_2 = 1 - ai - am - ap + ch - ci - cm - fn + ip + jn + no - bfj + bfo - bjn - bno - fjn + fjo - fno + jno + achi + achm - achp - acip - acmp + ahip + ahmp - aimp - chip - chmp + cimp - himp
  • Objects3 = QuatsToTauqCobs(Q);
  • Objects3 = a*f*i*j + a*f*i*m + a*f*i*n + a*f*j*m - a*f*j*n + a*f*m*n - f*i*j*m + f*i*m*n - f*j*m*n
  • History3 = History2_2 + NewRetina + Q + Objects3; History3

Abstract

Method to process data using a "log-time computer vision algorithm", the advantage of which over existing computation-intensive methods is a logarithmic reduction in duration of computational iterations. The method creates a logarithmically-compacted hierarchical version of visual input in the form of a software-encoded hologram of such input based on tauquernion operators; and compares successive hierarchical versions, corresponding to successive data frames, to identify coherent, highly correlated objects in the successive scenes represented by such data frames. The "saccadic" movement of an eye (which helps to acquire the depth-of-field information) is addressed by data processing of two data frames, the results from which are compared to extract the depth component of the image.

Description

A LOG-SPACE LINEAR TIME ALGORITHM TO COMPUTE A 3D HOLOGRAM FROM SUCCESSIVE DATA FRAMES
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims priority of and benefit from U.S. Provisional
Patent Application No. 61/819,884 filed on May 06, 2013 and titled "Data Processing With a Log-Time Computer Vision Algorithm". The disclosure of the above-identified provisional patent application is incorporated by reference herein in its entirety.
BACKGROUND
[0002] It is well recognized that the processing and use of visual information is computationally expensive, in terms of both analysis and storage. For example, a task the solution to which is commonly required - specifically, the task of identification of various objects in the scene and separation of these objects from the background of the scene based on a static 2D image acquired by a computer processor - is computationally extremely involved. When a given object of interest (interchangeably referred to herein as object) is moving, the motion of the object has to be incorporated into the computational algorithm, which significantly complicates the already complex problem. In general, "computer vision" remains an unsolved problem.
[0003] An example of a computer-vision task is the "bin-of-parts" problem, according to which, from a bin of identical parts, one part has to be picked out with the use of machine vision and then grasped by a robot arm. Even today, long after the term was coined, "the bin of parts problem" still appears in the abstracts of academic papers. A more sophisticated example may be provided by a scene in which there is a tiger behind some sun-lit foliage, such that some portions of the tiger are hidden by greenery, and other portions - and one is unsure which they are - are actually the tiger and its stripes. Figuring out that there, indeed, is a tiger in the weeds is the kind of computer vision problem that is very difficult to solve. Conventional algorithms apply various assumptions and rules-of-thumb to make the problem tractable, but this is still a very computation-intensive affair. One of the shortcomings of the existing algorithms is that they are inherently and irreparably sequential and "local" in their very conceptualization, their "parallelization" notwithstanding.
[0004] The amount of information contained in visually acquired data is difficult to overestimate. For example, a single high-definition image frame may contain a million pixels, which at a rate of 24 frames per second results in acquisition of information corresponding to 24 mega-pixels per second and approximately 10^5 mega-pixels per hour. This amount will grow even further considering the 8 or 24 bits per pixel for black-and-white vs. color image frames, respectively. There remains an unsolved need for an algorithm that represents and processes such volumes of visual information in a compact and time-efficient fashion.
SUMMARY
[0005] Embodiments of the invention provide a method for data-processing of optical data representing a scene being imaged and acquired as a set of input image frames to form a hierarchical structure of data frames that, aggregately, provide a 3D representation (as termed herein - a hologram) of the set of input data frames. The method includes combining the readings from any two 1D-vector pixels x, y (chosen from the detector pixels such as the a, b, c, d pixels; in one implementation - two adjacent pixels) to form a "2D-vector spinor". The 2D-vector spinor is formed by mapping the pixels' outputs into the map of {+1, -1}. Such a 2D-vector spinor indicates whether the two pixels in a pair perceive the same scene or object. Such 2D-spinors are, individually, quaternion operators that define an oriented plane. The method further includes combining two disjoint 2D-vector spinors to form at least one of the tauquernion operators I, J, and K (which together form a triplet of extended quaternion operators). The terms "extended quaternion" and "tauquernion" are used herein interchangeably. Each of these tauquernion operators is a quaternion isomorph, and together, as a triplet, (I, J, K) = I + J + K, they provide a coordinate in the coordinate system, i.e. just like ordinary quaternions. The purpose of forming the tauquernion(s) is to improve the image registered by the retina based at least on comparison of the imaging information contained in a given 2D-vector spinor with that contained in any of the other 2D-vector spinors, while simultaneously emulating the uniquely useful spatial-coordinate basis provided by ordinary quaternions. From such comparison, the user identifies that blurring of the image is reduced. In a related embodiment, a conjugate extended triplet is formed as well. For example, if I=ab-cd, J=ac+bd, and K=ad-bc, then the conjugate extended triplet is defined as I'=ab+cd, J'=ac-bd, and K'=ad+bc.
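The pairwise construction described above can be illustrated symbolically. The following sketch is an illustration only - the helper names `tauquernion_triplet` and `conjugate_triplet` are hypothetical, not the claimed implementation - building the triplet and its conjugate from four pixel labels according to the patterns stated in this paragraph:

```python
def tauquernion_triplet(w, x, y, z):
    """Symbolic tauquernion triplet I, J, K built from four pixel labels,
    following the pattern I = wx - yz, J = wy + xz, K = wz - xy."""
    return ("%s%s-%s%s" % (w, x, y, z),
            "%s%s+%s%s" % (w, y, x, z),
            "%s%s-%s%s" % (w, z, x, y))

def conjugate_triplet(w, x, y, z):
    """Conjugate triplet I', J', K', with the sign of the second term flipped."""
    return ("%s%s+%s%s" % (w, x, y, z),
            "%s%s-%s%s" % (w, y, x, z),
            "%s%s+%s%s" % (w, z, x, y))

# With pixels a, b, c, d this reproduces I=ab-cd, J=ac+bd, K=ad-bc.
assert tauquernion_triplet('a', 'b', 'c', 'd') == ('ab-cd', 'ac+bd', 'ad-bc')
assert conjugate_triplet('a', 'b', 'c', 'd') == ('ab+cd', 'ac-bd', 'ad+bc')
```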
[0006] The method further includes mapping of the "tauquernion operators" into a "super-spinor" (4D-vector spinor) by combining (for example, multiplying) various tauquernions I, J, K (formed based on various pairs of pixel readings). The orientation of a 4D-vector spinor (just like the orientation of a 2D-vector spinor) yields one bit of information about the constituents of the tauquernion components of the 4D-vector spinor. The so-formed 4D-vector spinors (super-spinors) are further "extended" as if they were 1D-vectors (by analogy with the formation of the i, j, k quaternions). As a result of processing data corresponding to a given first imaging frame, a first polynomial is formed that represents the sum of all these spinors and is considered to be the hierarchy at this point. The above steps are repeated until no further improvement in a given image frame can result.
[0007] According to an embodiment of the method, this process is repeated for the second frame, third frame, and so on to obtain the respective second, third, etc. polynomials representing the views of the imaged scene at different angles. The summation of such polynomials represents a superposition of views of the scene at various angles. Elements of the polynomials that are cancelled as a result of summation are inconsistent with one another.
[0008] At the output of the last generation of processing, the user has a descriptor containing information from which the user can extract an image of the scene, by determining a projection of such descriptor onto a chosen coordinate system, resulting in a transverse distribution of irradiance across the field of view (for example, the distribution of irradiance across the imaging plane of the detector or retina), and phase information representative of the depth-of-field (for example, along the z-axis that is substantially co-incident with an optical axis of the employed imaging system). According to the idea of the invention, the "saccadic" movement of an eye or a tremor / vibration of the optical detector / retina(s) (all of which acquire the depth-of-field information) is substituted with data processing of such multiple image frames (or sequential image snap-shots), the results from which are compared to extract the depth component of the image.
DETAILED DESCRIPTION
[0009] A goal of visual recognition is extraction of data representing three-dimensional
(3D) information about the object, which includes the "depth of field" associated with the object. The extraction of such data becomes particularly complicated when no a priori knowledge exists about the background.
[0010] In the following, the terms "vision", "visual", "visually", "visibly", "seeing",
"seen" and similar and related terms are understood to refer to data and/or information representing an object or scene and acquired with the use of an optical detector (which may be part of a bigger computer-vision or machine-vision system). In this context, the term "retina" is applied to a light-sensitive, pixelated area of the optical detector, which registers or "sees" the scene. Generally, however, any type of information - not only visual information - can be processed according to embodiments of the invention.
[0011] The present invention provides a solution to the problem of visual recognition of an object on a background that obscures or conceals the object (such as an object that is camouflaged by the background), with an object of optical imaging being the scene that contains both the background and the object, by (1) creating a logarithmically compacted (hierarchical or generational, as discussed below) version of the visual input in the form of a software-level hologram of such input; and (2) comparing successive hierarchical versions, corresponding to successive data frames, to identify coherent, highly correlated objects in the successive scenes represented by such data frames.
[0012] According to the idea of the invention, the pixels a, b, c, d, ... etc., of the artificial
2D retina provide outputs (pixel or sensor values) forming an image frame (an image) when light from the scene being imaged is acquired by the retina. In the case of a black-and-white scene, such outputs are considered to be +1 or -1. Pixels a, b, c, d, ... are, therefore, 1-bit sensors, expressed mathematically as unit vectors. (For an RGB or color scene, one should use three such bits, each with its own intensity-multiplier; for grey levels, one such bit.) While consideration of color adds many bits of information to the image frame, it results only in a constant multiplier and, therefore, does not affect the scope of the invention. These vectors are considered to be the generators of a geometric (Clifford) algebra.
[0013] In the multi-dimensional vector algebra used here, the coefficients of variables are chosen from the Base 3 number system {0,1,2}, but which is "left-shifted" so that it becomes {-1,0,+1}. The +/-1 binary symmetries of the so-shifted Base 3 number system now match the digital electronic industry's use of the Base 2 = {0,1} binary symmetries. The product ab in this way becomes the exclusive-or of a and b, viz. (+/-1)x(+/-1). Thus the built-in xor logic of such a Base 3 system invites an efficient hardware realization, as corresponding circuits are commonly available and are used in similar applications. While this novel Base 3 usage is a preferred number base, the algorithm described in this application can in principle be applied using any base.
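The left shift and the product/xor correspondence can be checked directly. This is a small illustrative sketch (the helper name `left_shift` is hypothetical), mapping bits to +/-1 values so that "both bits equal" gives product +1:

```python
# Left-shift base 3: digits {0,1,2} become the balanced digits {-1,0,+1}.
def left_shift(digit):
    return digit - 1

assert [left_shift(d) for d in (0, 1, 2)] == [-1, 0, 1]

# Map a bit b in {0,1} to the value (-1)**b.  Then the product of two such
# +/-1 values is +1 exactly when the xor of the underlying bits is 0,
# i.e. multiplication of the nonzero digits mirrors exclusive-or.
for a_bit in (0, 1):
    for b_bit in (0, 1):
        product = (-1) ** a_bit * (-1) ** b_bit
        assert product == (-1) ** (a_bit ^ b_bit)
```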
[0014] Particularly confounding is the fact that the retina only sees a 2-dimensional shadow (or projection) of its surrounding environment. Thus a vision algorithm must extract 3D information indirectly via multiple (simultaneous or sequential) retinal images to establish the relative positions of, for example, Object 1 relative to Object 2 in 3D (three dimensions). Recall now that each frame is encoded as a polynomial expressing various wave dimensionalities, intensities and phases. The tauquernion encoding of retinal information establishes these relative positions by summing the hierarchical (polynomial) encodings of multiple frames, which embodies the physical fact that all of the various frames are simultaneously valid from a universal point of view. The algebra's logical inner consistency ensures that this summing will accurately reflect the combined and compressed visual scene represented by the polynomials.
[0015] Embodiments of the present invention provide a method to process data using a
"log-time computer vision algorithm", at least one advantage of which over the existing computation-intensive methods is the speed of computation. According to an embodiment, the algorithm combines multiple iterations (generations) of operators that are defined based on readings from pixels of the optical detector of the system. Generally, the more levels or generations in a hierarchy, the better the resolution of the identified object and/or the overall scene being imaged. The number of generations is limited only by the number of pixels in an input image frame - specifically, by the number of possible permutations of pixels (the readings from which are organized in pairs, according to an embodiment of the algorithm of the invention) - and represents a logarithmic reduction in duration of computational iterations in comparison with algorithms of related art. The actual number of generations in an algorithm, when used, is defined by the required precision of the outcome and may be smaller than the theoretical number of generations defined by the pixels of the image frame.
[0016] It is appreciated, therefore, that:
- The proposed algorithm creates, in a computer memory, a hierarchical structure that corresponds to a holographic representation of the set of data frames provided as input. A "frame" is a set of data, all elements of which are, conceptually, simultaneously valid.
- The input to the algorithm is provided, for example, by 2D image frames to be converted, as a result of the process represented by the algorithm, into a 3D visual hologram representing the combined points-of-view of the input image frames. The algorithm, therefore, is described in reference to a retina or artificial retina (be it an optical detector or an eye of a human). The proposed algorithm is structured to process, generally, data of any kind and dimensionality that is presented to it as a succession of frames. A contemporary implementation is assumed, with inbuilt parallel processing of pixels, so what happens to one pixel happens simultaneously to all pixels, resulting in the "linear time" processing.
- The presented linear-time data processing is built on a mathematical representation that provides a 3D mathematical coordinate space that simultaneously possesses/expresses the wave-like nature of a hologram. The coordinate space is defined as an "extended quaternion" space (a 3D tauquernion space built from triplets as discussed above) that is novel and superior to the conventional quaternion space. The related art has not been using a concept of tauquernions in mathematics to date. For the purpose of the data processing presented in this application, tauquernions are superior to quaternions because (1) their encoding of a scene better captures the actual correlations between pixels than a quaternion encoding, and (2) their mathematical structure yields an efficient method for hierarchically condensing and processing the visual information in a scene.
- The algorithm first compares the new input frame to the preceding frame, marking all pixels that have changed (topologically discarding all pixels that didn't change). Using this set of changed pixels, the algorithm (a) pairs pixels (in one specific case - neighboring pixels), then (b) combines these pairs into overlapping tauquernions (thus encoding spatial dimensionality and relationship of the pixels paired at the previous step), then (c) creates a new, higher-level picture element from each of the tauquernions formed in step (b). The algorithm then (d) compares the "pixel" in this new higher-level frame with the corresponding level/pixels of the preceding frame, and repeats steps (a), (b), (c) until the input frame is completely processed. Each such iteration increases the information in the hologram that was created by the previous iteration.
- Given this hologram, the user can efficiently
* Store, retrieve, process, and transmit the now logarithmically compressed, hologram-encoded information
* Identify arbitrary coherent objects - from tigers to phonemes - in the scene (implicit)
* Track an object that is moving through a scene
* Project an image/version of a scene from the point of view of a given frame
* Project an image/version of a scene from an external point of view
* Project a 3D image/version of the hologram as a whole, or of individual objects of the scene
* Project sequences of such images/versions
* Use the hologram as the basis for a computer vision (hearing, sensing) system
* Use the hologram as the basis for evolving a system's state
* Use the hologram to steer a 3D printing or other external process
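The condensation iteration of steps (a)-(d) above can be sketched in Python. This is a minimal illustration under stated assumptions - a fixed adjacent grouping of four pixels stands in for the pairing strategy, and the super-pixel sign is taken as the product of the three disjoint pair orientations read as +/-1; `spinor`, `superpixel`, `condense`, and `hierarchy` are hypothetical names, not the ga.py implementation:

```python
def spinor(u, v):
    """2D-vector spinor orientation: +1 when the two pixel readings differ,
    -1 when they are the same (the black/white xor mapping)."""
    return 1 if u != v else -1

def superpixel(w, x, y, z):
    """Super-pixel sign for four pixels: product of the three disjoint
    pairings (wx,yz), (wy,xz), (wz,xy), each read as a +/-1 product."""
    o1 = spinor(w, x) * spinor(y, z)
    o2 = spinor(w, y) * spinor(x, z)
    o3 = spinor(w, z) * spinor(x, y)
    return o1 * o2 * o3

def condense(frame):
    """One hierarchical step: every disjoint group of four +/-1 pixels
    becomes a single +/-1 super-pixel."""
    return [superpixel(*frame[i:i + 4]) for i in range(0, len(frame), 4)]

def hierarchy(frame):
    """Repeat condensation until one value remains:
    at most log4(#pixels) levels."""
    levels = [list(frame)]
    while len(levels[-1]) > 1:
        levels.append(condense(levels[-1]))
    return levels
```

For a 16-pixel frame this yields levels of sizes 16, 4, 1 - i.e., log4(16) = 2 condensation steps, illustrating the logarithmic depth of the hierarchy.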
[0017] Fig. 1 presents a flow-chart of an embodiment of the algorithm of processing frame data according to the present invention. At the initialization step F10, the chosen declarations characteristic of the imaging plane (the artificial retina) are accounted for and the pixels (expressed as 1D vectors) of the imaging plane are initialized according to a default value assignment strategy (in one embodiment - as discussed in the Example below), forming an initial image frame. The pixels of the imaging plane (artificial retina) are denoted as "a, b, c, d, e, ...", and the notations "w, x, y, z" are used as place-holders for these pixels, with unspecified signs. The construction of a given hierarchical level of representation of the scene observed by the artificial retina includes steps F14, F18, F22, F26, and F30.
[0018] At step F14, a frame of the input is formed by associating with or assigning to each of the 1D vectors of the imaging plane new values that correspond to a view of the scene being imaged onto the imaging plane, thereby forming an updated image frame.
[0019] At step F18, a set of data is created indicating, for each pixel, whether the status of such pixel changed between the updated frame and the frame that preceded it.
[0020] At step F22, two individual pixels of the image plane (for example, image pixels x and y) are associated in pairs to produce a 2D-vector spinor (or quaternion). Each 2D-vector spinor or quaternion combines the two single-bit data outputs from a given disjoint pair of pixels x and y into a single one-bit output datum representing the local spatial orientation of these pixels in the scene being imaged. For a black-and-white scene, for example, the mapping of individual pixels to a 2D-vector spinor is accomplished according to the following prescription:
x-y or -x+y (i.e., when one pixel is black AND one pixel is white): xy is assigned +1
x+y or -x-y (i.e., when both pixels are black XOR both pixels are white): xy is assigned -1
(Note: When operating in an RGB color space, each pixel includes three sub-pixels R, G, and B, which represent (respectively) the red, green, and blue spectral elements. Each of the R, G, B sub-pixels may be accompanied by a multiplier indicating the irradiance associated with the corresponding element, and this multiplier is drawn from {-1,0,+1} in the present explanation. Notwithstanding, the algebra of the I, J, K tauquernions works well for any number system. For example, one could write 3I+4J+2K and the algebra works just as well, as does the processing described here. In addition, the extra intensity information can itself be applied to imaging improvements according to a wide variety of summing strategies.) The state of a given 2D-vector spinor indicates whether the two inputs forming such spinor (i.e., in this example, x and y) are the same or different. In other words, a 2D-vector spinor formed on the basis of outputs from two pixels of the detector indicates whether the optical inputs at these two pixels are the same. Such 2D-vector spinors are, individually, quaternion operators that define oriented planes, the normals to which are provided by dimensions of 3D space (cf. the "right hand rule": thumb up / down = 1 bit of information). For example, quaternions xy, yz, zx (conventionally written as i, j, and k) are examples of such oriented planes and are commonly used to express symmetries of 3D-space that are not visible from the point of view of the pixels x, y, z. A corresponding 2D-vector spinor or quaternion is formed for each pair of imaging plane pixels. The representation of the scene by 2D-vector spinors does not yet provide sufficient spatial resolution because the quaternions are closely correlated or coupled with one another, which causes blurring. Generally, any two pixels x and y from the set of pixels in the imaging plane can be associated. (In one specific implementation, however, x and y are chosen to be spatially adjacent, neighboring pixels.)
Table 1 defines quaternion multiplication, and includes ij = -ji, ijk = -1, and not least (ij)x(ij) = -1, so spinors are representations of sqrt(-1). Stated differently, the entire algebra as presented is about perpendicular dimensions that encode a scene's phase and intensity.
     i    j    k
i   -1    k   -j
j   -k   -1    i
k    j   -i   -1
Table 1.
[0021] In addition to having a +/- (thumb up/down) orientation, the 2D-vectors retain their properties as algebraic operators (namely, state-rotators = state transformers). Thus the polynomials, and the frame History that they form when summed, can be used as operators to retrieve and project information that is encoded in said History. This provides for a very wide range of applications of the method.
[0022] At step F26, disjoint 2D-vector spinors are paired according to at least one of various strategies, depending on the characteristics of the input; in principle they can be paired randomly. Here, we use a simple "adjacent neighbor" strategy for purposes of illustration. The fundamental criterion is that the paired spinors be disjoint, i.e. ab+cd is okay (i.e., such pairs form a tauquernion) whereas ab+ac is not. In one implementation - only neighboring 2D-vector spinors are paired, to avoid unnecessary combinatorial explosion of the amount of data, and based on such pairs tauquernion operators (or extended quaternion operators) I, J, and K are defined according to the following mapping principles:
I = wx - yz; J = wy + xz; K = wz - xy
or one of the duals thereof, and represent a point in 3D space. The tauquernions I, J, K are similar to quaternions in that they have the same multiplication table. The purpose of forming the tauquernion(s) is to improve the retinal image (based on the realization that a pair of disjoint 2D-vector spinors contains more information in comparison with a single 2D-vector spinor), while otherwise simultaneously emulating the uniquely useful spatial-coordinate basis provided by ordinary quaternions. In addition, every tauquernion operator can itself be mapped into a higher-level object, thereby providing an opportunity for hierarchical condensation and consequent substantial efficiency improvements. Comparing objects, the user identifies that blurring of the tauquernion-imaged object is reduced as compared to a quaternion-imaged object.
     I    J    K
I   -1    K   -J
J   -K   -1    I
K    J   -I   -1
It is understood that, in a tauquernion, the 2D-vector spinor wx is not directly coupled to the 2D-vector spinor yz because they share no common factors (while the spinor wx, for example, is directly coupled with wy or xy since they share at least one factor or variable). The conjugate triplet of tauquernions is defined as
I' = wx + yz; J' = wy - xz; K' = wz + xy
and can be used as an alternative coordinate system.
[0023] Step F30 denotes the next step of the analysis, at which the spinor pairs (tauquernions) are combined to produce a new meta-sensor (a 4D-vector spinor) by multiplying various I, J, K tauquernion elements together: wx+yz -> wxyz. The resulting 4D-vector spinor with orientation sign is a "super-spinor" representing an Object in the scene. This Object is treated conceptually at this point as a new 1-vector (as follows) - a kind of "super-pixel" - thus effecting a hierarchical condensation of the retinal image. At step F34, the so-formed 4D-vector spinors are mapped to 1D-vectors, according to a scheme in which, for example, taking (wx + yz)*(wxyz) = X, then when X = wx+yz, i.e., the same, then sign(wxyz) = +1, else -1. Such newly mapped 1D-vectors represent the new (updated) image frame for use in building the next hierarchical level of image representation of the scene being imaged, followed by comparison of the new hierarchical level with the previous corresponding hierarchical level. As a result of processing data corresponding to a given first imaging frame, a first polynomial - for example, as shown in Appendix 1, History 1 = (-a+b-c-d-e+f-g-h-i+j-k-l-m+n+o-p) + (bf+fj+jn+no) + (-jo-bo-bj) + (-bfjn-bfno-fjno) - is formed that encodes the retinal image itself (the 1-vectors), two sets of 2-vector spinors encoding basic adjacencies, and three inferred Objects bfjn, bfno, and fjno.
[0024] The build-up of the hierarchy of a given image frame continues according to reiteration of steps F14, F18, F22, F26, F30, and F34 until the threshold condition has been satisfied, step F38. Specifically, this "build a new level of hierarchy, then compare with the corresponding old level of hierarchy" iteration continues until either the hierarchical aggregation process exhausts the top-most set of 1D Object vectors, or the informational logarithmic limit (the number log4(# pixels in imaging plane)) has been reached. It is appreciated that, for a 1-megapixel artificial retina (imaging plane), the process of building the hierarchy of a given image frame will take at most 10 iterations.
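The log4 bound just mentioned is easy to check. A small sketch (illustrative only; the helper name `generations` is hypothetical) counts the 4-to-1 condensation steps needed:

```python
def generations(num_pixels):
    """Number of 4-to-1 condensation steps needed to reduce num_pixels
    pixels to a single super-pixel: the smallest g with 4**g >= num_pixels."""
    g, n = 0, 1
    while n < num_pixels:
        n *= 4
        g += 1
    return g

# A 1-megapixel retina needs at most 10 iterations, as stated in the text,
# since 4**9 = 262,144 < 10**6 <= 4**10 = 1,048,576.
assert generations(10**6) == 10
assert generations(16) == 2
```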
[0025] Once the threshold condition has been satisfied, and a given frame has been completely processed, the Frame data representing such frame is stored on a tangible, non-transient computer-readable medium to form an updatable set of History data, at step F42, according to
History = -(History + Frame)
[0026] Here, the leading minus sign preserves the signs of the polynomial's terms, but allows additive cancellation to zero. This cancellation indicates that the cancelling elements exclude each other's simultaneous presence in the scene. The History is the cumulative hologram of all the frames presented up to a given point. Thus, when performing the operation of (History + Frame), data corresponding to sensors (and consequents) that changed their signs from plus to minus or from minus to plus all cancel out, and the remainder remains unchanged.
[0027] It is appreciated that recognition or tracking of an object that is moving through the scene being imaged can be effectuated by determining a difference between the History and a given image frame:
Changed = History - Frame
[0028] As a result of the operation (History - Frame), data corresponding to output(s) from sensors (and consequents) that changed their signs from plus to minus or from minus to plus (and, therefore, correspond to a moving object) all remain, while data corresponding to the remaining portion of History is cancelled. Accordingly, as will be understood by a person of skill in the art, "tracking" an object as its position and/or orientation with respect to the surrounding / scene changes from frame to frame now becomes a matter of exporting the successive Changed frames.
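The History update and the Changed extraction can be sketched with polynomials represented as dictionaries from term to a coefficient in {-1,0,+1}, added in balanced base 3. The helper names (`add`, `neg`, `update_history`, `changed`) are hypothetical illustrations, not the claimed implementation:

```python
def add(p, q):
    """Term-wise sum of two polynomials, coefficients reduced to {-1,0,+1}
    in balanced base 3 (so 1 + 1 = -1); zero-coefficient terms are dropped."""
    out = dict(p)
    for term, coeff in q.items():
        s = out.get(term, 0) + coeff
        out[term] = (s + 1) % 3 - 1
    return {t: c for t, c in out.items() if c}

def neg(p):
    return {t: -c for t, c in p.items()}

def update_history(history, frame):
    """History = -(History + Frame): unchanged terms survive (negated),
    while terms whose sign flipped between frames cancel to zero."""
    return neg(add(history, frame))

def changed(history, frame):
    """Changed = History - Frame: only the terms that flipped sign remain."""
    return add(history, neg(frame))

history = {('a',): 1, ('b',): -1}
frame = {('a',): 1, ('b',): 1}            # pixel b flipped, pixel a did not
assert ('b',) not in add(history, frame)  # the flipped term cancels in the sum
assert changed(history, frame) == {('b',): 1}  # only the moving part remains
```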
[0029] In the occasional situation where there is a complete change of scene from one frame to the next (acquired, e.g., with video input), there will be many cancellations and a slight hysteresis-like 'noise hiccup' in the resulting "image". Depending on the application, this hiccup can either simply be ignored (and passed through as output) or controlled via some special frame-deletion algorithm.
[0030] It is appreciated that, as a result of building a hierarchy of representation of the scene being imaged, the changes in the scene occurring from one image frame to another and stored in History are reflected in various projections of the hierarchical data, for example the scene from the point-of-view of some inferred Object or some external coordinate. Accordingly, at any stage of the data analysis such projections connoting aspects of the scene can be extracted and used for further processing. Such further processing might include visualizing multi-dimensional data relationships, or looking for patterns or exceptions in complex data sets. For example, to project (obtain) an image of a scene from the point of view (pov) of a given frame Fi (i.e., to define an image corresponding to Fi), the inner product of Fi and History has to be defined:
Fi_pov = Fi | History
[0031] To project an image of a scene from an external point-of-view (that is, a point-of-view of an "Observer") with coordinates x,y,z, define the orientation of the
x,y,z frame with respect to the imaging plane a,b,c,..., for example in the form of a set of extended quaternion relationships that connect x,y,z and a,b,c,..., for example {xy-ab, xa+yb, xb-ay}. Call this set XYZ. The desired projection, for a single frame, is then defined as the inner product
Observer_pov = XYZ | Frame
And, for an image incorporating the information accumulated from multiple points of view,
Observer_pov = XYZ | History
[0032] A 3D image of what is known about the scene based on the acquired Frames
(which is the History) can be determined with the use of a 3D projection operator effectuating the inner product as follows:
3DImage = History | History
A view of the 3D image from the point of view of an object in the scene is
ObserverObject_pov = Object | History
where Object is a (hierarchically-accumulated) polynomial drawn from some
Frame (or set of Frames), all of the terms of which share a common parent super-spinor or set thereof. For example, one could watch a chemical reaction from the point of view of a given atom in a molecule in a data set drawn either from a theoretical simulation or from measurements in an actual physical experiment. In reference to the flow-chart of Fig. 1, the determination of various images (projections) of the overall hologram is represented by step F46.
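Since a|a = 1 and a|b = 0 for distinct pixel 1-vectors (see the algebra rules in the Example below), the inner product of the pixel basis with History simply reads off each pixel's coefficient, recovering the transverse image. A toy sketch (the name `project_pixels` and the dictionary polynomial encoding are hypothetical illustrations):

```python
def project_pixels(history, pixels):
    """Inner-product projection of History onto the pixel 1-vectors:
    with a|a = 1 and a|b = 0, each pixel reads off its own coefficient,
    yielding the transverse image across the retina."""
    return {p: history.get((p,), 0) for p in pixels}

# Toy History over a 4-pixel retina: terms keyed by blade tuples.
history = {('a',): -1, ('b',): 1, ('c',): -1, ('d',): 1, ('a', 'b'): 1}
assert project_pixels(history, 'abcd') == {'a': -1, 'b': 1, 'c': -1, 'd': 1}
```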
[0033] Example 1. In the example provided below, and in reference to the Python v2.6
computer code appended to this disclosure, the size of the artificial retina (or image plane), on which the scene of interest is imaged, is considered to be 4x4 pixels. In reference to Fig. 2, the 3D scene 200 being imaged onto the image plane corresponds to the 4x4x4 pixel space and includes 4 objects. Object 1 is a 1x1 (wide) x 2 (high) block standing vertically on the floor surface corresponding to the xz-plane at the front of the scene; Object 2 is a 1x1x2 beam positioned horizontally, separated by 1 pixel from Object 1, and stretching in the -z direction; Object 3 is a 1x1x1 block positioned at the back of the scene adjacent to Object 2; and Object 4 is a 1x1x1 block positioned on the floor surface under Object 3, from which it is separated by 1 pixel.
[0034] Figs. 3A, 3B, 3C, and 3D illustrate the views of the scene 200 of Fig. 2. In particular, Fig. 3A presents the view from the front (in the -z direction), Fig. 3B presents the view from the right (in the -x direction), Fig. 3C presents the view from the left (in the +x direction), and Fig. 3D presents the view from the top (in the -y direction).
[0035] Pixel-based representation of the above-mentioned views of the scene 200 can be expressed as follows:
[0036] View from Front:
OXOO = 0200 (seeing the end facet of Object 2)
OXOO = 0300 (seeing the end facet of Object 3)
OXOO = 0100 (seeing the side facet of Object 1)
OXXO = 0140 (seeing the side facet of Object 1 and another facet of Object 4)
[0037] Similarly, the views from the right, the top, and the left can be expressed as:
[0038] View from Right:
XXXO = 2220
OOOX = 0003
XOOO = 1000
XOOX = 1004
View from Top:
OXXO = 0340
OXOO = 0200
OXOO = 0200
OXOO = 0200
View from Left:
OXXX = 0222
XOOO = 3000
OOOX = 0001
XOOX = 4001
In the following, the geometric (Clifford) algebra is derived over Base 3 = {0,1,-1}.
This algebra is a so-called "graded vector algebra", to which the following rules and conventions and basic declarations apply:
1. The algebra is formed with a given set of 1-dimensional vectors {a,b,c,...}.
2. Addition is associative and commutative: (a+b)+c = a+(b+c) and a+b = b+a.
3. Multiplication is associative and anti-commutative: (ab)(c) = (a)(bc) and ab = -ba.
4. "ab" is a "2-vector", "abc" is a "3-vector", ..., up to "m-vectors".
5. Sums of vectors and various m-vectors are called multi-vectors and denoted with capital letters.
6. Multiplication distributes over addition: A(B+C) = AB + AC.
7. Absolute value: |A+B| = |A| + |B| when A is perpendicular to B. Example: |ab-cd| = |ab| + |-cd| = ab+cd
8. The vector inner product is denoted by "|" and defined via a|B = aB.
9. The algebra's coefficients must be drawn from {0,1,-1}: 1+1 = -1, whence X + X + X = 0. Also a*a=1, a|a=1, a|b=0.
[0042] The programming language used to describe parts of the Algorithm is Python v2.6, running a purpose-built Base 3 geometric algebra interpreter ga.py.
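The interpreter ga.py itself is not reproduced in this excerpt. The following is a minimal sketch (with hypothetical names, not the ga.py implementation) of the core blade product such an interpreter needs: anti-commuting generators that square to +1, consistent with rules 3 and 9 above:

```python
def blade_mul(u, v):
    """Product of two basis blades, each given as a tuple of generator names.
    Returns (sign, blade): each adjacent transposition flips the sign
    (ab = -ba), and repeated generators cancel (a*a = 1)."""
    seq = list(u) + list(v)
    sign = 1
    # Bubble-sort into canonical order, flipping the sign per transposition.
    for i in range(len(seq)):
        for j in range(len(seq) - 1):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    # Cancel equal adjacent generators, since each squares to +1.
    out = []
    for g in seq:
        if out and out[-1] == g:
            out.pop()
        else:
            out.append(g)
    return sign, tuple(out)

# ab = -ba, and a 2-vector spinor squares to -1: (ab)(ab) = -1,
# matching the sqrt(-1) property noted for Table 1.
assert blade_mul(('b',), ('a',)) == (-1, ('a', 'b'))
assert blade_mul(('a', 'b'), ('a', 'b')) == (-1, ())
```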
[0043] According to an embodiment of the algorithm of the invention, the pixels of the artificial retina (image plane) are respectively associated with 1D-vectors (denoted with lower-case letters): a b c d
e f g h
i j k l
m n o p
Accordingly, the image plane is represented as
The Retina = a+b+c+...+n+o+p.
[0044] The signs of the values assigned to these 1D-vectors indicate the presence (in the case of +1) or absence (in the case of -1) of light detected by a corresponding pixel. As part of the initialization procedure, each of the 1D-vectors is assigned a value of -1, for example:
Initialization:
a = b = c = ... = o = p = -1, i.e., the initial Frame = -a-b-c-...-n-o-p = -(The Retina)
[0045] When the retina acquires an image of a particular scene, its pixels are re-assigned the values corresponding to the viewed scene, thereby forming an (image) Frame corresponding to a particular View, in case when the retina acquires an image corresponding to the view from the front of Fig. 2, for example, the corresponding Frame
0X00
0X00
0X00
0XX0
demands that the b, f, j, n, and o pixels be assigned a value of +1. Keeping only the signs, the status of the Retina in this case is expressed as
Retina = -a + b - c - d - e + f - g - h - i + j - k - l - m + n + o - p.
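This pixel-to-sign assignment can be sketched as follows (the helper names are ours, not the patent's; a lit pixel maps to +1, a dark one to -1):

```python
# Map a 4x4 frame of '0'/'X' characters onto the sixteen retina
# 1D-vectors a..p, assigning +1 to lit pixels and -1 to dark ones.
PIXELS = "abcdefghijklmnop"

def frame_to_retina(rows):
    # rows: four strings, top to bottom, e.g. the Front view below.
    flat = "".join(rows)
    return {p: (+1 if ch == 'X' else -1) for p, ch in zip(PIXELS, flat)}

front = frame_to_retina(["0X00", "0X00", "0X00", "0XX0"])
print(sorted(p for p, s in front.items() if s == +1))   # the +1 pixels
```

For the Front view this recovers exactly the lit pixels b, f, j, n, o named in the text.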
[0046] At the next step, an orientation is assigned to the spinor pairs (as calculated in the algorithm) as specified by the right-most column of the tables below. In one implementation, assign as the orientation of wxyz (as the co-boundary of the three extant spinor pairs) the sign obtained by multiplying the (above) three pair orientations together, (+-1)x(+-1)x(+-1). This assigns four plus and four minus orientations equably. (A different assignment, based on another combining rule, can be used in a related implementation.) The corresponding inputs and outputs are expressed according to the first pair orientation:
INPUTS: w x y z -> OUTPUT: wx+yz = (w xor x) plus (y xor z)
Accordingly,
For ROW 00: - - - - -> -
For ROW 01: - - - + -> 0
For ROW 02: - - + - -> 0
For ROW 03: - - + + -> -
For ROW 04: - + - - -> 0
For ROW 05: - + - + -> +
For ROW 06: - + + - -> +
For ROW 07: - + + + -> 0
For ROW 08: + - - - -> 0
For ROW 09: + - - + -> +
For ROW 10: + - + - -> +
For ROW 11: + - + + -> 0
For ROW 12: + + - - -> -
For ROW 13: + + - + -> 0
For ROW 14: + + + - -> 0
For ROW 15: + + + + -> -
[0048] The corresponding inputs and outputs are also expressed according to the second pair orientation:
INPUTS: w x y z -> OUTPUT: wy+xz
Accordingly,
For ROW 00: - - - - -> -
For ROW 01: - - - + -> 0
For ROW 02: - - + - -> 0
For ROW 03: - - + + -> +
For ROW 04: - + - - -> 0
For ROW 05: - + - + -> -
For ROW 06: - + + - -> +
For ROW 07: - + + + -> 0
For ROW 08: + - - - -> 0
For ROW 09: + - - + -> +
For ROW 10: + - + - -> -
For ROW 11: + - + + -> 0
For ROW 12: + + - - -> +
For ROW 13: + + - + -> 0
For ROW 14: + + + - -> 0
For ROW 15: + + + + -> -
[0049] The corresponding inputs and outputs are also expressed according to the third pair orientation:
INPUTS: w x y z -> OUTPUT: wz+xy
Accordingly,
For ROW 00: - - - - -> -
For ROW 01: - - - + -> 0
For ROW 02: - - + - -> 0
For ROW 03: - - + + -> +
For ROW 04: - + - - -> 0
For ROW 05: - + - + -> +
For ROW 06: - + + - -> -
For ROW 07: - + + + -> 0
For ROW 08: + - - - -> 0
For ROW 09: + - - + -> -
For ROW 10: + - + - -> +
For ROW 11: + - + + -> 0
For ROW 12: + + - - -> +
For ROW 13: + + - + -> 0
For ROW 14: + + + - -> 0
For ROW 15: + + + + -> -
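All three tables follow mechanically from the base-3 coefficient rule: each pair contributes the product of its two input signs, and the two contributions are added mod 3 (so that +1 plus +1 gives -1, and -1 plus -1 gives +1). The following sketch (our helper names, not the patent's code) regenerates all 16 rows of all three tables:

```python
from itertools import product

def gf3(n):
    # Reduce an integer to the balanced base-3 set {-1, 0, +1} (1 + 1 = -1).
    return ((n + 1) % 3) - 1

# Index pairings for the three orientations: wx+yz, wy+xz, wz+xy.
PAIRINGS = {"wx+yz": ((0, 1), (2, 3)),
            "wy+xz": ((0, 2), (1, 3)),
            "wz+xy": ((0, 3), (1, 2))}

def orientation(signs, pairing):
    # Each pair contributes the product of its two signs; add the two mod 3.
    (i, j), (k, l) = pairing
    return gf3(signs[i] * signs[j] + signs[k] * signs[l])

for row, signs in enumerate(product((-1, +1), repeat=4)):   # ROW 00 .. ROW 15
    outs = {name: orientation(signs, p) for name, p in PAIRINGS.items()}
    print("ROW %02d:" % row, signs, outs)
```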
[0050] A relevant portion of the Python code which, when loaded on programmable data-processing electronic circuitry (a processor), performs the steps of the above-described algorithm, is presented in Appendix A. The application of calculations made with this code to the above Example (of a 4x4 artificial retina) is shown in Appendix B.
[0051] An algorithm substantially similar to the algorithm described above can be used to create holograms in other than 3D space, eg. octonion space, and, of course, can operate with other mathematical isomorphs of the quaternions and their like. Also, the algorithm typically delivers its result in the form of structures of higher dimensionality than that desired in the application. For example, ordinary video wants n dimensions projected down to 2D screen frames, and 3D projection systems will want the n dimensions projected down to three spatial dimensions, and perhaps all three of these are different from the dimensions used for 3D frames. For this reason - that the projection from hologram space to user space is application-dependent - it is not specified here and is considered to be "back-end" functionality.

[0052] Therefore, embodiments of the present invention provide a computer implemented method for defining a collection of data representing a scene. The method includes the steps of (i)
acquiring, with a computer processor, input visual data corresponding to N-dimensional (ND) image frames that represent the scene (where N > 1; for example, 2D image frames); (ii) based on a pixel-by-pixel comparison, with a programmed computer processor, of data between a chosen image frame from said 2D image frames and a reference image frame, forming a hierarchy of data representing the chosen image frame, wherein said hierarchy of data includes an operator formed from a disjoint pair of quaternions defined by said pixel-by-pixel comparison; and (iii) in a computer process, multiplying these operators. The forming of a hierarchy of data may include forming a hierarchy of data containing a triplet of the above-defined operators, each of which has been formed from a pair of quaternions defined by said pixel-by-pixel comparison, and wherein the triplet represents a basis or a coordinate in a three-dimensional (3D) space. In a specific case, the forming of a hierarchy includes a) mapping pairs of pixels, from the chosen image frame, to a set of signed 2D-vector logical operators, where each of the signed 2D-vector logical operators is formed from a pair of the pixels, and where each of the signed 2D-vector logical operators represents whether first and second input visual data corresponding to the pair of said pixels are the same or not. In an even more specific case, the forming of a hierarchy may additionally include b) combining first and second of the 2D-vector logical operators to define tauquernion operators I, J, K such that the tauquernion operators and the quaternions have the same multiplication table. A method of the invention may further include mapping a product of tauquernion operators I, J, K to new 1D-vectors to form an updated chosen image frame, the pixels of which are represented by such new 1D-vectors.
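The patent's tauquernions are built from pairs of disjoint 2-vectors; as a simpler baseline for the claim that 2-vector operators can reproduce the quaternion multiplication table, single 2-vectors in an ordinary algebra of anti-commuting 1-vectors (a*a = 1, ab = -ba) already do so. The sketch below is our own illustration, not the ga.py interpreter: a basis blade is a (sign, letters) pair, and the 2-vectors i = ab, j = -ac, k = bc satisfy i*i = -1, i*j = k, j*k = i, k*i = j:

```python
def blade_mul(x, y):
    # x, y: (sign, tuple of 1-vector letters). Concatenate the letters, then
    # sort them by adjacent swaps (each swap flips the sign, since ab = -ba)
    # and cancel equal neighbours (since a*a = 1).
    sign, letters = x[0] * y[0], list(x[1] + y[1])
    for end in range(len(letters), 1, -1):      # bubble sort, tracking parity
        for p in range(end - 1):
            if letters[p] > letters[p + 1]:
                letters[p], letters[p + 1] = letters[p + 1], letters[p]
                sign = -sign
    out, q = [], 0
    while q < len(letters):                     # cancel aa, bb, cc, ...
        if q + 1 < len(letters) and letters[q] == letters[q + 1]:
            q += 2
        else:
            out.append(letters[q]); q += 1
    return (sign, tuple(out))

i, j, k = (1, ('a', 'b')), (-1, ('a', 'c')), (1, ('b', 'c'))
print(blade_mul(i, i))   # (-1, ()) : i*i = -1
print(blade_mul(i, j))   # (1, ('b', 'c')) : i*j = k
```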
Alternatively or in addition, the method may further comprise repeating the steps of forming a hierarchy and multiplying said triplets until a threshold condition is satisfied, said threshold condition defined by an occurrence of either of (ai) the use of all pairs of pixels in the mapping at step a); and (bi) a number of iterations reaching a value of log4(number of pixels in the image frame). The method may additionally include a step of forming a polynomial representing the chosen image frame in multiple dimensions and/or generating an output initiating an external action based on such a polynomial. Additionally, a summation of polynomials representing a difference between chosen image frames and the 2D image frames can be performed to form a representation of the scene containing a superposition of views of the scene at different angles. The method may additionally include a step of extracting data representing the scene from multiple angles of view by defining an inner product of the representation of the scene with a set of operators associated with the multiple angles of view, and/or additionally include extracting data representing a 3D image of said scene by defining an inner product of the representation of the scene with itself.
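The log4 bound in condition (bi) reflects that each hierarchical level combines roughly four lower-level pixels into one Object, so a frame of N pixels supports at most about log4(N) levels. A tiny illustration (our helper, under that assumption about the grouping factor):

```python
import math

def max_levels(n_pixels):
    # Each level maps ~4 lower-level pixels to one Object, so the hierarchy
    # is at most log base-4 of the pixel count deep.
    return round(math.log(n_pixels, 4))

print(max_levels(16))     # the 4x4 example retina -> 2 levels
print(max_levels(1024))   # a 32x32 retina -> 5 levels
```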
[0053] Embodiments of the implementation of the algorithm have been described as including a processor controlled by instructions stored in a memory. The memory may be random access memory (RAM), read-only memory (ROM), flash memory or any other memory, or combination thereof, suitable for storing control software or other instructions and data. Some of the functions performed by the disclosed algorithm have been described with reference to flowcharts and/or block diagrams. Those skilled in the art should readily appreciate that functions, operations, decisions, etc. of all or a portion of each block, or a combination of blocks, of the flowcharts or block diagrams may be implemented as computer program instructions, software, hardware, firmware or combinations thereof. Those skilled in the art should also readily appreciate that instructions or programs defining the functions of the present invention may be delivered to a processor in many forms, including, but not limited to, information permanently stored on non-writable storage media (e.g. read-only memory devices within a computer, such as ROM, or devices readable by a computer I/O attachment, such as CD-ROM or DVD disks), information alterably stored on writable storage media (e.g. floppy disks, removable flash memory and hard drives) or information conveyed to a computer through communication media, including wired or wireless computer networks. In addition, while the invention may be embodied in software, the functions necessary to implement the invention may optionally or alternatively be embodied in part or in whole using firmware and/or hardware components, such as combinatorial logic, Application Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs) or other hardware or some combination of hardware, software and/or firmware components.
[0054] Disclosed aspects of the invention, or portions of these aspects, may be combined in ways not listed above. Accordingly, the invention should not be viewed as being limited to the disclosed embodiment(s).

APPENDIX A

Python code representing an embodiment of the algorithm of the invention, with commentaries.
# This file contains running Python code (file extension .py) followed by
# an Annotated Example (4x4 Retina) calculated via this same code.
# Initialize the Example's retinal frames (see Figure 1 for 2d and 3d drawings)
# Step F14 of flow-chart.
# The initial Retina is featureless:
RetinaInit = -a - b - c - d - e - f - g - h - i - j - k - l - m - n - o - p;
Front = b + f + j + n + o;
RetinaFront = RetinaInit - Front;   # Front view's pixels
Right = a + b + c + h + i + m + p;
RetinaRight = RetinaInit - Right;   # Right view's pixels
Top = b + c + f + j + n;
RetinaTop = RetinaInit - Top;       # Top view's pixels
# Mark each pixel as either changed or unchanged.
# Find changed pixels & update (New) Retina.
# Step F18 of flow-chart
def ChangedPixels(New, Old):
    NewVec = list(New[:]); OldVec = list(Old[:])
    for i in range(len(New)):
        for j in range(len(Old)):
            if New[i] == Old[j]: NewVec[i] = 0; continue   # This newbie isn't new.
            if abs(New[i]) == abs(Old[j]): OldVec[j] = 0   # "==" => opposite signs,
                                                           # so dump Old for New.
    return [add(*NewVec), add(*OldVec) + add(*NewVec)]     # = [Changed, NewRetina]
# Any pixels that don't change are dropped from further processing,
# and the regions that they once separated now touch,
# From the set of changed pixels, say {i,j,p,q}, create 2-vectors
# ("quaternion atoms") from any/all two neighboring pixels (eg. {ij,pq}).
# Make Quaternions out of (changed) pixels. Step F22 of flow-chart.
# This simplified code ignores the 'neighbor' criterion: it is
# just for small examples.
# Record found-quaternions w/correct sign:
def PixelsToQuaternions(Pixels):
    PQ = 0
    for p in Pixels:
        for q in Pixels:
            if p*q == +1: continue            # Discard p*p, keep p*q.
            if p*q > 0: continue              # Needs no sign-adjustment.
            if p < 0 or q < 0: PQ = PQ + p*q  # pq's MINUS is legitimate.
            else: PQ = PQ - p*q               # MINUS was from qp = -pq.
    return PQ
# Compute Co-boundary (4-vector Object) of an arbitrary tauquernion +-ab+-cd.
#
# This code also works for 6-vector Objects, which appear spontaneously in the
# calculation. Other m-Objects than 0mod4 and 0mod6 don't occur - 4 & 6 are
# special because 6+6 = 4+4+4.
#
# Theorem: Boundary x Object = Boundary => Co-boundary(Boundary) = Object
# [Ie. the Integral of the Derivative = the Original]
#
# The main issue is assigning the correct sign (plus/minus) to the resulting co-
# boundary, +-ijpq. The table below gives the decision procedure, which depends
# on the signs of +-ij and +-pq relative to each other.
# Assign SIGN to new 4-vector Objects:
# ( ab+cd)abcd = - ab - cd          => minus
# ( ab-cd)abcd = + ab - cd  same    => plus
# (-ab+cd)abcd = - ab + cd  same    => plus
# (-ab-cd)abcd = + ab + cd          => minus
# ( ac+bd)abcd = + ac + bd  same    => plus
# ( ac-bd)abcd = - ac + bd          => minus
# (-ac+bd)abcd = + ac - bd          => minus
# (-ac-bd)abcd = - ac - bd  same    => plus
# ( ad+bc)abcd = - ad - bc          => minus
# ( ad-bc)abcd = + ad - bc  same    => plus
# (-ad+bc)abcd = - ad + bc  same    => plus
# (-ad-bc)abcd = + ad + bc          => minus
#
# In other words:
# Let (ij+pq)*(ijpq) = X.
#
# If X = ij+pq, then sign(ijpq) = +1, else -1.

# Compute sign of co-boundary:
def sign(ij, pq):
    if rank(ij*pq) % 4 == 0 or rank(ij*pq) % 6 == 0:    # NB: "%" = "modulo"
        if (ij+pq)*abs(ij*pq) == ij+pq: return +1
        else: return -1
# Do Quaternions --> Tauquernions --> Objects
# Steps F28, F30, F34 of flow chart
# Record found-tauquernions' co-boundaries;
# ignores 'neighbor' criterion -
def QuatsToTauqCobs(Quats):                  # this is just for small examples.
    TauqCobs = 0
    for wx in Quats:                         # Keep only 0mod4- and 0mod6-vectors.
        for yz in Quats:                     # Note 6+6 = 4+4+4 => 2 sequences:
            if wx == yz: continue
            if (rank(wx*yz) % 4) == 0 or (rank(wx*yz) % 6) == 0:  # NB: "%" = "modulo"
                TauqCobs = TauqCobs + sign(wx, yz)*abs(wx*yz)
    return TauqCobs
# Do Tauquernions --> Objects
def TauqsToTauqCobs(Tauqs):
    TauqCobs = 0
    for T in Tauqs:
        wx = T[0]; yz = T[1]     # Keep only 0mod4- and 0mod6-vectors,
        if wx == yz: continue    # noting that 6+6 = 4+4+4 => 2 sequences.
        if (rank(wx*yz) % 4) == 0 or (rank(wx*yz) % 6) == 0:  # NB: "%" = "modulo"
            TauqCobs = TauqCobs + sign(wx, yz)*abs(wx*yz)
    return TauqCobs
# Do Quaternions --> Tauquernions --> Objects [ignores 'neighbor' criterion -
# a variant for small examples].
# Records found-tauquernions' co-boundaries.
def QuatsToTauqCobs(Quats):
    TauqCobs = 0
    for wx in Quats:
        for yz in Quats:
            if wx == yz: continue
            if (rank(wx*yz) % 4) == 0 or (rank(wx*yz) % 6) == 0:  # NB: "%" = "modulo"
                TauqCobs = TauqCobs + sign(wx, yz)*abs(wx*yz)
    return TauqCobs

APPENDIX B

Data corresponding to Example 1, computed with the code of Appendix A.

# Initialize the Example's retinal frames
# (see elsewhere for 2d and 3d drawings of the retina).
RetinaInit = -a - b - c - d - e - f - g - h - i - j - k - l - m - n - o - p;
Front = b + f + j + n + o;
RetinaFront = RetinaInit - Front;   # Front view's pixels
Right = a + b + c + h + i + m + p;
RetinaRight = RetinaInit - Right;   # Right view's pixels
Top = b + c + f + j + n;
RetinaTop = RetinaInit - Top;       # Top view's pixels
# Run the Example:
# Get first frame: RetinaInit <-- RetinaFront
Changes = ChangedPixels(RetinaInit, RetinaFront);
Changed = Changes[0]; NewRetina = Changes[1]
# whence
Changed = b + f + j + n + o
NewRetina = - a + b - c - d - e + f - g - h - i + j - k - l - m + n + o - p
# NewRetina will be compared with the new Frame in the next iteration.
# Note in NewRetina that the only +'s are the Changed, as expected.
# Now make quaternion atoms Q out of the Changed:
Q = PixelsToQuaternions(Changed); Q
# whence
Q = b*f + b*j + b*n + b*o + f*j + f*n + f*o + j*n + j*o + n*o
# Toss non-neighbors (the example code ignores this 'neighbor' criterion):
Q = b*f + f*j + j*n + n*o
# Just do Quaternions --> Tauquernions by hand, from Q:
Tauquernions = [b*f+j*n, b*f+n*o, f*j+n*o]
# Now make Objects out of the tauquernions:
Objects1 = TauqsToTauqCobs(Tauquernions)   # 4-vector Objects
# These form the FIRST HIERARCHICAL LEVEL on RetinaFront:
Objects1 = - b*f*j*n - b*f*n*o - f*j*n*o   # These are pixels, ie.
                                           # (next level's pixels)
# NB: Contrary to the formal specification, to keep the complexity of this
# example code to a minimum, we do not map these 4-vectors to 1-vectors, but
# use them "as is", which greatly improves readability.
# Repeat for 2nd Object-level:
Changes = ChangedPixels(RetinaInit, Objects1);   # Later on, this will read
                                                 # "ChangedPixels(ObjectsP, ObjectsQ)"
# whence                                         # (use RetinaInit as a default)
Changes = [-b*f*j*n - b*f*n*o - f*j*n*o,
 -a-b-c-d-e-f-g-h-i-j-k-l-m-n-o-p - b*f*j*n - b*f*n*o - f*j*n*o]
Changed = Changes[0]
NewRetina = Changes[1]
# Compared to RetinaInit, these three Objects are also what has changed:
Changed2 = -b*f*j*n - b*f*n*o - f*j*n*o
# As before, make quaternion atoms out of the Changed.
Q2 = PixelsToQuaternions(Changed2)
# The Changed are clearly Adjacent, so make quaternion atoms (ie. 0mod2-
# vectors) by multiplying them:
# Q2 = (-bfjn)(-bfno) = -jo, (-bfjn)(-fjno) = -bo, (-bfno)(-fjno) = -bj
# [The fact that a bona fide quaternion triple {-jo, -bo, -bj} falls out of
# this is a coincidence, due to overlapping coverage.]
# {-jo, -bo, -bj} cannot form a 4-vector Object because they are not pair-wise disjoint.
# So no 2nd Level can be formed for the Front view, starting from a blank precursor.
# But the new quaternions Q2 = {-jo, -bo, -bj}, valid as derived, are added to the mix.
# Now make the first hologram, History1, by combining all the items we have found:
History1 = NewRetina + Q + Q2 + Objects1
# whence
History1 = (-a+b-c-d-e+f-g-h-i+j-k-l-m+n+o-p) +
 (bf+fj+jn+no) + (-jo-bo-bj) + (-bfjn-bfno-fjno)
# End of processing RetinaInit --> RetinaFront, which yields just one
# set (ie. just a single Level) of 4-vector Objects.
# Now move from Front view to Right view, ie. New Frame:
# RetinaFront --> RetinaRight
NewRetina = - a + b - c - d - e + f - g - h - i + j - k - l - m + n + o - p
# "NewRetina" is inherited from the preceding stage, ie. the analysis of Init vs. Front.
Changes = ChangedPixels(NewRetina, RetinaRight);
Changed = Changes[0]; NewRetina = Changes[1];
# whence
Changed = + a + b + c + h + i + m + p
NewRetina = + a+b+c-d-e-f-g+h+i-j-k-l+m-n-o+p - b*f*j*n - b*f*n*o - f*j*n*o
Q = PixelsToQuaternions(Changed);
# whence
Q - + a*c ■■ a*f + a*h + a*i ■■ a*j + a*ra ■■ a*n a*o + a*p ■■ c*f + c*h + c*i ■■ c*j f*h - f*i - f*j - f*m - f*n - f*o - f*p + h*i h*j + h*m - h*n - h*o + h*p - i*j -> i _ i*n _ *0 + i*p _ _ j*n _ j*0 _ j*p ir;*n - m*o + m*p - n*o - n*p - o*p
# Delete those quaternions in Q that stem from non-ADJACENT pixels:
# The pixels ADJACENT to a pixel [i,j], ie. "neighbors", are those
# that are at points [i+-1, j+-1], ie. at both 45 and 90 degree angles.
# Adjacency sets in the form "{[i,j]; neighbors}":
# {a; c,-f}, {c; a,-f,h}, {-f; c,h,i,-j}, {h; c,-f,-j}, {i; -f,-j,m,-n},
# {-j; -f,h,i,m,-n,-o}, {m; i,-j,-n}, {-n; -j,i,m,-o}, {-o; -j,-n,p}, {p; -o}
# Therefore:
Q = a*c a*f c*f c*h f *h f *i f*j - h*j - i*j i*m i*n j *n j*o m*n n*o o*p
Objects2_1 = QuatsToTauqCobs(Q)   # Convert Q's quaternions to tauquernions,
                                  # and thence to 4-vector Objects:
# whence
Objects2_1 =
-- a*c*f* i -- a*c*f*j -- a*c*h*j -- a*c*i*j + a*c*i* -- a*G*i*n -- a *c -- a *c* *n a*c*j*o - a*c*m*n - a*c*n*o - a*c*o*p + a*f*h*j a*f*i*j - a*f*i* a*f*i*n + a*f * j*m + a*f*j*n + a*f*j*o + a*f*m*n + a*f*n*o + a*f*o*p + c*f *h c*f *h*j + c*f *i*j -- G*f*i*m + G*f*i*n + c*f*j*m ÷ G*f*j*n + c*f*j*o + G*f *m G*f *n*o + c*f*o*p - c*h*i*j + c*h*i*m - c*h*i*n - c*h*j*m - c*h*j*n - c*h*j c*h tn*n - c*h*n*o - c*h*o*p - f*h*i*ni + f*h*i*n + f*h*j*iri + f*h*j*n + f*h*j f *h*m*n + f*h*n*o + f*h*o*p - f*i*j*trs + f*i*j*c + f*i*rr:*n + f*i*n*c + f*i*o* f*j *m*n + f*j*n*o + f*j*o*p + h*i*j*m - h*i*j*n + h*j*m*n + h*j*n*o + h*j*o* i*j *m*o + i*j*o*p - i*m*n*o - i*m*o*p + i*n*o*p - j*m*n*o + j*m*o*p + j*n*o ;T;*n *o*p
# Delete non-Adjacent Objects: (manual calculation for this Example) 0bjects2 1 = - a*c*f*i - a*c*f*j - a*c*h*j + a*f*h*j + a*f*i*j - a*f*i*m + a*f*i*n a*f*j*K! + a*f*j*n + a*f*j*o + c*f*h*i - c*f*h*j + c*f*i*j - c*f*i*in + c*f*i*n + c*f*j*m + c*f*j*n + c*f*j*o - c*h*i*j - c*h*j*m - c* *j*n - c*h*j*o - f*h*i*m + f*h*i*n + f*h*j*m + f*h*j*n + f*h*j*o - f*i*j*m + f*i*j*o + f*i*m*n + f*i*n*o + f*j*rn*n + f*j*n*o + f*j*o*p + h*i*j*m - h*i*j*n + h*j*rr:*n + h*]*n*o + h*j*o*p + i*j*m*o + i*j*o*p - i*m*n*o + i*n*o*p ~ j *m*n*o + j*m*o*p + j*n*o*p + m*n*o*p if 0bjscts2_l is the first Level built on RetmaRight. These form the FIRST HIERARCHICAL LEVEL on RetinaRight
# Now update the hologram by combining all items found, plus History to date:
History2 = History1 + NewRetina + Q + Objects2;
# whence
History2 = + d + e + f + g + j + k + l + n + o + a*c - a*f + b*f - b*j - b*o - c*f c*h - f*h - f*i - h*j - i*j + i*in - i*n - j*m + j*o - m*n - o*p - a*c*f*i - a*c*f*j a*c*h*j + a*f*h*j + a* *i*j - a*f*i*m + a* *i*n + a*f*j*rr. + a*f*j*n + a* *j*c + b*f*j*n + b*f*n*o + c*f*h*i - c*f*h*j + c*f*i*j - c*f*i*m + c*f*i*n + c*f*j*m + c*f*j*n + c*f*j*o c*h*i*j - c*h*j*m - c*h*j*n - c*h*j*o - f*h*i*it\ + f*h*i*n + £*h*j*m + f*h*j*n + f*h*j*c - f*i*j*m ÷ f*i*j*c + f*i*rr.*n + f*i*n*c + f*j*m*n - f*j*n*o + f*j*o*p + h*i*j*m - h*i*j*n + h*j*m*n + h*j*n*o + h*j*o*p + i*j*m*o + i*j*o*p - i*m*n*o + i*n*o*p - j *m*n*o + j *m*o*p + j *n*o*p + m*n*o*p
# Rename:
History2_1 = History2   # "2_1" because there's going to be a Level "2_2" too.
# Repeat all this for the next Level:
Changes = ChangedPixels(Objects1, Objects2); Changes   # Ie. always compare
                                                       # corresponding Levels/Objects,
                                                       # here Level-1 Objects.
Changed = Changes[0]; NewRetina2_1 = Changes[1];
# The "changed" here are in fact just Objects2, whilst Objects1 are all
# (coincidentally) "unchanged" (=> delete), but this needn't be so.
Changed -- a*c*f*i -- a*c*f*i -- a*c*b*i + a*f*h*j ÷ a*f*i*j -- a*f*i*m + a*f*i*n a*f*j*m + a*f*j*n + a*f*j*o + c*f*h*i 1 c*f*h*j + c*f*i*j c*f*i*m + c*f*i*n + c*f*j*m + c*f*j*n + c*f*j*o - c*h*i*j - c*h*j*m - c*h*j*n c*h*j*o f*h*i*iu + f *h.*i*n + f*h*j *m + f*h.*j*n + f*h.*j*o -- fi*i*j*m + f *i*j*o + fi*i*m*n + f *i*n*o + f*j*m*n + f*j*n*o f*j*o*p + - h*i*j*n + h*j*m*n + h*j*n*o h*j*o*p + i*j*m*o + i*j*o*p i*m*n*o +
- j*rfi*n*o + j *m*o*p + j*n*o*p + rri*n*o*p
# And the Changed are therefore also the new Retina:
Retma2 2 = - a*o* f* 'i - a*c*f' *j - a*c*h*j + a*f*h *j + a*f* 1 *j - a*f* I* Tfi + a*f* 1 a*f *j*m" a*f *j *n + a*f*j*o - b*L:*j*n - b*f*n*o + c*f *h*i c*f *h* j + c*f*i* j c*f*i*m + c*f*i*n + c*f*j*m + c*f*j*n + c*f*j*o - c*h*i*j c*h*j*m c*h*j*n c*h*j*o f*h*i*m + f*h*i*n + f*h*j*m + f*h*j*n + f*h*j*o f*i*j*m + f*i*j*o + f *i*tr,*n + f*i*n*o + f*j*m*n + f*j*n*o ÷ f*j*o*p + n*i*j*m -- h.*i*j *n + h*j*m*n + h*j*n*o + h*j*o*p + i*j*m*o + i*j*°*P - i*rri*n*o + i*n*c*p j *ir.*n*o + j *m*o*p + j*n*o*p + m*n*o*p if Rename:
Changed2 = Changed
# Pair Changed items to make 0mod2-vectors ("quaternions"):
Q2_2 = PixelsToQuaternions(Changed2);
# whence
Q2_2 =
- a*f + a*h + a*i + a*j - a* - a" - a* o + a*p - c*f - c*h -i- c*j -i- c *m - c*n - f*n f*p + h*i + h*m - h*o - h*p - i*m - i*n + i*p - j*m - j*p - m*n + m*o - m*p - n*o + n*p + a*c*f*h + a*c*f*m + a* c*f*n a*c* f *o - a*c*h*tn - a*c*h*n + a* c *h*o - a*c*i*j a*c*i*n - a*c*j*m - a*c*j*n - a*C 'ίΰ' "O - a *c*n*o - a*f*h*i + a*f*h*m + a*f*h*n + a* *h*c - a*f *h*p + a*i:*i*j + a*?* <i* 'n - a * *i*c - a* *i*p + a*f*j*c - a* *m*n † a*f*m*o + a*f *o*p - a*h*i*o - a*h' "j' 'n - a *h*m*o - a*h*n*o - a*i*j*o + a*i*m*n - a*i*n*o - a*j*m*n + a*j*m*o - a*j^ 'n! "O + a *rf:*n*o - a*m*o*p - a*n*o*p + c*f*h*i - c*f*h*i -- c*f*h*m + c*f*h*o + C*f' <i> 'j + c*f*i*m -- c*f*i*o -- c*f*i*p -- c*f*j*m + c*f*j*n + c*f *j*o + c*f*m*n - C*f' 'tn* Ό - c*f*n*o - c*f*o*p - c*h*i*m + c*h*j*m - c*h*j*n - c*h*tn*n - c*h*ra*o + c* -> -p + c *h*n*o + c*h*n*p + c*h*o*p - c*i*j*o + c*i*m*n + c*i*m*o + c*i*o*p - c*j> <m> <n + c *j*m*o - c*j*n*o + c*m*n*o c*m*o*p - c*n*o*p + f*h i*j - £*h*i*m - f*h- i» -r £ *h*j*m - f*h*j*o -r £*h*m*n -i- f*h*m*o - f*h*n*o - f*h*n*p - f*i*j*m + I*!"1 'j* 'n + f*i*j*o - f*i*j*p - f*i*m*n - f*i*m*p - f*i*n*o - f*i*n*p ■■ f*i*o*p - f*j> <m> '11 -- f*j*ra*p - f*j*n*o - f*j*n*p f*j*o*p + f*m*n*o - f*m *o* + f*n*o*p - h*i- j" 'Q - h *i*m*n - h*i*n*p - h*i*o*p -i- h*j*m*o - h*j *m*p - h*m*n*o - h*m*o*p - h*n" "O* p + i *j*m*n - i*j*m*o + i*j* *p - i*j*n*p + i*m*n*o -- i*m*n*p + i*m*o*p + i*n> Ό'rp - j*m*n*o + j*m*n*p + m*n*o*p + a*c*f*h*i*i - a*c*f*h*i*m + a*c*f*h*j*m + a*c*f h» j*n - a*c*f*h*m*n + a*c*£*h*m*o - a*c*f*h*n*o + a*c*f*h*o*p - a*c*f*i*j*m - a*c*f " j*n - a*c*r*i* *o - a*c*f*i*o*p - a*c*f*j*m*o - a*c*f*j*o*p + a*c*f*m*n*o -- a*c*f' 'ΙΓι' Ό*ρ + a*c*f*n*o*p -- a*c*h*i*j*n - a*c*h*i*m*o a*c* *i.*o*p - a*c*h*m*n*o - a*c*h" Ό*ρ - a*c*h*n*o*p + a*c*i*j*m*n a*c*i*j*in*o - a*c*i*j*o*p - a*f*h*i*j*m + a*f *h- Ίϊι*η - a*f *h*i*m*o - a*£*h*i*o*p a*f *h*m*n*o - a*f*h*m*o*p -- a*f*h*n*o*p -- a*f*i' rm*n + a*f*i*j*m*o + a*f*i*j*n*o + a*f*i*j*n*p + a*f*i*j*o*p + a*f*i*n*o*p + 
a*f*j" 'η*ρ a*f*m*n*o*p + a*h*i*j*m*n
+ a*i*j*m*n*o - a*i*j*tn*o*p + a*i*j i <n* Ό*ρ - c*f*h*i*j*m - c* *h*i*j*n - c* *h*i*m*n c*f*h*i*m*o + c*f*h*i*n*o - c*f*h' <i> Ό*ρ - c*f* *j*m*n + c*f* *j*m*o - c*f*h*j*n*o + c*f*h*j*o*p + c*f*h*tt\*n*o - c*f*h" Ό* c*f*i*j*m*n + c*f*i*j*n*o c*f*i*j*n*p - c* *i*j*o*p † c* *i*n*o*p + c*f*j j 'η*ρ + c*f *m*n*o*p - c*h*i*j*m*n c*h*i* j *n*o - c*h*i*j*n*p + c*h*i*m*n*c + c*h*i' co*p + c*h*i*n*o*p - c* *j*m*n*o - c* *j*m*n*p - c*h*j*m*o*p + c*h*j*n*o*p + c*h*m'' 'n! "°*Ρ + c*i*j*m*n*o - c*i*j*m*o*p + c*i* *n*o*p
£*h*i*j*m*o - f*h*i*j*n*p † f*h*i* 'j' Ό*ρ - *h*i*m*n*o - f*h*i*m*o*p - *h*i*n*o*p - f* *j*tn*n*o + f*h*j*m*n*p - f*h*j' ΊΎ'co*p - f*i*j*m*n*o + f*i*j*m*n*p + f*i*j*m*o*p + i*i*j*n*o*p + h*i*j*m*n*o - h*i*j" "η*ρ - h*i*j*m*o*p - h*i*j*n*o*p + h*i*m*n*o*p + h* *m*n*o*p + i*j*m*n*o*p + a*c* ' <h> 'i*j*n*o + a*c*f *i*j*m*n*o + a*c* f *i*j*m*o*p + a*c*f*i*j*n*c*p + a*c*f*i*m* n*o*p + a*c* f *j*m*n*c*p + a*c* *i*j*m*n* G +
a*c*h*j*m*n*o*p + a*f*h*i* *n*o*p - c*f*h*i* j *m*n*o - c*f*h*i* j *n*o*p +
c*f *h*j*m*n*o*p -- c*h*i* j *m*n*o*p -- f *h*i* j *m*n*c*p
# Note the combinatorial explosion of the number of items/associations.
# Now make it worse by combining these quaternions in disjoint pairs to produce
# new Objects, on Level 2_2 of RetinaRight:
Objects2_2 = QuatsToTauqCobs(Q2_2); Objects2_2
if These are the Objects populating Level 2 2 on RetinaRight: 0 jects2_2 - - a*c*f*i -t- a*< £*ίϊ! -t- a* * -p ¾' ,·^ a*c* f*p - a*c :* *.i. - a* a* Q + a * * h * Ό _ a * * ί * + a*c * j_ * o - a * c * j * n - a*c* j * o - a*c*m*o - a* c*m*p - a*c*n*o - a*c*n*p - a*f *h*i - a*f *h*j - a*f *h*m - a*f*i*j - a*f*i*m - a* "Γ τ_ ^ Ϊ1 i O - a*f*i*p - a*f *rri*o + a*f *rri*t> ·- a*f *n*p a*f *o*p + a*h*i*ffi - a*h*i*n + a* *i*o + a*h*j*m + a*h* j *n - a* X) + a*h*tn*n - a* *m*o - a*h*m*P + a* h*n*o - a*h*n*p - a*i*j*m + a*i* j *n + a*i 'k3*P + a*i*m*o + a*i*m*p + a*i*n*o + a* i*o*p - a*j*rfi*n - a*j*rri*o + a* j *n*o - a*j *n*p - a*rri*n*o a*rri*o*p + ] * 3 ^ F * ]] * -j - 0*^*]* -- c*f*h.*n + c*f *h*o + c * f * i * j + * "F * j * 11 -- * F * ] * ] c*f *i*o + C* vr -j r m _ £ -k ' -k -j vr + c*f*j*o - c*f *m*n - c*f *!t!*0 - c*f *m*p + c*f *n*p - c*f*o*p - c* h*i*j - o*h*i*tn + *h*i*n + + c*h + c*h*j*n + c*h*j*o + c*h.*j*p + h*tr.*n - c*h*m*o -- c*h*n*o -- c*h*n*p + c*h *o*p -- c*i*j *n -- + c*i*j*p + C " i*n-.*o + c*i*tn*p + c*i*n*o + c*i*n*p - c*i ¾o¾p + c* j *rt;*n - c*j *m*o - j * j *n*'o -t- c * J * o * o *ITl* *0 ~ -k rr^-k Y'k'Q - c*n !:*h* i * j *h* i *
* Π * 3 * Ή F * + f *h*i*m + p * *γ] + f*h. * *0 -- F * ] * "j * + * Q f*h*m*p + f* h*o*p - f*i*j*at _ £ -k j -k -j 'λ' j-, .;. f*i* *o - f * i *j*p - f*i*ni*n - f ' -\ -k m 'λ' Q + t St m St - f * i * n * o ~ * j_ * n * o f * j *r*o - f*j r * -j * ~k -ø - f*m*n*o +
h*j*m*n +
a*c*f *h*i * m - a*c*f*h*i*n -- a*c*f*h*i*o - a*c*f *h*i*p - a*c *f *h* j *p a*c*f *h a*c*f*h*m - a*c*f*h*m*p + Ά' Ά' -F ] * j-] * a*c*f *h*n*p a*c + a*c*f *i * -j * rr;
Sr v 3 * j: * j_ #rn#Q _ * Q * * j_ * m * _ a * Q * * ί * Ώ * Ό - a * Q ;- * * Q * - + * Q * f * j
*o -- a*C*f *η *™*Ρ -- - a*c*f * j *o*p - a*c + a*c*f *!Γι a*c*h*i*j *m + a*c*h*i*m*n - - a*c* * i *ϊϊ\*ρ - a*c a*c* *i a * c * h * j * in *n + a*c*h*j*n*c - * * j * * - a * * h * j * c * - a * Q * h * m * * + a * * h * τί * n * Ό a*c*i* j *tn *n - a*c*i*j *m*o - + a*c*i - a*c - a*c*i* j *o*p a* c n - a*c*i*m*n*p + a*c*i*n*o*p - a*c*m*n*o*p - a*f * H * ~j * a*f*h*i
3 # F *] _ * m *o + a*f*h*i*n*o + a * * "h * j_ * "Π * "O _ a* f: * j *ff[* n + a * f + a*f * * j a*f *h*m*n - a*f*h*m*n*p + a*f *h*m*o*p + a*f*h *n*o*p - a*f ^i* j *m*o + avt£vcivc j 'tn^ a * " * ί. * St-QSf a^^^i^j^ri^p -- + a*f*j + a*f - ~j * rr, * o * + J2 * p) St it m * -Q * Q Sr + a* *i*j*n*o - a * " * i * j * * - a*h*i * Ώ * O a * h * i * Ώ * o * + a* * j *m*n*o j * j *n _ *n*p
*n * * ] * -j -- c*f *h* j *m*p + c*f*h. * j *n*o -- c * f + -p * ] * m *n*o c*f*h*m*o *P - c*f *n;*n - c*f *i*j *n\*p - c*f*i * j n*o - c*f *n*p + c*f *i*j *o*p
*o -f * * ί * ff' * 11* P + + C *j *rr;*n*p - c*f + *o*p
Q * Π * * -j * rn *o + c*h.*i*i *m*p + c* *i* j *n*p -- c* *i -- c*h + c*h*i*n * O * Ό c*h*j *m*n '"p + c»h*] *ίη*ο* - -j *rt;*n p -f- c*i*-j *r,;vr0vrp -r C 'Λ' 1 *-j *n*o*P - C r1*{r, rn *o*p c* j *m*n*o *'C * in * _ * j * iTi * o - f*h*i * -j * * [; + f*h - * in * i * in * ii * c f *h* ^ ^ * li * ~j * in * o * f *i* *m*n*p + f * i * ή * Γί*Τ] *ο*ΐ) -- h*i* j *m *n*o h*i*j *rr;*n " - h*i*j*n*o*p - i*;] *in¾n*o*p - a*c*f *h*i*j 'Λ'ίίνΛΌ - a*c*f*h* *j*n*o - * r * ji * j_ * "j * O * C * r * * j_ * [ fa *h*i*n*o*ic
a*c*i* *j *tn*n*o + a*c*i_* * j *m*n*p + a*c*i*h* j *m*o*p - a*c*i* * j *n*o*p - a * c * f * h * ill * n * o * ρ - a*c*f*i* j *πι*ο*ρ - a*c*f*i* j *η*ο*ρ -f- a*c*f* j *πι*η*ο*ρ - Q ^ ^ _ ^ j f -Q -Q — a ^ c ^ in ^ i ^ il ^ xi ^ o ^ o ~ 3 ^ ^ in ^ i ^ J | ^ ιι ^ m ^ o i- a ^ ^ in ^ i ^ j ^ ΪΪΊ ^ o ^ o ~
a*f *h*i*j *n*o*p + a*f *1ι*ϊ*ΐη*η*ο*ρ + a*f* * j*tn*n*o*p + a*f *i* j*m*n*o*p -
Q ]- -j -' -j -' τπ -Q -' Q -' - .j.. -' * ]- -' -j * j *n * r") * - * * ]"[ * ~i * rri * * * ρ ■- r1 * -P * -j * r * * * ρ -ι.
i * ^: * ί * j * ΪΊ * "Π * O * ΐ) "I" * h * Ϊ * j * HI * Ώ * O * Ό
# These form the SECOND HIERARCHICAL LEVEL on RetinaRight.
# Update the hologram by adding in the new Level:
History2 2 = History2 1 + Changed2 2 + 0bjects2 2; History2 2 = + d + e + f + c + j ^- 3- + 1 + n + o + a*c - a*f + b*f - t *j - - b*o - Ck c*h - f*h - f*.i - h*j - i*j -t- i*in - i*n - j*m + j*o - m*n - o*p - a* c* f*j - a*c*f* in a*c*f*o - a*c*f*p - a*c* *i + a * c i 'j - a*c*h*m + a*c* *n - a*c*h*o + a*cJ ' *p - a*c*i*m + a*c*i*n + a*c*i*o - a*c 'T' <n - a*c*j*o - a*c*!ti*o - a*c*m*p - a*c 'n*o - a*c*n*p - a*f*h*i + a*f*h*j - a*f" "h" "in + a*f*i*j + a*f*i*n - a*f*i*o - a*f" ^i*p - a*f*j*m - a*f*j*n - a*f*j*o - a*i:^ <m* Ό + a*f*m*p - a*f*n*p - a*f*o*p + a* h> 'i*m - a*h*i*n + a*h*i*o + a*h*j*m + a*lv 'j' <n - a*h*j*p + a*!!*-,*!! - a*h*m*o - a*lv trr!*p + a*h*n*o - a*h*n*p - a*i*j*m + a*i'J "n + a*i*j*p + a*i*m*o + a*i*m*p + a*i" 'n*o + a*i*o*p -- a*-j*rr.*n - a*j*m*o + a*i> •n» Ό -- a*i*n*p -- a*tr.*n*o -- a*rr.*o*p -- G*f ' • *j - c*f*h*m - c*f*h*n + c*f*h*o - C*f' <i' 'IR + c*f*i*n - c*f*i*o + c*f*j*n - C*f' Γ™.*η - c*f*m*o - c*i*m*p + c*f*n*p - C*E" 'oj -p - c*h*i*rri + c*h*i*n + c*h*i*p - G*h" 'j*m - c*h.*j*n -- c*b*j*o + G*h.*j*p + c*h' •tr.» rn -- c*h*tr.*o -- c*h.*n*o -- c*b*n*p + G*h' Ό*ρ -- c*i*j*n - c*i*j*o + c*i*;j*p + c*i' iiV Ό -t- c*i*n;*p + c*i*n*o + c*i*n*p - c*i' O*p + c*j*m*n - c*j*m*o - c*j*m*p - c*j^ ' ni p † c*j*o*p - c*m*n*o - c*m*n*p - lo*p + f*h.*i*j -- f*h*i*m - f*h*i*o -- f*h> <i> rp -- f*h*j*p + f*h.*m*o + f*h*m*p + f *h> '0*ρ - f*i*j*n - f*i*j*p + f*i*m*n - f*i- i!V Ό -t- f*i*m*p + f*i*n*o - f*i*n*p - f*i- Ό*ρ - f*j*m*n - f*j*m*o - f*j*m*p - f*j< 'nj Ό - f*j*n*p + f*j*o*p + f*m*n*o f *n" "o*p - h*i*j*n + h*i*j*p + h*i*m*o - *i' n' Ό + h*i*n*p + h*j*m*n + h*j*m*o h*i- '11*0 + h*j*n*p - h*j*o*p + h*m*n*o + h*nv ov 'P - h*n*o*p - i*j*m*o + i*j*in*p - 'η*ο - i*j*n*p + i*m*n*c - i*ra*n*p + i*^ 'cj p - i*n*o*p - j*vri*n*o - j*m*n*p j *vri" 'C*D - m*n*c*p + a*c*f*h*i*m - a*c* f*h*i' <n - a* c*f*h*i*o - a*c*f*h*i*p - a*c* f*h' •j*P - a*c*f*h*m*n a*c*f *h*it\*o - a*c*r^ < * 'm*p -i- a*c*f*h*n*o + a*c*f*h*n*p a*cJ f*h*o*p a*c* *i*j*rr. 
- a*c*f*i*j*p - a*C*ri i* <m*c - a*c* *i*m*p - a*c* *i*n*p - a*c< 'f *i*c*p + a*c*f*j*m*n + a*c*f *j *m*o - a*c*f ' 1' 'm*p - a*c*f*j*n* - a*c*f*j*o* a*c 'f *ir!*n*o + a*c*f*m*n*p + a*c*h*i*j*m + a*c*h" 'i* !H*n - a*c*h*i*m*o - a*c*h*i*rt\*p a*cJ ' *i*n*o - a*c*h*i*c*p - a*c*h*j*m*n + a*c*h* 'j' <n*c - a*c*h*j*n*p - a*c*h*j*o*p - a*c< fh*m*n*c + a*c*h*m*n*p + a*c*i*j*m*n - a*c*i' 'j' <m*o - a*c*i*j*m*p + a*c*i*]*n*o - a*c 'i*j*n*p - a*c*i*j*o*p + a*c*i*m*n*o - a*c*i-> -n*p + a*c*i*n*o*p - a*c*m*n*o*p a*fJ 'h*.i.*j*o - a*f*h*i*m*n + a*f*h*i*m*o + a*f *h* <i* <n*o + a*£* *i*n*p - a*f* *j*m*n + a*i:< ' * j *m*o + a*f*h*j*m*p - a*f*h*m*n*o - a*f*lv <n*p + a*f*h*m*o*p + a*f*h*n*o*p - a*f' + a*f*i*j*m*p - a*f*i*j*n*o - a*f*i" "n*p + a*f *i*rr:*n*p + a*f * j *rri*n*o + a*f" 'j*m*o*p + a*f*j*n*o*p - a*f*m*n*o*p + a*h*i* 'j* <n*o - a*h*i*j*n*p - a* *i*m*n*o + a* * fi*n*o*p + a*h*j*m*n*o + a*h*m*n*o*p + a*i*j' <n*o + a*i*j*iTL*o*p - a*i*m*n*o*p + C*f' 'h*i*j*m + c*f*h*i*j*n + c*f*h*i*j*o + c*f *h" "m*n + c*f *η*ι*ΐίϊ*ο + c*f *h*i*rf!*p - c*f" "h*i*n*o - c*f*h*i*n*p + c*f*n*j*m*n -- c*f *h> <i> 'tr.*o -- c*f*h*j*m*p + c*f*h.*j*n*o -- G*f ' ' * *n*p c*f*h*m*n*o + c*f*h*m*o*p - c*f*i- j" 'n\*n - c*f*i*j*m*p - c*f*i*j*ri*o - c*f- i*j*n*p -f- c*f*i*j*o*p + c*f*i*m*n*o + c*f *i" 'id* "n*p + c*f*j*ri;*n*o + c*f *j*rii*n*p c*t< 'j*m*o*p + c*f *m*n*o*p + c*h*i*j *m*o + G* *i> <j> 'tr.*p + c* *i*j*n*p -- c*h.*i*j*o*p -- G*h.' 'i*m*n*p c*h*i*n*o*p - c*h*j*rn*n*p + c*h*y iiV O*p - c*i*j*n;*n*p + c*i*j*m*o*p + j*n*o*p - c*i*m*n*o*p - c* j *m*n*o*p † f *h*i" "m*n - f*h*i*j*m*p - f*h*i*j*n*p + f *h" "i*j*o*p - f*b.*i*m*n*o + f*h.*i*m*n*p + f* *i' •IT.' Ό*ρ -- f*i*i*m*n*p + f*i*i*n*o*p -- f *i> 'tr.*n*o*p h*i*j*m*n*o + h*i*j *m*n*p - h*i* - IT' Ό*ρ - i*j*m*n*o*p - a*c*f*h*i*j* tn* o - a*c*f*h*i*j*n*o - a*c*f*h*i* j*o*p + a*c* f*h*i*m*n*p + a*c*f * *i*v:i*Q* P - a*c*f*h*i*n*c*p + a*c*f *h*j *m*n*o + a*c*f *h*j*m*n*p + a*c*f *h* j *m*o*p
a*c*f*h*j*n*o*p - a*c*f*h*m* n*o*p - a*c* f*i*j*m*o*p - a*c*f*i*j*n*o* P r
a*c*f*j*m*n*o*p - a*c*h*i*j* m*n*p - a*c* h*i*iri*n*c*p - a*f *h*i* j *m*n* P
a*f*h*i*j*m*o*p - a*f *h*i* *n*o*p + s*f*h*i*m*n*c*p + a*f * * *m*n*o*p +
a*f*i*j*m*n*o*p · a*h*i*j *:r\* n*o*p c*f* *i*j*n*o*p · c*f*h*i*m*n*o* P
c* *h*j*m*n*o*p + c* *i*j*m* n*o*p + f*h* i* j *m*n*c*p
# To demonstrate some results, project out various Views:
# First, the pov of (say) -bfjn:
# [The scalar +-1's appearing below are mod3 measures of dimensional matches.
# It will pay to preserve this sum from mod 3 truncation.]
(- b*f*j*n) | History1 = + 1 + b*f + b*n + f*n + j*n - b*f*j + b*f*n - b*j*n + f*j*n
# We get 8 associations/items of information. Each item contains one bit of
# information. All of these are aspects of Object bfjn (even though it
# contains only 4 pixels).
# We can constrain/focus the projection by combining pov's, e.g. the three
# pov's represented by Objects1:
Objects1 | History1 = -bf +bn +fj +fn +fo +jn -no -bfj +bfo -bjn -bno +fjo +jno
# Every one of the above items - 5 more now makes 13 in all - is truly to be
# found in RetinaFront, & none that are not.
# What if we use the information from TWO frames?
Objects1 | History2_1 = -1 -fn +jn +no -bfj +bfo -bjn -bno -fjn +fjo -fno +jno
# With two Frames' information, we get more 3-vector associations and fewer
# 2-vector ones (and 11 in all).
# Will another Level's worth of information improve this?
Objects1 | History2_2 = + 1 - ai - am - ap + ch - ci - cm - fn + ip + jn + no - bfj + bfo - bjn - bno - fjn + fjo - fno + jno + achi + achm - achp - acip - acmp + ahip + ahmp + aimp - chip - chmp + cimp - himp
# Here we see even larger chunks of the scene being associated with Objects1. Note
# that pixels not found in any of the Front, Right, or Top views (eg. d's or k's)
# do not appear in the projections, even though they do appear in the intermediate
# structures, a mild indication that the mathematics is on-track with retinal reality.
# The Introduction mentions that the resolution of the hologram is logarithmic (to
# the base 4) in the number of pixels, which here is 16, whose log4 is 2. This "2"
# is the Retina plus one level of Objects derived from the Retina. The above
# projection using History2_2 exceeds this by 1, and so all of the 4-vectors in this
# last projection are in fact redundant.
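The log-base-4 resolution bound quoted here is easy to state in code. A minimal sketch (`hierarchy_levels` is a hypothetical helper name, not from the patent):

```python
import math

def hierarchy_levels(n_pixels: int) -> int:
    """Number of hierarchical levels (the Retina plus derived Objects
    levels) before the log-base-4 resolution limit is reached."""
    return round(math.log(n_pixels, 4))

# The annotated example's retina has 16 pixels (a..p), so log4(16) = 2:
print(hierarchy_levels(16))   # -> 2
```

Any projection that draws on more than this many levels, as the History2_2 projection above does, can only repeat information already present.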
# End of processing RetinaFront --> RetinaRight
# Now move from Right view to Top view:
# New frame: RetinaRight --> RetinaTop
Changes = ChangedPixels(NewRetina2_1, RetinaTop);
Changed = Changes[0]; NewRetina = Changes[1];
# Whence
Changed = - a + f - h - i + j - m + n - p
NewRetina = - a + b + c - d - e + f - g - h - i + j - k - l - m + n - o - p - a*c*f*i - a*c*f*j - a*c*h*j + a*f*h*j + a*f*i*j - a*f*i*m + a*f*i*n + a*f*j*m + a*f*j*n + a*f*j*o - b*f*j*n - b*f*n*o + c*f*h*i - c*f*h*j + c*f*i*j - c*f*i*m + c*f*i*n + c*f*j*m + c*f*j*n + c*f*j*o - c*h*i*j - c*h*j*m - c*h*j*n - c*h*j*o - f*h*i*m + f*h*i*n + f*h*j*m + f*h*j*n + f*h*j*o - f*i*j*m + f*i*j*o - f*i*m*n + f*i*n*o + f*j*m*n + f*j*n*o + f*j*o*p + h*i*j*m - h*i*j*n + h*j*m*n - h*j*n*o + h*j*o*p + i*j*m*o + i*j*o*p - i*m*n*o + i*n*o*p - j*m*n*o + j*m*o*p - j*n*o*p + m*n*o*p
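A minimal sketch of what a ChangedPixels-style helper might do, assuming retinas are modeled as maps from pixel labels to +/-1 states. The patent's version returns its result as multivector terms like those above; `changed_pixels` and the flipped pixels chosen below are illustrative, not from the source:

```python
def changed_pixels(old_retina, new_frame):
    """Return the pixels whose +/-1 state differs between the stored
    retina and the incoming frame, plus the updated retina."""
    changed = {p: s for p, s in new_frame.items() if old_retina.get(p) != s}
    return changed, dict(new_frame)

# Hypothetical example on the 16-pixel retina a..p:
old = {p: +1 for p in "abcdefghijklmnop"}
new = dict(old, a=-1, h=-1, i=-1, m=-1, p=-1)
changed, new_retina = changed_pixels(old, new)
print(sorted(changed))   # -> ['a', 'h', 'i', 'm', 'p']
```

The signed changed-pixel set is what the session prints as `Changed`, and the updated map is `NewRetina`.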
Q = PixelsToQuaternions(Changed);
# whence
Q = - a*f - a*h - a*i - a*j - a*m - a*n - a*p - f*h - f*i + f*j - f*m + f*n - f*p - h*i - h*j - h*m - h*n - h*p - i*j - i*m - i*n - i*p - j*m + j*n - j*p - m*n - m*p - n*p
# Adjacency sets in the form "{pixel; neighbors}":
# {-a; f}, {f; -i,j}, {-h;}, {-i; f,j,-m,n},
# {j; f,-i,-m,n}, {-m; -i,j,n}, {n; -i,j,-m}, {-p;}
# Non-adjacency criterion yields:
Q = - a*f - f*i + f*j - i*j - i*m - i*n - j*m + j*n - m*n
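The reduction of Q to retina-adjacent pairs can be sketched as follows. This is a hypothetical reconstruction from the session's adjacency sets (`GRID`, `adjacent`, and `changed_pairs` are illustrative names), and the +/- signs carried by the real 2-vectors are omitted:

```python
from itertools import combinations

# 4x4 retina used throughout the example: pixels a..p, row-major
GRID = ["abcd", "efgh", "ijkl", "mnop"]
POS = {c: (r, col) for r, row in enumerate(GRID) for col, c in enumerate(row)}

def adjacent(p, q):
    """8-neighbour adjacency of two pixels on the retina grid."""
    (r1, c1), (r2, c2) = POS[p], POS[q]
    return max(abs(r1 - r2), abs(c1 - c2)) == 1

def changed_pairs(changed):
    """Form the candidate 2-vectors (pixel pairs) of the changed pixels
    and keep only the retina-adjacent ones, mirroring the adjacency
    filter applied to Q above.  Signs are omitted."""
    return [p + q for p, q in combinations(sorted(changed), 2) if adjacent(p, q)]

# Changed pixels for RetinaRight --> RetinaTop: a, f, h, i, j, m, n, p
print(changed_pairs("afhijmnp"))
# -> ['af', 'fi', 'fj', 'ij', 'im', 'in', 'jm', 'jn', 'mn']
```

The nine surviving pairs are exactly the supports of the nine terms in the reduced Q; h and p drop out because none of their grid neighbours changed.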
Objects3 = QuatsToTauqCobs(Q);
# These form the FIRST HIERARCHICAL LEVEL on RetinaTop,
# which looks like:
Objects3 = + a*f*i*j + a*f*i*m + a*f*i*n + a*f*j*m - a*f*j*n + a*f*m*n - f*i*j*m + f*i*m*n - f*j*m*n
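The pairing step behind QuatsToTauqCobs — step (ii) of claim 4, combining disjoint 2-vectors into quaternion-like 4-vector products — can be sketched as below. This is a hypothetical reconstruction (`disjoint_pair_products` is an invented name), and it deliberately ignores the +/- signs and mod-3 arithmetic, so it over-generates:

```python
from itertools import combinations

def disjoint_pair_products(bivectors):
    """Enumerate the grade-4 candidates (4-pixel products) obtainable by
    multiplying two disjoint 2-vectors.  Signs and mod-3 cancellation,
    which decide which candidates actually survive, are left out."""
    quads = set()
    for b1, b2 in combinations(bivectors, 2):
        if b1.isdisjoint(b2):            # disjoint supports -> grade-4 product
            quads.add(frozenset(b1 | b2))
    return sorted("".join(sorted(q)) for q in quads)

# The reduced Q above: -a*f -f*i +f*j -i*j -i*m -i*n -j*m +j*n -m*n
Q = [frozenset(p) for p in ("af", "fi", "fj", "ij", "im", "in", "jm", "jn", "mn")]
print(disjoint_pair_products(Q))
# -> ['afij', 'afim', 'afin', 'afjm', 'afjn', 'afmn',
#     'fijm', 'fijn', 'fimn', 'fjmn', 'ijmn']
```

Of the eleven candidates the sketch finds, only nine appear in Objects3; f*i*j*n and i*j*m*n evidently cancel under the sign bookkeeping the sketch omits.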
# Update the hologram:
History3 = History2_2 + NewRetina + Q + Objects3; History3
# The hologram arising from RetinaTop, plus History to date
History = - a + b 1 + c - f - h - i - j - m - n - p + a*c + a*f + b*f b*j - b *o
+ c*h f *h + f*i f*j - h* 'j + i*j + i *n + j *in + "j*n + j*o + n\*n - 0* ■p a*c* f*i a*c*f* j - a*c*f *m + a*c*f *o - a*c*f*p - a*c*h*i - a*c*h*m + a*c*h*n - a*c*h*o + a*c*h*p a*c*i*m + a*c*i*n + a*c*i*c - a*c*j*n a*c*j*o a*c*m*o - a*c* iR*p a*c*n*o - a*c*n*p - a*f*h*i - a*f*h*j - a*f*h*in - a*f*i*o - a*f*i*p a*f*j*:i! - a*f *j*n + a*f *m*n - a*f*m*o + a*ii*m*p - a*f*n*p - a*f *o*p + a*h*i*m - a*h*i*n + a*h*i*o + a*h*j*m + a*h*j*n - a*h*j*p + a*h*m*n - a*h*!ti*o - a*h*m*p + a*h*n*o - a*h*n*p - a*i*j*m + a*i*j*n + a*i*j*p + a*i*rri*o + a*i*m*p + a*i*n*o + a*i*o*p - a*j *m*n - a* j *m*o + a*j*n*o - a*j*n*p - a*m*n*o - a*m*o*p - b*f *j *n - b*f*n*o + c*f*h*i + c*f*h*j - c*f*h*m - c*f*h*n + c*f*h*o + c*f*i*j + c*f*i*m - c*f*i*n - c*f*i*o + c*f*j*rn - c*f*j*n + G*f*j* 0 - c*f*rn*n - c*f*m*o - c*f *rri*p + c*f*n*p - c*f *o*p c*h*i*j c*h*i*rrt + c*h*i*n ÷ c*h*i*p + c*h.*j*m + c*h*j *n + G*h.*j*o + c*h*j*p + c*h*m*n - c*h*m*o - c*h*n*o - c*h*n*p + c*h*o*p - c*i*j*n - c*i*j*o + c*i*j*p + c*i*rri*o + c*i*rfi*p + c*i*n*o + c*i*n*p - c*i*o*p + c*j *rri*n - c*j *rfi*o - c*j*m*p - c*j*n*p + c*j*o* P "ΙΊ' <G - c*tri*n*p - c*n*o *p + f*h*i*j + f*lv + f*h*i*n - f*h*i*o f*h*.i* P -i- f*h¾ 'Hi + f*h*j*n + f*h*j *o - f*h*j*p f*h" 'in*o -t- f*h*m*p + f*h*o*p + f*i*j* m - f*i> fn + f*i*j*o - f*i*j *p - f*i*tn*o + f *i* fm*p - f*i*n*o - f*i*n*p - f*i*o* P - f*r "ΙΓ.' <n - f*j*m*0 - f*j*!t! *p - f*j*n*p - f*j, '°*P + f*rn*n*o + f*n*o*p + h*i*j* m + h*i- 'n + h*i*j*p + h*i*m *o - h*i*n*o + h*i" "η*ρ - h*j *m*n + h*j*m*o + h*j *n* P + h*m> Ό + h*m*o*p - h*n*o*p + i*j*m*p - i*j* fn*o - i*j*n*p + i*j*o*p - i*m*n* P + i*m' 'P + j *π\*η*ο - j *m*n *p - j*m*G*P + j*n- Ό*Ρ + a*c*f*h*i*rr: - a*c*f*h*i*n - a*G*f - ^h" '1*0 - a*c*f*h*i*p - a*c*f*h*j*p - a*C
a*c*f*h*m*o - a*c*f*h*m*p + a*c*f ' <b> '11*0 + a*c*f*h*n*p + a*c*f *h*o*p + a * G ' 'f *i> «j *m a*c*f*i*j*p - a*c*f*i*m*o - a*c*f ' <i' trr!*p - a*c*f*i*n*p - a*c*f*i*o*p + a*c '81*11 + a*c*f*j*tri*o - a*c*f*j*rn*p - a*c*f ' LjJ "n*p - a*c*f*j*o*p - a*c*f *rri*n*o + a*C 'n*p + a*c*h*i*i *tr, + a*c*h*i*m*n a*c*h' «i' <m*o -- a*c*b.*i*m*p -- a*G*h.*i*n*o a * G ' «o*p a*c*h*j*m*n + a*c*h*j *n*o - a*c*h- 'η*ρ - a*c*h*j*o*p - a*c*h*m*n*o + a*c- h*nv «n*p + a*c*i*j*m*n - a*c*i* j *m*o - a*c*i- * llT;*p + a*c*i*j*n*o - a*c*i*j*n*p - a*c" '<ο*ρ + a*c*i*m*n*o - a*c*i*m*n*p a*c*i' 'Ώ' <o*p -- a*c*m*n*o*p -- a*f *h.*i*j *o a*f ' 'm*n a*f*h*i*m*o + a*f*h*i*n*o + a*f *h- <i- 'η*ρ - a*f *h*j*rr;*n + a*f*h*j*m*o + a*f- h*j- <Πΐ*ρ - a*f *h*vri*n*o - a*f*h*m*n*p -t- a*f *lr "o*p + a*f*h*n*o*p - a*f *i*] *iii*o + a*f " kffi*p - a*f*i*j*n*o - a*f*i*j*n*p + a*f*i> 'n*p + a*f*j*m*n*o + a*f *j*m*o*p + a*f' <i*n> 'G*p a*f*m*n*o*p + a*h*i*j*n*o - a*h*i- 'η*ρ - a*h*i*m*n*o + a*h*i*n*o*p + a*h- j*m- «n*o + a*h*m*n*o*p + a*i* j *rri*n*o -t- a*i*y "o*p - a*i*vri*n*o*p + c*f*h*i*j*m G*f " + c*f*h*i*j*c + c*f*h*i*m*n + c*f *h> <i' + c*f*h*i*m*p - c*f *h*i*n*c C*f' 'h*i' 'n*p + c*f*h*j*m*n - c*f*h" , 'ϊϋ*ρ ... c*f*h*j*n*o - c*f*h*j*n*p -f- c*f< 'h*:iP *n*o -t- c* *h*m*c*p - c* *i*j*m*n - c*f *i> 'm*p - c*f*i*j*n*c - c* *i* j *n*p + G* * i*j' *c*p + c*f *ί*:η*η*ο + σ*ί*ί*πι*η*ρ + c*f*j> <n*o + c*f*j*m*n*p - c*f *j*m*o* + C*f' o*p + c*h*i*j*m*o + c*h*i*j*m*p -t- c*h*i' , n*p - c*h*i*j*o*p - c*h*i*m*n*p + c*h< 'i*n^ *o*p c*h*j*m*n*p + c*h*j*m*c*p - c*i* j ' <n*p + c*i*j*m*c*p + c*i*j*n*o*p - G*i* fm*n' *c*p - c*j*m*n*o*p + f*h*i*j*m*n - f*h*i' ,j, 'IR*P - f*h*i*j*n*p + f*h*i*]*o*p - f*lv "n*o + f*h*i*m*n*p + f*h*j*m*o*p - f*i*j> -n*p + f*i*j*n*o*p - f*i*m*n*o*p h*iJ 'j*iiP *n*o + h*i*j*m*n*p - *i*j*n*o*p - i*j*m*n*o*p - a*c*f *h*i*j*m *o - a*c*f*h*i *j *n* Ό - a*c*f*h*i*j*o*p + a*c*f*n* i*m*n*p + a*c*f*h*i*m*c*p - a *c*f *h*i*n*o* P +
a*c*f*h*j*ra*n*o + a*c*f*h*j*m*n*p a*c*f*h*j*ra*o*p - a*c*f*h*j*n*o*p
a*c*f *h*m*n*o*p - a*c*f *i*j *m*o*p a*c*f*i*j*n*c*p + a*c*f *j*m*n*o*p
a*c*h*i*j*m*n*p - a*c*h*i*in*n*o*p a*f*h*i*j*m*n*p + a*f*h*i*]*m*o*p
a*f*h*i*j*n*o*p + a*f *h*i*rn*n*o*p a*f *h*j*iri*n*o*p + a*f *i*j*rii*n*o*p
a*h.*i* j *m*n*o*p + c*f *h.*i*j *n*o*p c*f *h.*i*trl*n*c*p -- c*f *h.*j*m*n*o*p
c*f*i*j*m*n*o*p + f *h*i*j * ΐ*η*ο*ρ
# What does Objects1's pov look like with all three Frames' information?
Objects1 | History3 = - 1 - a*i - a*m - a*p + b*f + b*n + c*h - c*i - c*m - f*n + f*o + i*p + j*n - n*o + b*f*j - b*f*n - b*f*o + b*j*n + b*n*o + f*j*n - f*j*o - f*n*o - j*n*o + a*c*h*i + a*c*h*m - a*c*h*p - a*c*i*p - a*c*m*p + a*h*i*p + a*h*m*p + a*i*m*p - c*h*i*p - c*h*m*p + c*i*m*p - h*i*m*p
# This differs by just a few 2-vectors from Objects1 | History2_2, thus underlining
# the logarithmic limit mentioned earlier. That is, there is simply no more
# information to be wrung out of Front + Right + Top.
# End of Annotated Example

Claims

What is claimed is:
1. A computer implemented method for defining a collection of data representing a scene, the method comprising the steps of:
acquiring, with a computer processor, input visual data corresponding to two-dimensional (2D) image frames that represent the scene;
based on a pixel-by-pixel comparison, with a programmed computer processor, of data of a chosen image frame from said 2D image frames with a reference image frame, forming a hierarchy of data representing the chosen image frame, wherein said hierarchy of data includes an operator formed from a pair of disjoint quaternions defined by said pixel-by-pixel comparison; and
in a computer process, multiplying said operators.
2. A computer implemented method according to claim 1, wherein said forming a hierarchy of data includes forming a hierarchy of data containing a triple of said operators, each of which has been formed from a pair of disjoint quaternions defined by said pixel-by-pixel comparison, the triple representing a basis in a three-dimensional (3D) space.
3. A computer implemented method according to claim 1, wherein said forming a hierarchy includes
i) mapping pairs of pixels, from the chosen image frame, to a set of signed 2D-vector logical operators, each of said signed 2D-vector logical operators being formed from a pair of said pixels, each of said signed 2D-vector logical operators representing whether first and second input visual data corresponding to the pair of said pixels are the same.
4. A computer implemented method according to claim 3, wherein said forming a hierarchy further includes
ii) combining first and second of said disjoint 2D-vector logical operators to define tauquernion operators I, J, K, said tauquernion operators and quaternions having the same multiplication table.
5. A computer implemented method according to claim 1, further comprising mapping a product of tauquernion operators I, J, K to new 1D-vectors to form an updated chosen image frame, pixels of which are represented by said new 1D-vectors.
6. A computer implemented method according to claim 5, further comprising
repeating the steps of forming a hierarchy and multiplying said triples until a threshold condition is satisfied, said threshold condition defined by an occurrence of either of
a) the use of all pairs of pixels in the mapping at step i); and
b) a number of iterations reaching a value of
log4(number of pixels in the image frame).
7. A computer implemented method according to claim 6, further comprising forming a polynomial representing said chosen image frame in multiple dimensions.
8. A computer implemented method according to claim 7, further comprising generating an output initiating an external action based on said polynomial.
9. A computer implemented method according to claim 7, comprising summing polynomials representing a difference between chosen image frames and said 2D image frames to form a representation of the scene containing a superposition of views of said scene at different angles.
10. A computer implemented method according to claim 9, further comprising extracting data representing said scene from multiple angles of view by defining an inner product of said representation of the scene with a set of operators associated with said multiple angles of view.
11. A computer implemented method according to claim 9, further comprising extracting data representing a 3D image of said scene by defining an inner product of said representation of the scene with itself.
PCT/US2014/036521 2013-05-06 2014-05-02 A log-space linear time algorithm to compute a 3d hologram from successive data frames WO2014182555A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361819884P 2013-05-06 2013-05-06
US61/819,884 2013-05-06

Publications (1)

Publication Number Publication Date
WO2014182555A1 true WO2014182555A1 (en) 2014-11-13

Family

ID=51867652


Country Status (1)

Country Link
WO (1) WO2014182555A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110206236A1 (en) * 2010-02-19 2011-08-25 Center Jr Julian L Navigation method and aparatus
US20120038549A1 (en) * 2004-01-30 2012-02-16 Mandella Michael J Deriving input from six degrees of freedom interfaces


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MANTHEY, MICHAEL ET AL.: "Tauquernions Ti, Tj, Tk: 3+1 Dissipative Space out of Quantum Mechanics", preprint in CiteSeerX, 2012, Retrieved from the Internet <URL:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.357.4697&rep=rep1&type=pdf> [retrieved on 20140818] *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14795357

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14795357

Country of ref document: EP

Kind code of ref document: A1