US12198391B2 - Information compression method and apparatus - Google Patents
Information compression method and apparatus
- Publication number
- US12198391B2 US17/748,418
- Authority
- US
- United States
- Prior art keywords
- field
- subspaces
- fixed
- machine learning
- learning model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/001—Model-based coding, e.g. wire frame
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/002—Image coding using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/004—Predictors, e.g. intraframe, interframe coding
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2016—Rotation, translation, scaling
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
Definitions
- Computer-based processing is used to effect image-based processing of various kinds.
- In many application settings, the underlying data and/or the processing involved can outstrip the ability of the computing platform to effect the desired results within a desired window of time and/or within the available memory resources. This can be particularly so when the application setting deals with three-dimensional model representations, image rendering, processing to achieve a particular lighting effect, approximating physical interactions of a three-dimensional point-to-object via approximation of a three-dimensional model's signed distance field, and so forth.
- FIG. 1 comprises a block diagram as configured in accordance with various embodiments of these teachings.
- FIG. 2 comprises a flow diagram as configured in accordance with various embodiments of these teachings.
- FIG. 3 comprises graphical depictions as configured in accordance with various embodiments of these teachings.
- FIG. 4 comprises a block diagram as configured in accordance with various embodiments of these teachings.
- FIG. 5 comprises a block diagram as configured in accordance with various embodiments of these teachings.
- FIG. 6 comprises a graphic depiction as configured in accordance with various embodiments of these teachings.
- FIG. 7 comprises a block diagram as configured in accordance with various embodiments of these teachings.
- FIG. 8 comprises a block diagram as configured in accordance with various embodiments of these teachings.
- FIG. 9 comprises a block diagram as configured in accordance with various embodiments of these teachings.
- FIG. 10 comprises a block diagram as configured in accordance with various embodiments of these teachings.
- FIG. 11 comprises a block diagram as configured in accordance with various embodiments of these teachings.
- a control circuit facilitates compressing source field information (such as, for example, information that comprises a scalar field and/or a vector field) having a corresponding initial space for a given object into a corresponding compact representation.
- That source field information may comprise, for example, a (signed) distance field to a three-dimensional object represented as a two-dimensional manifold embedded in Euclidean space. This can comprise subdividing the initial space into a plurality of subspaces and generating a fixed-dimensionality vector representation for each field that corresponds to one of the subspaces.
- These teachings can then provide for inputting the fixed-dimensionality vector representations and query point coordinates corresponding to each of the subspaces to a field estimator neural network (such as, but not limited to, a neural network configured as an encoder-decoder machine learning model) trained to output corresponding field values.
- the aforementioned subspaces may comprise geometric primitives such as a sphere or a cube.
- all of the subspaces constitute only a single geometric primitive category; for example, all of the subspaces may only constitute spheres.
- the subspaces may represent a heterogeneous collection of differing geometric primitive categories.
- the aforementioned subspaces may comprise various geometric primitives (such as both spheres and cubes) or more complex shapes functioning as bounding volumes.
- Subdividing the initial space into a plurality of subspaces may comprise, for example, generating a point set comprising a fixed set of points and then generating at least one descriptor for each of the points.
- These teachings can then serve to provide those descriptors as input to at least one of an analytic algorithm and a machine learning model trained as a spatial decomposer that outputs subspace parameters corresponding to the plurality of subspaces.
- the aforementioned subspace parameters may include center positions and defining parameters of corresponding subspaces that are represented by geometric primitives.
- Generating the fixed-dimensionality vector representation for each field that corresponds to one of the subspaces may comprise, for example, identifying kernel points within each of the subspaces. Field values can then be calculated at positions defined by those kernel points for each subspace and those field values then combined to generate descriptors for each of the subspaces.
- the encoder portion thereof can learn representations of field variation specifics within a subspace and a decoder portion thereof serves to estimate field values at each of a plurality of corresponding coordinate points.
- Referring now to FIG. 1, an illustrative apparatus 100 that is compatible with many of these teachings will now be presented.
- the enabling apparatus 100 includes a computing device 101 that itself includes a control circuit 102 .
- the control circuit 102 therefore comprises structure that includes at least one (and typically many) electrically-conductive paths (such as paths comprised of a conductive metal such as copper or silver) that convey electricity in an ordered manner, which path(s) will also typically include corresponding electrical components (both passive (such as resistors and capacitors) and active (such as any of a variety of semiconductor-based devices) as appropriate) to permit the circuit to effect the control aspect of these teachings.
- Such a control circuit 102 can comprise a fixed-purpose hard-wired hardware platform (including but not limited to an application-specific integrated circuit (ASIC) (which is an integrated circuit that is customized by design for a particular use, rather than intended for general-purpose use), a field-programmable gate array (FPGA), and the like) or can comprise a partially or wholly-programmable hardware platform (including but not limited to microcontrollers, microprocessors, and the like).
- ASIC application-specific integrated circuit
- FPGA field-programmable gate array
- This control circuit 102 is configured (for example, by using corresponding programming as will be well understood by those skilled in the art) to carry out one or more of the steps, actions, and/or functions described herein.
- control circuit 102 operably couples to a memory 103 .
- This memory 103 may be integral to the control circuit 102 or can be physically discrete (in whole or in part) from the control circuit 102 as desired.
- This memory 103 can also be local with respect to the control circuit 102 (where, for example, both share a common circuit board, chassis, power supply, and/or housing) or can be partially or wholly remote with respect to the control circuit 102 (where, for example, the memory 103 is physically located in another facility, metropolitan area, or even country as compared to the control circuit 102 ). It will also be understood that this memory 103 may comprise a plurality of physically discrete memories that, in the aggregate, store the pertinent information that corresponds to these teachings.
- this memory 103 can serve, for example, to non-transitorily store the computer instructions that, when executed by the control circuit 102 , cause the control circuit 102 to behave as described herein.
- this reference to “non-transitorily” will be understood to refer to a non-ephemeral state for the stored contents (and hence excludes when the stored contents merely constitute signals or waves) rather than volatility of the storage media itself, and hence includes both non-volatile memory (such as read-only memory (ROM)) and volatile memory (such as dynamic random access memory (DRAM)).
- non-volatile memory such as read-only memory (ROM)
- DRAM dynamic random access memory
- control circuit 102 also operably couples to a user interface 104 .
- This user interface 104 can comprise any of a variety of user-input mechanisms (such as, but not limited to, keyboards and keypads, cursor-control devices, touch-sensitive displays, speech-recognition interfaces, gesture-recognition interfaces, and so forth) and/or user-output mechanisms (such as, but not limited to, visual displays, audio transducers, printers, and so forth) to facilitate receiving information and/or instructions from a user and/or providing information to a user.
- control circuit 102 operably couples to one or more networks.
- Various data communications networks are well known in the art, including both wireless and non-wireless approaches. As the present teachings are not overly sensitive to any particular selections in these regards, further elaboration regarding such networks is not provided here for the sake of brevity. So configured, the control circuit 102 can communicate with any of a variety of remote network elements including, for example, one or more neural networks 106 and/or electronic databases 107 as discussed below.
- FIG. 2 presents a process 200 that can be carried out by the aforementioned apparatus, and in particular the aforementioned control circuit 102 .
- the process 200 serves to compress source field information having a corresponding initial space for a given object into a corresponding compact representation.
- That source field information may comprise at least one of a scalar field and a vector field.
- a scalar field comprises a function of coordinates in space whose value at each point is a scalar value.
- a vector field comprises a function of coordinates in space whose value at each point is a vector.
- a vector value at a given point may represent a color value, a translucency value, a radiance emissiveness value, and so forth.
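By way of illustration only, the distinction between a scalar field and a vector field can be sketched as plain functions of spatial coordinates (a hypothetical Python sketch; the particular fields chosen are illustrative and not part of the patent disclosure):

```python
import math

def scalar_field(x, y, z):
    """A scalar field: each point in space maps to a single number
    (here, illustratively, the distance from the origin)."""
    return math.sqrt(x * x + y * y + z * z)

def vector_field(x, y, z):
    """A vector field: each point maps to a vector (here, illustratively,
    an RGB-like color value derived from position)."""
    return (abs(x) % 1.0, abs(y) % 1.0, abs(z) % 1.0)
```

The scalar field returns one value per point while the vector field returns a tuple, mirroring the color-value example above.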
- the source field information will be presumed to comprise a signed distance field towards a three-dimensional object represented by a polygon mesh.
- a three-dimensional polygon mesh comprises a collection of polygons, such as triangles, in three-dimensional space that are connected one to another in order to represent a three-dimensional object's surface.
- this process 200 provides for subdividing the aforementioned initial space into a plurality of subspaces.
- some or all of these subspaces comprise geometric primitives, such as a sphere or cube.
- These subspaces may all have an identical size or at least some may differ in size from others.
- FIG. 3 presents an illustrative example in these regards, where a three-dimensional image of a motorcycle 301 is subdivided into a plurality of differently-sized spheres as depicted at reference 302 .
- subdividing that initial space into the aforementioned plurality of subspaces can comprise generating a point set comprising a fixed set of points.
- that fixed set of points can be selected/generated so as to correspond to the most meaningful information about the source field data.
- these geometric primitives that serve as meaningful subspaces of three-dimensional space can be spheres or axis-aligned bounding boxes (AABB), but these teachings will accommodate other approaches including, but not limited to, oriented bounding boxes (OBB), discrete oriented polytopes (k-DOP), and so forth.
- OBB oriented bounding boxes
- k-DOP discrete oriented polytopes
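To make the notion of geometric primitives as subspaces concrete, simple membership tests for a sphere and an axis-aligned bounding box might be sketched as follows (hypothetical code; the patent does not prescribe any particular containment test):

```python
def point_in_sphere(p, center, radius):
    """Membership test for a spherical subspace."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return dx * dx + dy * dy + dz * dz <= radius * radius

def point_in_aabb(p, lo, hi):
    """Membership test for an axis-aligned bounding box (AABB),
    given its minimum (lo) and maximum (hi) corners."""
    return all(lo[i] <= p[i] <= hi[i] for i in range(3))
```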
- this subdivision activity can comprise generating at least one descriptor for each of the aforementioned points.
- those generated descriptors can comprise, for example, distances within a Euclidean context.
- the aforementioned descriptors can be provided as input to an analytic algorithm and/or a machine learning model that was trained as, for example, a spatial decomposer.
- the resultant output comprises subspace parameters that correspond to the plurality of subspaces.
- subspace parameters can include center positions and other defining parameters (such as a radius metric) of corresponding subspaces that are represented by the aforementioned geometric primitives.
- this process 200 generates a fixed-dimensionality vector representation for each field that corresponds to one of the aforementioned subspaces.
- this activity can comprise first identifying kernel points within each of the subspaces. This process 200 can then provide for calculating field values at positions defined by the kernel points for each subspace and combining those field values to generate descriptors for each of the subspaces. Those descriptors for each of the subspaces can then be provided to a machine learning model trained as a local field embedder that outputs corresponding fixed-length vectors of real numbers that comprise the fixed-dimensionality vector representation for each field that corresponds to one of the subspaces.
- this process 200 provides for inputting the fixed-dimensionality vector representations and query point coordinates corresponding to each of the subspaces into a field estimator neural network trained to output corresponding field values.
- that field estimator neural network can comprise an encoder-decoder machine learning model.
- the encoder portion of the encoder-decoder machine learning model can be configured to learn representations of field variation specifics within a subspace and a decoder portion of the encoder-decoder machine learning model can be configured to estimate field values at each of a plurality of corresponding coordinate points.
- the aforementioned field values can comprise signed distance fields as are known in the art.
- a signed distance function is known to comprise a continuous function that, for a given point in space, returns the point's distance to a closest surface with a corresponding sign. The sign may be negative when the point is inside of the closest surface and positive when the point is outside of the surface.
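The sign convention described above can be illustrated with the signed distance function of a sphere (a minimal hypothetical sketch, not drawn from the patent's claims):

```python
import math

def signed_distance_to_sphere(point, center, radius):
    """Signed distance from `point` to a sphere's surface:
    negative inside the surface, positive outside, zero on it."""
    return math.dist(point, center) - radius
```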
- these teachings support fast compression of arbitrary scalar or vector fields into a compact representation.
- These teachings provide for subdividing an original space, upon which the foregoing field is defined, into a set of maximally informative subspaces. By one approach, the informativeness metric may correspond to the non-uniformity of directions of the field's gradient.
- These teachings then provide for using neural networks to form a fixed-dimensionality vector representation of a subspace's field. This representation can be continuous and allow obtaining a value by querying any point in the subspace with granularity limited only by float precision. So configured, the subspaces can be combined in a way that makes it possible to query relevant information in the entire parent space.
- These teachings can be beneficially employed in a variety of application settings. Examples include, but are not limited to, three-dimensional model representation, rendering, lighting, and approximation of three-dimensional point-to-object interactions through an approximation of a three-dimensional model's signed distance field. Unlike many prior art approaches, these teachings permit a one-shot (i.e., with no additional optimization required) conversion of a watertight three-dimensional mesh into neural signed distance field representations. The latter, in turn, makes physical interaction computations and many global illumination routines more efficient on general-purpose hardware. This can be beneficial in game engines, physical simulations, and virtual reality experiences.
- a spatial decomposer inference pipeline 400 presumes the construction of a fixed set of points referred to herein as a basis point set (BPS).
- BPS basis point set
- the basis point set may correspond to a cubic grid, a hexagonal close packed grid, or any other meaningful set of points.
- the points that form the basis point set may be referred to as nodes.
- Descriptors 401 are formed for each point in the basis point set.
- the descriptors may be distance/vectors from the node to a closest mesh vertex or a point in a point cloud sampled on the mesh surface.
- Other descriptive features of the node may include such things as the field's value or the field's gradient (if the field is differentiable).
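A brute-force sketch of forming such per-node descriptors from a sampled point cloud might look like this (hypothetical code; `bps_descriptors` is an illustrative name, and a practical implementation would likely use a spatial index for the nearest-neighbor search):

```python
import math

def bps_descriptors(basis_points, cloud):
    """For each basis point (node), find the nearest point in the sampled
    point cloud and record the distance and offset vector, which together
    form that node's descriptor."""
    descriptors = []
    for node in basis_points:
        # Brute-force nearest neighbor; fine for a sketch.
        nearest = min(cloud, key=lambda q: math.dist(node, q))
        offset = tuple(q - n for q, n in zip(nearest, node))
        descriptors.append({"distance": math.dist(node, nearest),
                            "vector": offset})
    return descriptors
```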
- descriptors 401 are input to a decomposer neural network 402 .
- the latter outputs corresponding subspace parameters such as center positions and radii of predicted spheres.
- Calculating corresponding error can be accomplished in at least one of two ways.
- By one approach, the error comprises Lp norms of radii differences; Lp spaces, sometimes called Lebesgue spaces, are function spaces defined using a natural generalization of the p-norm for finite-dimensional vector spaces.
- By another approach, for mesh-induced signed distance fields with spheres as subspaces, one can calculate the signed distance field generated by the union of predicted spheres, and the error can be defined as the mean absolute difference between the ground truth signed distance field and the sphere union signed distance field (or any other appropriate meaningful loss function).
- Error gradients can be utilized to update decomposer parameters if desired.
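The second error formulation described above can be sketched directly: the signed distance field of a union of spheres is the pointwise minimum of the per-sphere signed distances, and the error is the mean absolute difference against the ground truth field at sample points (hypothetical code, assuming spheres are given as (center, radius) pairs):

```python
import math

def union_sdf(point, spheres):
    """SDF of a union of spheres: the minimum over per-sphere SDFs."""
    return min(math.dist(point, c) - r for c, r in spheres)

def decomposer_error(samples, gt_sdf, spheres):
    """Mean absolute difference between a ground truth SDF and the SDF
    induced by the union of predicted spheres, over sample points."""
    return sum(abs(gt_sdf(p) - union_sdf(p, spheres))
               for p in samples) / len(samples)
```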
- Referring to FIG. 5, an approach to a local field embedder will be described.
- These teachings provide for positioning a kernel inside each of the aforementioned subspaces. Referring to FIG. 6 , this results in a crafted set of points 600 . In particular, this can comprise taking scale into account to fit each kernel appropriately into a corresponding subspace.
- one calculates the field values and corresponding gradients (when the field is differentiable) at all positions defined by the aforementioned kernel points of each subspace. Those values are then combined to obtain each subspace's descriptor 501 .
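For a spherical subspace, evaluating the field at kernel points that have been scaled and translated into the subspace, and concatenating the sampled values into the subspace's descriptor, might be sketched as follows (hypothetical; gradients are omitted for brevity, and the unit-kernel convention is an assumption):

```python
def subspace_descriptor(kernel_points, center, radius, field):
    """Evaluate `field` at kernel points fitted into a spherical subspace
    (scaled and translated from unit coordinates), then concatenate the
    sampled values to form the subspace's descriptor."""
    descriptor = []
    for k in kernel_points:  # kernel points given in unit coordinates
        world = tuple(center[i] + radius * k[i] for i in range(3))
        descriptor.append(field(world))
    return descriptor
```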
- An embedder neural network 502 receives those subspace descriptors 501 and outputs corresponding local field embeddings as fixed-length vectors of real numbers.
- sampling a set of points can comprise sampling a number of points inside each subspace.
- this can comprise sampling a number of points in a unit cube, having a predefined ratio of such points located in a small neighborhood near the mesh surface.
- One can then recalculate the coordinates of these points to be in each subspace's local coordinate system. Either way, each subspace has its own set of sampled points defined, at least in part, by local coordinates corresponding to that subspace.
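The recalculation into a subspace's local coordinate system can be sketched for spherical subspaces as a translate-and-scale mapping (hypothetical code; normalizing by the radius is an assumption):

```python
def to_local(point, center, radius):
    """Express a world-space point in a spherical subspace's local
    (normalized) coordinate system."""
    return tuple((point[i] - center[i]) / radius for i in range(3))

def to_world(local, center, radius):
    """Inverse mapping, from local coordinates back to world space."""
    return tuple(center[i] + radius * local[i] for i in range(3))
```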
- a loss function can be calculated as the power mean of the Lp norm of the difference between predicted and ground truth field values (and, optionally, any corresponding gradients).
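A minimal sketch of such a loss, treating each field value as a (possibly one-dimensional) vector, might read as follows (hypothetical; the parameter names `p` and `power` are illustrative):

```python
def lp_power_mean_loss(pred, gt, p=2, power=1):
    """Power mean of per-sample Lp distances between predicted and
    ground truth field values (scalars passed as 1-tuples)."""
    def lp(a, b):
        return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)
    norms = [lp(a, b) for a, b in zip(pred, gt)]
    return (sum(n ** power for n in norms) / len(norms)) ** (1 / power)
```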
- the aforementioned local field embedding information and corresponding query point coordinates are input to a simple feed-forward regressor machine learning neural network 701 that predicts the field's value (in this example, as a local signed distance).
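The regressor's forward pass can be sketched as a plain fully connected network over the concatenation of the embedding and the query point coordinates (a hypothetical toy implementation with hand-supplied weights; a real regressor would have learned parameters):

```python
import math

def mlp_forward(embedding, query_point, layers):
    """Minimal feed-forward regressor: concatenate the local field
    embedding with the query point coordinates, then apply fully
    connected layers (tanh hidden activations, linear output) to
    predict a single field value."""
    x = list(embedding) + list(query_point)
    for i, (weights, biases) in enumerate(layers):
        x = [sum(w * xi for w, xi in zip(row, x)) + b
             for row, b in zip(weights, biases)]
        if i < len(layers) - 1:  # no activation on the output layer
            x = [math.tanh(v) for v in x]
    return x[0]
```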
- the aforementioned local field embedding information first passes through a hypernetwork 801 to generate field regressor parameters. The latter then serve to construct a regressor neural network 802 having a predefined architecture that takes a query point and produces a local signed distance value (in this example) for that point as output.
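The hypernetwork arrangement can be sketched as one function that maps the embedding to regressor parameters and a second function that applies those parameters to a query point (a toy illustration; the hand-written weight mapping stands in for a learned hypernetwork):

```python
def hypernetwork(embedding):
    """Toy hypernetwork: maps a local field embedding to the parameters
    of a fixed-architecture linear regressor over query coordinates.
    The mapping here is hand-written purely for illustration; a real
    hypernetwork would itself be learned."""
    w = [embedding[0], embedding[0] * 0.5, -embedding[0]]  # one weight per coordinate
    b = embedding[-1]                                      # bias term
    return w, b

def regressor(params, query_point):
    """Regressor whose parameters come from the hypernetwork."""
    w, b = params
    return sum(wi * qi for wi, qi in zip(w, query_point)) + b
```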
- the aforementioned local field embedding information is input to a small neural network 901 that predicts the parameters of line segments. Those resultant line segment parameters are then passed along with the aforementioned query point coordinates to an analytic function 902 . The latter again outputs the predicted information (in this case, local signed distance information).
- point coordinates can be mapped into local coordinates of a line segment (for example, cylindrical or a mix of cylindrical and spherical at the ends of a segment) and the analytical function may be any (parameterized) function that uses those coordinates to derive local field values.
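Mapping a query point into a line segment's local coordinates (cylindrical along the axis, effectively spherical past the clamped ends) might be sketched as follows (hypothetical code):

```python
import math

def segment_local(point, a, b):
    """Map a point into a line segment's local coordinates: `t` is the
    normalized position of the closest point along the axis from a to b
    (clamped to [0, 1], so the ends behave spherically) and `radial` is
    the distance from the axis at that position."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [point[i] - a[i] for i in range(3)]
    denom = sum(v * v for v in ab)
    t = max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return t, math.dist(point, closest)
```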
- This variant may be particularly useful when modelling the signed distance function of some piecewise-rigid object, such as an animatable model of the human body.
- the local field embeddings information is again input to a neural network 901 that predicts the parameters of several line segments and the latter is input to an analytic function that again receives the query point coordinates.
- the output of the analytic function 902 is input to an h regressor machine learning program 1001 along with the local field embedding information.
- h regressor refers to a regressor network that is parameterized by a hypernetwork.
- the outputs of the machine learning model are gathered with respect to each of the line segments to calculate the final output (i.e., the field value predicted at a query point) as a combination of those outputs.
- an element 1100 comprises a small neural network that predicts the parameters of corresponding line segments, and that further includes a hypernetwork configured to receive the local field embeddings as input and that serves to predict weights for the aforementioned small neural network.
- An analytic function 902 receives resultant line segment parameters along with corresponding query point coordinates and provides the corresponding output to an h regressor neural network 1001 , the latter having its weights constructed from the outputs of the aforementioned hypernetwork.
- the outputs of the h regressor neural network 1001 are gathered with respect to each of the line segments and corresponding predicted field values are output for given query points as a combination of those outputs.
Abstract
Description
Claims (18)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/748,418 US12198391B2 (en) | 2021-05-19 | 2022-05-19 | Information compression method and apparatus |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202163201919P | 2021-05-19 | 2021-05-19 | |
| US17/748,418 US12198391B2 (en) | 2021-05-19 | 2022-05-19 | Information compression method and apparatus |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20220375135A1 US20220375135A1 (en) | 2022-11-24 |
| US12198391B2 true US12198391B2 (en) | 2025-01-14 |
Family
ID=84104013
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/748,418 Active 2043-07-27 US12198391B2 (en) | 2021-05-19 | 2022-05-19 | Information compression method and apparatus |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US12198391B2 (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116597082B (en) * | 2023-05-17 | 2024-08-09 | Hangzhou Dianzi University | A wheel hub workpiece digitization method based on implicit 3D reconstruction |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050018916A1 (en) * | 1996-07-17 | 2005-01-27 | Sony Corporation | Apparatus for and method of processing image and apparatus for and method of encoding image |
- 2022-05-19: US application US17/748,418 filed; granted as US12198391B2 (status: active)
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050018916A1 (en) * | 1996-07-17 | 2005-01-27 | Sony Corporation | Apparatus for and method of processing image and apparatus for and method of encoding image |
Also Published As
| Publication number | Publication date |
|---|---|
| US20220375135A1 (en) | 2022-11-24 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Martel et al. | Acorn: Adaptive coordinate networks for neural scene representation | |
| US11810250B2 (en) | Systems and methods of hierarchical implicit representation in octree for 3D modeling | |
| EP3667569B1 (en) | Processing method and device, operation method and device | |
| US11507846B2 (en) | Representing a neural network utilizing paths within the network to improve a performance of the neural network | |
| KR20230156105A (en) | High-resolution neural rendering | |
| Melvær et al. | Geodesic polar coordinates on polygonal meshes | |
| US20190370642A1 (en) | Processing method and device, operation method and device | |
| EP3678037A1 (en) | Neural network generator | |
| US11972354B2 (en) | Representing a neural network utilizing paths within the network to improve a performance of the neural network | |
| US20230360278A1 (en) | Table dictionaries for compressing neural graphics primitives | |
| US12198391B2 (en) | Information compression method and apparatus | |
| CN117972986A (en) | A Flow Simulation Method with Weakly Grid-Dependent Moving Boundary Containing Generalized Integral Kernel | |
| CN115388906B (en) | Pose determining method and device, electronic equipment and storage medium | |
| CN113467881B (en) | Method and device for automatically adjusting chart style, computer equipment and storage medium | |
| Xian et al. | Efficient and effective cage generation by region decomposition | |
| CN117611727B (en) | Rendering processing method, device, equipment and medium | |
| CN115984440B (en) | Object rendering method, device, computer equipment and storage medium | |
| CN118734899A (en) | Segmentation model optimization method and device based on memory efficient attention mechanism | |
| Duan et al. | A multimetric evaluation method for comprehensively assessing the influence of the icosahedral diamond grid quality on SCNN performance | |
| CN118351516A (en) | Head posture estimation method, device, equipment and vehicle | |
| US8031957B1 (en) | Rewritable lossy compression of graphical data | |
| CN116822407A (en) | Flow field simulation method, flow field simulation device, computer equipment and storage medium | |
| Qian et al. | Normal mapping and normal transfer for geometric dynamic models | |
| CN117830490A (en) | Rendering method and device | |
| Amiraghdam et al. | LOOPS: LOcally Optimized Polygon Simplification |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| AS | Assignment |
Owner name: ZIBRA AI, INC.,, DELAWARE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZAVADSKYI, VLADYSLAV;MARTYNIUK, TETIANA;REEL/FRAME:064360/0106 Effective date: 20230720 |
|
| AS | Assignment |
Owner name: ZIBRA AI, INC., DELAWARE Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE'S NAME ON THE COVER SHEET PREVIOUSLY RECORDED AT REEL: 064360 FRAME: 0106. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNORS:ZAVADSKYI, VLADYSLAV;MARTYNIUK, TETIANA;REEL/FRAME:064568/0160 Effective date: 20230720 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |