Disclosure of Invention
In view of at least one of the above technical problems, an object of the present invention is to provide an object pose measurement method, an object pose measurement device, and a storage medium.
The technical solution adopted by the invention is as follows: in one aspect, an embodiment of the invention provides an object pose measurement method, which includes an offline modeling stage and an online matching stage;
the offline modeling phase comprises:
inputting a three-dimensional model of an object, wherein the three-dimensional model comprises model point cloud coordinates and a model point cloud normal vector;
sampling the model point cloud coordinates and the model point cloud normal vector;
constructing a feature set in the model point cloud coordinates and the model point cloud normal vectors obtained by sampling, and calculating the model point pair features;
storing the extracted model point pair characteristics into a hash table;
the online matching stage comprises:
inputting a depth image, calculating scene point cloud coordinates corresponding to each pixel point of the depth image according to camera internal parameters, and calculating a scene point cloud normal vector according to the point cloud coordinates;
sampling the scene point cloud coordinates and the scene point cloud normal vector;
constructing a feature set in the scene point cloud coordinates and the scene point cloud normal vectors obtained by sampling, and calculating scene point pair features;
quantizing the extracted scene point pair features and matching them with the model point pair features stored in a hash table to obtain a plurality of candidate object poses;
inputting a color image and extracting a scene edge point cloud;
screening the candidate object poses according to the scene edge point cloud;
clustering candidate object poses obtained by screening to obtain a plurality of preliminary object poses;
and registering the preliminary object poses using an iterative closest point algorithm to obtain the final object poses.
Further, the step of sampling the model point cloud coordinates and the model point cloud normal vector specifically includes:
calculating a boundary frame surrounding the model point cloud according to the model point cloud coordinates to obtain a model point cloud space;
rasterizing the model point cloud space to obtain a plurality of grids with equal sizes, wherein each grid comprises a plurality of point clouds, and each point cloud comprises a corresponding model point cloud coordinate and a model point cloud normal vector;
clustering point clouds contained in each grid according to the size of an included angle between normal vectors of the model point clouds;
and averaging the model point cloud coordinates and the model point cloud normal vectors in each cluster to obtain the model point cloud coordinates and the model point cloud normal vectors after each grid sampling.
Further, the step of constructing a feature set in the sampled model point cloud coordinates and model point cloud normal vectors and calculating the model point pair features specifically includes:
constructing a K-D tree for the sampled model point cloud coordinates;
selecting a reference point, wherein the reference point is any point in a model point cloud coordinate obtained by sampling;
searching a target point in the K-D tree, wherein the distance between the target point and the reference point is less than a first threshold value;
and sequentially calculating the model point pair characteristics formed by the reference points and the target points.
Further, the step of storing the extracted model point pair characteristics in a hash table specifically includes:
quantizing the extracted model point pair features;
solving a key value of the quantization result through a hash function, and taking the key value as an index of the point pair characteristics in a hash table;
point pair characteristics with the same index are stored in the same bucket of the hash table, and point pair characteristics with different indexes are stored in different buckets of the hash table.
Further, the step of quantizing the extracted scene point pair features and matching the scene point pair features with the model point pair features stored in the hash table to obtain a plurality of candidate object poses specifically includes:
quantizing the extracted scene point pair features;
expanding the quantization result to compensate for the characteristic offset caused by the noise;
using the expanded result values as key values, and searching model point pair characteristics with the same key values in a hash table;
and acquiring a plurality of candidate object poses according to the model point pair characteristics.
Further, the step of extracting the scene edge point cloud from the input color image specifically includes:
graying the color image;
performing edge detection on the grayed image by using an edge detector;
corresponding pixels at the edge of the image to the depth image one by one, and calculating the spatial coordinates of pixel points according to camera internal parameters;
and extracting the space coordinates to be used as a scene edge point cloud.
Further, the step of screening the candidate object poses according to the scene edge point cloud specifically includes:
projecting the object three-dimensional model corresponding to the candidate object pose to an imaging plane according to camera internal parameters to obtain an edge point cloud of the three-dimensional model;
selecting any point from the edge point cloud of the three-dimensional model as a reference point, and finding out a corresponding matching point from the scene edge point cloud, wherein the matching point is the point closest to the reference point;
calculating a first distance, wherein the first distance is the distance from a matching point to a reference point, if the first distance is smaller than a second threshold value, the matching point is reserved, otherwise, the matching point is discarded;
and calculating the proportion of the number of the reserved matching points to the total number of the points in the edge point cloud of the three-dimensional model, if the proportion is greater than a third threshold value, reserving the candidate object pose corresponding to the three-dimensional model, and otherwise, discarding the candidate object pose.
Further, the step of clustering candidate object poses obtained by screening to obtain a plurality of preliminary object poses specifically includes:
selecting any one of the candidate object poses obtained by screening as a first candidate object pose;
respectively calculating the distances between the first candidate object pose and the other candidate object poses obtained by screening;
respectively initializing the candidate object poses obtained by screening into clusters with corresponding numbers;
clustering candidate object poses obtained by screening according to a hierarchical clustering method;
and extracting candidate object poses with the highest voting scores in each cluster to obtain a plurality of preliminary object poses.
In another aspect, the embodiment of the present invention further includes an object pose measurement apparatus, including a memory for storing at least one program and a processor for loading the at least one program to execute the object pose measurement method.
In another aspect, the embodiment of the present invention further includes a storage medium, in which processor-executable instructions are stored, and when executed by a processor, the processor-executable instructions are used for executing the object pose measurement method.
The invention has the following beneficial effects: (1) The invention provides a more efficient model sampling strategy that reduces the number of points in the point cloud, thereby reducing the subsequent amount of computation while still retaining enough information about object surface variation; (2) The distance range used when computing point pair features is limited, which reduces the point pair feature computation for both the model and the scene data and also reduces matching interference from excessive background points; (3) A quantization expansion method is provided, which reduces the influence of noise-induced offsets of the point pair features on matching; (4) Color image information is introduced to enrich the input: edge information is extracted from the color image, the candidate object poses are screened and ICP registration is performed, which improves measurement accuracy and makes recognition more accurate in scenes with occlusion, clutter, stacking, and the like.
Detailed Description
As shown in fig. 1, the present embodiment includes an object pose measurement method, which includes an offline modeling phase and an online matching phase;
the offline modeling phase comprises:
inputting a three-dimensional model of an object, wherein the three-dimensional model comprises model point cloud coordinates and a model point cloud normal vector;
sampling the model point cloud coordinates and the model point cloud normal vector;
constructing a feature set in model point cloud coordinates and model point cloud normal vectors obtained by sampling, and calculating model point pair features;
storing the extracted model point pair characteristics into a hash table;
the online matching stage comprises:
inputting a depth image, calculating scene point cloud coordinates corresponding to each pixel point of the depth image according to camera internal parameters, and calculating a scene point cloud normal vector according to the point cloud coordinates;
sampling the scene point cloud coordinates and the scene point cloud normal vector;
constructing a feature set in the scene point cloud coordinates and the scene point cloud normal vectors obtained by sampling, and calculating scene point pair features;
quantizing the extracted scene point pair features and matching them with the model point pair features stored in a hash table to obtain a plurality of candidate object poses;
inputting a color image and extracting a scene edge point cloud;
screening the candidate object poses according to the scene edge point cloud;
clustering candidate object poses obtained by screening to obtain a plurality of preliminary object poses;
and registering the preliminary object poses using an iterative closest point algorithm to obtain the final object poses.
In this embodiment, the offline modeling stage mainly performs feature modeling of the object three-dimensional model and stores the resulting features for subsequent pose measurement in the scene; the online matching stage mainly measures the object pose from the RGB-D image of a given scene.
Further, referring to fig. 2, the offline modeling phase includes the steps of:
s1, inputting a three-dimensional model of an object, wherein the three-dimensional model comprises a model point cloud coordinate and a model point cloud normal vector;
s2, sampling the model point cloud coordinates and the model point cloud normal vector;
s3, constructing a feature set in the model point cloud coordinates and the model point cloud normal vectors obtained through sampling, and calculating the model point pair features;
and S4, storing the extracted model point pair characteristics into a hash table.
In this embodiment, this stage is called the offline modeling stage because no scene image needs to be input. Step S1 of this stage inputs a three-dimensional model of the object; the model does not need texture, color or similar information and only needs to provide point cloud coordinates and point cloud normal vectors, for example a CAD model obtained by computer modeling or a 3D model obtained by three-dimensional reconstruction. This reduces the complexity of building the three-dimensional model.
Step S2, namely, the step of sampling the model point cloud coordinates and the model point cloud normal vectors, specifically includes the following steps:
s201, calculating a boundary frame surrounding the model point cloud according to the model point cloud coordinates to obtain a model point cloud space;
s202, rasterizing the model point cloud space to obtain a plurality of grids with the same size, wherein each grid comprises a plurality of point clouds, and each point cloud comprises a corresponding model point cloud coordinate and a model point cloud normal vector;
s203, clustering the point clouds in each grid according to the size of an included angle between normal vectors of the model point clouds;
and S204, averaging the model point cloud coordinates and the model point cloud normal vectors in each cluster to obtain the model point cloud coordinates and the model point cloud normal vectors after each grid sampling.
In this embodiment, a bounding box surrounding the point cloud is calculated from the maximum and minimum values of the model point cloud coordinates on the X, Y and Z axes to obtain the model point cloud space, and the diagonal length of the bounding box is recorded as the model diameter d_m. The model point cloud space is then rasterized into grids, each grid being a small cube of equal size; the grid size is set to τ × d_m, where τ is the sampling coefficient and is set to 0.05. Each grid contains a number of points, and each point has a corresponding model point cloud coordinate and model point cloud normal vector. For each grid, the points in the grid are clustered according to the angle between their point cloud normal vectors: points whose normal vectors differ by no more than a threshold Δθ belong to the same cluster. The model point cloud coordinates and model point cloud normal vectors in each cluster are then averaged to obtain the sampled model point cloud coordinate and normal vector for that grid. This sampling strategy is applied to all grids in the model point cloud space; Δθ is generally set to a fixed angular threshold.
With this sampling strategy, the subsequent amount of computation is reduced while the loss of surface variation information is limited, so the discriminative information needed by the point pair features is preserved.
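For illustration, the following Python sketch shows one possible implementation of this grid-and-normal-angle sampling with NumPy. The function name, the greedy clustering inside each cell and the 30° value used for Δθ are assumptions of the example, not values fixed by this embodiment; the grid size follows the τ × d_m rule described above.

```python
import numpy as np

def sample_model_cloud(points, normals, tau=0.05, delta_theta_deg=30.0):
    """Grid-based sampling with normal-angle clustering (illustrative sketch).

    points:  (N, 3) model point cloud coordinates
    normals: (N, 3) unit normal vectors
    tau:     sampling coefficient (grid size = tau * model diameter)
    delta_theta_deg: assumed clustering angle threshold (not fixed by the text)
    """
    # Bounding box and model diameter d_m (diagonal of the box).
    mins, maxs = points.min(axis=0), points.max(axis=0)
    d_m = np.linalg.norm(maxs - mins)
    cell = tau * d_m

    # Assign every point to a cubic grid cell.
    keys = np.floor((points - mins) / cell).astype(np.int64)
    cos_thr = np.cos(np.deg2rad(delta_theta_deg))

    cells = {}
    for idx, key in enumerate(map(tuple, keys)):
        cells.setdefault(key, []).append(idx)

    sampled_pts, sampled_nrm = [], []
    for idxs in cells.values():
        remaining = list(idxs)
        # Greedy clustering by normal angle inside one cell.
        while remaining:
            seed = remaining.pop(0)
            cluster, rest = [seed], []
            for j in remaining:
                if np.dot(normals[seed], normals[j]) >= cos_thr:
                    cluster.append(j)
                else:
                    rest.append(j)
            remaining = rest
            # Average coordinates and normals of the cluster.
            p = points[cluster].mean(axis=0)
            n = normals[cluster].mean(axis=0)
            n /= np.linalg.norm(n) + 1e-12
            sampled_pts.append(p)
            sampled_nrm.append(n)

    return np.asarray(sampled_pts), np.asarray(sampled_nrm)
```

The greedy angle clustering above is only one possible realization; any clustering that groups normals within Δθ inside a grid cell fits the description.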
S3, namely, a characteristic set is constructed in the model point cloud coordinates and the model point cloud normal vectors obtained through sampling, and the step of calculating the characteristics of the model point pairs specifically comprises the following steps:
s301, constructing a K-D tree for model point cloud coordinates obtained through sampling;
s302, selecting a reference point, wherein the reference point is any point in a model point cloud coordinate obtained through sampling;
s303, searching a target point in the K-D tree, wherein the distance between the target point and the reference point is smaller than a first threshold value;
and S304, sequentially calculating the model point pair characteristics formed by the reference points and the target points.
In this embodiment, a K-D tree is constructed for the sampled model point cloud coordinates to allow fast distance searches later. Each sampled point is taken in turn as a reference point, target points whose distance to the reference point is smaller than d_range (the first threshold) are searched in the K-D tree, and the model point pair features formed by the reference point and the target points are calculated in sequence. The specific calculation process is as follows: let the reference point coordinate be m_r with corresponding normal vector n_r, and the target point be m_s with corresponding normal vector n_s. The model point pair feature formed by the reference point and the target point is
F_{r,s} = ( ||d_{r,s}||, ∠(n_r, d_{r,s}), ∠(n_s, d_{r,s}), ∠(n_r, n_s) ),
where d_{r,s} denotes the distance vector from point m_r to point m_s, ||d_{r,s}|| denotes the length of d_{r,s}, ∠(n_r, d_{r,s}) denotes the angle between the normal vector n_r and the distance vector d_{r,s}, ∠(n_s, d_{r,s}) likewise denotes the angle between the normal vector n_s and the distance vector d_{r,s}, and ∠(n_r, n_s) denotes the angle between the normal vectors n_r and n_s. The first threshold d_range is determined from d_min and d_med, the two shorter edges of the model bounding box. At most viewing angles of a real scene, the length of the visible part of the model is often smaller than d_m but larger than d_min; when features are constructed for point pairs with larger distances in the scene, more background points are involved in the calculation, which increases both the false recognition rate of the algorithm and the amount of feature computation. Therefore this embodiment uses d_range as an upper limit on the distance to reduce these effects: limiting the distance range when computing point pair features reduces the amount of point pair feature computation for both the model and the scene data, and also reduces matching interference from excessive background points.
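A minimal sketch of the point pair feature computation described above, using SciPy's cKDTree for the radius search. The value of d_range is passed in as a parameter because the exact expression relating it to d_min and d_med is not reproduced in this text.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_pair_feature(m_r, n_r, m_s, n_s):
    """F_{r,s} = (||d||, angle(n_r, d), angle(n_s, d), angle(n_r, n_s))."""
    d = m_s - m_r
    dist = np.linalg.norm(d)
    if dist < 1e-12:
        return None
    du = d / dist
    ang = lambda a, b: np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return np.array([dist, ang(n_r, du), ang(n_s, du), ang(n_r, n_s)])

def model_point_pair_features(points, normals, d_range):
    """All model point pair features whose pair distance is below d_range."""
    tree = cKDTree(points)
    features = []  # entries of (reference index, target index, feature)
    for r, (m_r, n_r) in enumerate(zip(points, normals)):
        for s in tree.query_ball_point(m_r, r=d_range):
            if s == r:
                continue
            f = point_pair_feature(m_r, n_r, points[s], normals[s])
            if f is not None:
                features.append((r, s, f))
    return features
```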
Step S4, namely, the step of storing the extracted model point pair characteristics in a hash table, specifically includes the following steps:
s401, quantizing the extracted model point pair features;
s402, solving a key value of the quantization result through a hash function, and taking the key value as an index of the point pair characteristics in a hash table;
s403, point pair characteristics with the same index are stored in the same bucket of the hash table, and point pair characteristics with different indexes are stored in different buckets of the hash table.
In this embodiment, the extracted model point pair feature F_{r,s} = ( ||d_{r,s}||, ∠(n_r, d_{r,s}), ∠(n_s, d_{r,s}), ∠(n_r, n_s) ) is quantized, the distance dimension with a step Δd and the angle dimensions with a corresponding angle step, to obtain the quantized value Q_{r,s}; Δd is generally set to 0.05 d_m. A key value is obtained from the quantized value Q_{r,s} through a hash function; the key value is a non-negative integer and is used as the index of the point pair feature in the hash table. Point pair features with the same index are stored in the same bucket of the hash table, and point pair features with different indexes are stored in different buckets of the hash table.
Regarding the online matching stage, which takes a color image and a depth image of the scene as input, with reference to fig. 3, this stage includes the following steps:
D1. inputting a depth image, calculating scene point cloud coordinates corresponding to each pixel point of the depth image according to camera internal parameters, and calculating a scene point cloud normal vector according to the point cloud coordinates;
D2. sampling the scene point cloud coordinates and the scene point cloud normal vector;
D3. constructing a feature set in scene point cloud coordinates and scene point cloud normal vectors obtained through sampling, and calculating scene point pair features;
D4. quantizing the extracted scene point pair features and matching them with the model point pair features stored in a hash table to obtain a plurality of candidate object poses;
D5. inputting a color image and extracting a scene edge point cloud;
D6. screening the candidate object poses according to the scene edge point cloud;
D7. clustering candidate object poses obtained by screening to obtain a plurality of preliminary object poses;
D8. and registering the preliminary object poses using an iterative closest point algorithm to obtain the final object poses.
In step D1, according to the camera imaging (pinhole projection) formula
u = f_x · X/Z + c_x,  v = f_y · Y/Z + c_y,
where u and v are the coordinates of the point in the imaging plane, X, Y and Z are the 3-dimensional coordinates of the point in the camera coordinate system, and f_x, f_y, c_x, c_y are the camera intrinsic parameters, the 3-dimensional spatial coordinate corresponding to each pixel of the input depth image, i.e. the scene point cloud coordinate, can be calculated from the camera intrinsics, and the corresponding scene point cloud normal vector can then be estimated from the scene point cloud coordinates.
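A small sketch of the back-projection implied by the pinhole formula above: each depth pixel (u, v) with depth Z is mapped to camera coordinates X = (u − c_x)·Z/f_x, Y = (v − c_y)·Z/f_y. The intrinsic values in the usage comment are placeholders.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image into an organized point cloud.

    Inverts u = fx*X/Z + cx, v = fy*Y/Z + cy for every valid pixel.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    Z = depth
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    cloud = np.dstack([X, Y, Z])   # (h, w, 3), organized point cloud
    valid = Z > 0                  # pixels that carry a depth reading
    return cloud, valid

# Usage with placeholder intrinsics (illustrative values only):
# cloud, valid = depth_to_point_cloud(depth_img, fx=615.0, fy=615.0,
#                                     cx=320.0, cy=240.0)
```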
Regarding step D2, that is, the sampling of the scene point cloud coordinates and scene point cloud normal vectors, this embodiment uses the same strategy as step S2 of offline modeling. This again reduces the number of points, and thus the subsequent amount of computation, while retaining enough information about the surface variation of the object.
Similarly, regarding step D3, this embodiment calculates the scene point pair features by the same method as step S3 of the offline modeling phase, specifically: (1) a K-D tree is constructed for the scene point cloud coordinates sampled in step D2; (2) assuming the number of scene point cloud coordinates obtained by sampling in step D2 is N, one reference point is taken from every n points, giving N/n reference points in total; (3) for each reference point, the points whose distance to the reference point is less than d_range are found in the K-D tree and used to construct scene point pair features; the setting of d_range and the calculation of the scene point pair features are the same as in step S3 of the offline modeling phase and are not repeated here.
Step D4, namely, the step of quantizing the extracted scene point pair features and matching the scene point pair features with the model point pair features stored in the hash table to obtain a plurality of candidate object poses, specifically includes the following steps:
D401. quantizing the extracted scene point pair features;
D402. expanding the quantization result to compensate for the characteristic offset caused by the noise;
D403. using the expanded result values as key values, and searching model point pair characteristics with the same key values in a hash table;
D404. and acquiring a plurality of candidate object poses according to the model point pair characteristics.
In this embodiment, (1) the scene point pair feature F = (F_1, F_2, F_3, F_4) calculated in step D3 is quantized in the same way as in offline modeling step S4, giving the quantized 4-dimensional feature Q = (Q_1, Q_2, Q_3, Q_4). (2) To reduce the effect of noise on quantization matching, this embodiment expands the quantized value Q as follows. Let e_i (i = 1, 2, 3, 4) denote the quantization error of the i-th dimension, where F_i is the value of the i-th dimension of F and Δ is the quantization interval. Taking i = 1 as an example: when e_1 falls below a lower bound, Q_new = (Q_1 − 1, Q_2, Q_3, Q_4); when e_1 exceeds an upper bound, Q_new = (Q_1 + 1, Q_2, Q_3, Q_4); otherwise Q is not expanded in this dimension. In this way one scene point pair feature F can be quantized to at most 16 quantized features, which compensates for the feature offsets caused by noise. (3) Let the i-th reference point be s_i and let s_i form n_i point pair features. A voting matrix is created for s_i: each row of the matrix represents a point of the scene point cloud, each column represents a quantized rotation angle with a fixed angle interval, and each coordinate (m, α) of the matrix means that after the normal vector of point m in the scene is rotated to be parallel to the X axis and the point is translated to the origin, the line connecting point m with the other point of the pair must be rotated about the X axis by the angle α; the matrix is initialized as an all-zero matrix. (4) The expanded Q values are used as key values to search the hash table for model point pair features with the same key values. For each point pair feature found, the point m' in the scene and the angle α' to be rotated are calculated, and the corresponding entry (m', α') of the voting matrix is incremented by 1. (5) After the voting process for each reference point s_i is finished, the row-column coordinates (m, α) corresponding to the maximum value in the voting matrix are extracted and the object pose in the scene is calculated from them; the value at position (m, α) is recorded as the score of this pose. (6) The object poses of the scene calculated from all reference points are kept as candidate object poses.
Step D5, namely the step of inputting the color image to extract the scene edge point cloud, specifically includes the following steps:
D501. graying the color image;
D502. performing edge detection on the grayed image by using an edge detector;
D503. corresponding pixels at the edge of the image to the depth image one by one, and calculating the spatial coordinates of pixel points according to camera internal parameters;
D504. and extracting the space coordinates to be used as a scene edge point cloud.
In this embodiment, the input color image is first converted to grayscale, and edge detection is performed with a Canny edge detector to extract the edge pixels of the image. The edge pixels are put in one-to-one correspondence with the depth image, the spatial coordinates of these pixels are computed from the camera intrinsic parameters, and the resulting points are referred to as the scene edge point cloud.
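An illustrative OpenCV sketch of step D5, reusing the same back-projection as step D1; the Canny hysteresis thresholds are placeholders, not values specified by this embodiment.

```python
import cv2
import numpy as np

def extract_scene_edge_cloud(color_img, depth, fx, fy, cx, cy,
                             canny_lo=50, canny_hi=150):
    """Gray the color image, run Canny, and back-project the edge pixels.

    canny_lo / canny_hi are placeholder hysteresis thresholds.
    """
    gray = cv2.cvtColor(color_img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_lo, canny_hi)

    v, u = np.nonzero(edges)          # edge pixel coordinates
    Z = depth[v, u]
    keep = Z > 0                      # keep only pixels with a depth value
    u, v, Z = u[keep], v[keep], Z[keep]

    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.stack([X, Y, Z], axis=1)   # scene edge point cloud, shape (M, 3)
```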
Step D6, namely the step of screening the candidate object poses according to the scene edge point cloud, specifically comprises the following steps:
D601. projecting the object three-dimensional model corresponding to the candidate object pose to an imaging plane according to camera internal parameters to obtain an edge point cloud of the three-dimensional model;
D602. selecting any point from the edge point cloud of the three-dimensional model as a reference point, and finding out a corresponding matching point from the scene edge point cloud, wherein the matching point is the point closest to the reference point;
D603. calculating a first distance, wherein the first distance is the distance from a matching point to a reference point, if the first distance is smaller than a second threshold value, the matching point is reserved, otherwise, the matching point is discarded;
D604. and calculating the proportion of the number of the reserved matching points to the total number of the points in the edge point cloud of the three-dimensional model, if the proportion is greater than a third threshold value, reserving the candidate object pose corresponding to the three-dimensional model, and otherwise, discarding the candidate object pose.
In this embodiment, for each candidate object pose obtained in step D4, the three-dimensional model corresponding to the candidate object pose is projected onto the imaging plane according to the camera intrinsic parameters, so as to obtain the edge point cloud of the three-dimensional model for that candidate pose. For each point in the edge point cloud of the three-dimensional model, the closest point in the scene edge point cloud extracted in step D5 is found; if this shortest distance is smaller than d_ε, i.e. the second threshold, which is generally set to 0.1 d_m, the correspondence is regarded as a correct match and the point is kept as a matching point. The ratio of the number of retained matching points to the total number of points in the edge point cloud of the three-dimensional model is then calculated; if this ratio is greater than the third threshold, the candidate object pose corresponding to the three-dimensional model is kept, otherwise it is discarded.
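A sketch of this screening rule, assuming a hypothetical helper `project_model_edges(pose)` that returns the projected model edge point cloud for a candidate pose (the projection itself is not shown). The value d_eps = 0.1·d_m follows the text; the retention ratio `ratio_thr` stands in for the unspecified third threshold.

```python
import numpy as np
from scipy.spatial import cKDTree

def screen_poses(candidate_poses, scene_edge_cloud, project_model_edges,
                 d_m, ratio_thr=0.6):
    """Keep candidate poses whose projected model edges match the scene edges.

    project_model_edges(pose) -> (K, 3) model edge point cloud for that pose
    (hypothetical helper).  d_eps = 0.1 * d_m is the second threshold;
    ratio_thr is a placeholder for the third threshold.
    """
    d_eps = 0.1 * d_m
    scene_tree = cKDTree(scene_edge_cloud)
    kept = []
    for pose in candidate_poses:
        model_edges = project_model_edges(pose)
        if len(model_edges) == 0:
            continue
        dists, _ = scene_tree.query(model_edges)   # nearest scene edge point
        matched = np.count_nonzero(dists < d_eps)  # first-distance test
        if matched / len(model_edges) > ratio_thr: # ratio test
            kept.append(pose)
    return kept
```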
Step D7, namely clustering candidate object poses obtained by screening to obtain a plurality of preliminary object poses, and specifically comprises the following steps:
D701. selecting any one of the candidate object poses obtained by screening as a first candidate object pose;
D702. respectively calculating the distances between the first candidate object pose and the other candidate object poses obtained by screening;
D703. respectively initializing the candidate object poses obtained by screening into clusters with corresponding numbers;
D704. clustering candidate object poses obtained by screening according to a hierarchical clustering method;
D705. and extracting candidate object poses with the highest voting scores in each cluster to obtain a plurality of preliminary object poses.
In this embodiment, for the candidate object poses retained after the screening of step D6, the distance between every two candidate object poses is calculated. The distance between poses consists of two parts, a translation difference Δdist and a rotation difference Δrot. Two poses are considered adjacent when Δdist < d_cluster and Δrot < rot_cluster, i.e. the translation difference Δdist is smaller than the inter-cluster spatial distance threshold d_cluster, generally set to one tenth of the object diameter, and the rotation difference Δrot is smaller than the inter-cluster rotation angle threshold rot_cluster, generally set to 30 degrees. If there are N candidate object poses after the screening of step D6, the N poses are first initialized as N clusters, and adjacent clusters are then merged according to a hierarchical clustering method until the distance between every two clusters is greater than the thresholds; finally, the candidate object pose with the highest number of votes in each cluster is extracted, yielding a plurality of preliminary object poses.
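A sketch of this pose clustering, using a simple single-linkage agglomerative merge under the two thresholds named above; poses are given as 4×4 matrices together with their vote scores. This is one possible realization under those assumptions, not necessarily the exact clustering used by the embodiment.

```python
import numpy as np

def pose_distance(T1, T2):
    """Return (translation difference, rotation difference in radians)."""
    d_trans = np.linalg.norm(T1[:3, 3] - T2[:3, 3])
    R = T1[:3, :3].T @ T2[:3, :3]
    cos_a = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return d_trans, np.arccos(cos_a)

def cluster_poses(poses, scores, d_m, rot_thr=np.deg2rad(30.0)):
    """Agglomerative clustering of poses; return the best pose per cluster."""
    d_thr = 0.1 * d_m                       # inter-cluster spatial threshold
    clusters = [[i] for i in range(len(poses))]

    def close(ci, cj):                      # single-linkage adjacency test
        for i in ci:
            for j in cj:
                dt, dr = pose_distance(poses[i], poses[j])
                if dt < d_thr and dr < rot_thr:
                    return True
        return False

    merged = True
    while merged:
        merged = False
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                if close(clusters[a], clusters[b]):
                    clusters[a] += clusters.pop(b)
                    merged = True
                    break
            if merged:
                break

    # Preliminary pose = highest-scoring candidate inside each cluster.
    return [poses[max(c, key=lambda i: scores[i])] for c in clusters]
```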
Finally, for each of the preliminary object poses obtained in step D7, the model contour point cloud under that pose is obtained, and each contour point cloud is registered against the scene edge point cloud extracted in step D5 using ICP, yielding final object poses of high accuracy.
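A compact point-to-point ICP sketch for this refinement step; the contour extraction itself is not shown, and `max_iters`/`tol` are placeholder settings rather than values fixed by the embodiment.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation/translation mapping src onto dst (SVD/Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp_refine(model_contour, scene_edge_cloud, max_iters=30, tol=1e-6):
    """Refine a pre-aligned model contour against the scene edge cloud."""
    tree = cKDTree(scene_edge_cloud)
    src = model_contour.copy()
    T = np.eye(4)
    prev_err = np.inf
    for _ in range(max_iters):
        dists, idx = tree.query(src)            # closest-point correspondences
        R, t = best_rigid_transform(src, scene_edge_cloud[idx])
        src = src @ R.T + t                     # apply the incremental update
        step = np.eye(4); step[:3, :3] = R; step[:3, 3] = t
        T = step @ T                            # accumulate the total transform
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return T, err                               # refinement transform and residual
```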
In summary, the object pose measurement method in the embodiment of the present invention has the following advantages:
(1) The embodiment of the invention provides a more efficient model sampling strategy that reduces the number of points in the point cloud, thereby reducing the subsequent amount of computation while still retaining enough information about object surface variation; (2) The distance range used when computing point pair features is limited, which reduces the point pair feature computation for both the model and the scene data and also reduces matching interference from excessive background points; (3) A quantization expansion method is provided, which reduces the influence of noise-induced offsets of the point pair features on matching; (4) Color image information is introduced to enrich the input: edge information is extracted from the color image, the candidate object poses are screened and ICP registration is performed, which improves measurement accuracy and makes recognition more accurate in scenes with occlusion, clutter, stacking, and the like.
In this embodiment, the apparatus for object pose measurement includes a memory for storing at least one program and a processor for loading the at least one program to perform the method for object pose measurement.
The memory may also be produced separately and used to store a computer program corresponding to the object pose measurement method. When the memory is connected to the processor, the stored computer program is read and executed by the processor, so that the object pose measurement method is carried out and the technical effects described in this embodiment are achieved.
It should be noted that, unless otherwise specified, when a feature is referred to as being "fixed" or "connected" to another feature, it can be directly fixed or connected to the other feature or indirectly fixed or connected to the other feature. Furthermore, the descriptions of upper, lower, left, right, etc. used in the present disclosure are only relative to the mutual positional relationship of the constituent parts of the present disclosure in the drawings. As used in this disclosure, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In addition, unless defined otherwise, all technical and scientific terms used in this example have the same meaning as commonly understood by one of ordinary skill in the art. The terminology used in the description of the embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this embodiment, the term "and/or" includes any combination of one or more of the associated listed items.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element of the same type from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure. The use of any and all examples, or exemplary language ("e.g.," such as "or the like") provided with this embodiment is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed.
It should be recognized that embodiments of the present invention can be realized and implemented by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, operations of processes described in this embodiment can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described in this embodiment (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable interface, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described in this embodiment includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein.
A computer program can be applied to input data to perform the functions described in the present embodiment to convert the input data to generate output data that is stored to a non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
The above description is only a preferred embodiment of the present invention, and the present invention is not limited to the above embodiment, and any modifications, equivalent substitutions, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention as long as the technical effects of the present invention are achieved by the same means. The invention is capable of other modifications and variations in its technical solution and/or its implementation, within the scope of protection of the invention.