Disclosure of Invention
Aiming at the lack, in the prior art, of a solution for narrow-space, high-precision industrial assembly, the invention provides a robot assembly offline example learning system based on optical motion capture, which collects the assembly motion data of the arms and hands of skilled operators through optical motion capture equipment for offline processing. The demonstration process is flexible and efficient, and the resulting high-precision example-learned motion trajectory has excellent transferability.
The first aspect of the technical scheme of the invention is a motion-capture-based robot assembly offline example learning system, which comprises an optical motion capture platform, a data acquisition module, a data preprocessing module, an offline robot motion trajectory generation module and a simulation verification module, wherein:
The optical motion capture platform comprises a plurality of optical motion capture cameras, network equipment, computing equipment and a plurality of reflective mark points arranged on the hands of assembly demonstration personnel, wherein the optical motion capture cameras are symmetrically arranged around the assembly demonstration workbench, and each optical motion capture camera is connected to the computing equipment through the network equipment;
The data acquisition module is connected with the optical motion capture platform and is used for acquiring demonstration motion tracks of arms and hands of assembling demonstration personnel;
the data preprocessing module is connected with the data acquisition module and is used for performing off-line preprocessing on the acquired motion tracks of the arms and the hands, eliminating noise data and irrelevant data, fusing assembly tracks demonstrated for multiple times and obtaining an assembly motion model modeled by a Gaussian mixture model;
The off-line robot motion track generation module is connected with the data preprocessing module and used for outputting a smooth task space robot assembly motion track by a Gaussian mixture regression method;
The simulation verification module is used for transferring the track learned by the offline example to a simulation environment and controlling the robot in the simulation platform to simulate assembly actions.
According to some embodiments, the data acquisition module is configured to:
Calibrating the optical motion capture camera through an application program of a visual calibration algorithm;
After calibration is finished, sending an acquisition command to the optical motion capture camera through computing equipment, and acquiring position and posture data of reflective mark points adhered to the arms and hands of skilled assembly workers;
Data transmission is carried out between the network switch and the optical motion capture cameras; the position and posture data of the user's arm and hand motions collected by the optical motion capture cameras are received, the received data are analyzed offline, and motion trajectory information executable by the robot is generated.
According to some embodiments, the data preprocessing module comprises:
The anomaly detection unit is used for calculating an anomaly factor of each reflective marker point at each sampling moment, and taking the point with the value of the anomaly factor larger than a given threshold value as a sampling noise point and removing the sampling noise point from the demonstration data set;
The track segmentation unit is used for clustering by taking the speed of each reflecting mark point at each sampling moment in each assembly demonstration as the characteristic of the sampling moment through a density clustering method;
A trajectory fusion unit configured to: firstly, model the assembly trajectories of the multiple assembly demonstrations obtained by the data preprocessing module with a Gaussian mixture model, the number of Gaussian kernels being specified by maximizing the Bayesian information criterion; then learn the multiple assembly trajectory segments by the expectation-maximization method to obtain the parameters of each Gaussian kernel; and finally obtain an assembly motion model, modeled by the Gaussian mixture model, that reflects the key information of the assembly motion.
According to some embodiments, the offline robot motion trajectory generation module is configured to:
Obtaining all joint angle sequences of the robot in the motion process by inverse kinematics solution of the obtained artificial assembly track;
Calculating a cost value of the current track through a cost function;
Calculating the selection probability of each frame under the track through a frame selection strategy, selecting partial frames in the joint angle sequence according to the frame selection strategy to form a new sequence, and calculating the cost value again;
Calculating a reward value and updating the probability of each frame being selected in the joint angle sequence according to the reward value;
Updating the whole joint angle trajectory sequence, deleting the frames with the probability lower than a certain threshold value, and forming the rest frames into a new joint angle sequence;
And if the absolute value of the reward value is lower than a first threshold or the iteration number is larger than a second threshold, outputting the current residual frame sequence, performing speed planning interpolation with the shortest time in the joint space, and outputting the motion track with the least motion consumption as the target track.
According to some embodiments, the simulation verification module is configured to:
Importing the track optimized by the track generation module in an off-line mode into a simulation environment of a robot simulation platform V-REP;
And matching a robot system model, importing a three-dimensional model of the assembled part, controlling the robot to simulate assembly along an off-line assembly track in a simulation environment, and verifying whether the assembled part reaches an expected position and posture.
According to some embodiments, the robot assembly off-line example learning system further comprises an off-line assembly platform comprising at least one robot body, a tool storage area of the robot gripper, an assembly fixture for positioning the part to be assembled, and an assembly storage area.
The second aspect of the technical scheme of the invention is a motion-capture-based offline robot assembly method, which comprises the following steps:
A. Collecting demonstration motion trajectories of the arms and hands of assembly demonstration personnel through an optical motion capture device;
B. Performing offline preprocessing on the acquired arm and hand motion trajectories: the output of the data acquisition module is processed by an anomaly detection algorithm based on local outlier factors to obtain the position information of each reflective marker point at each sampling moment in each assembly demonstration; the anomaly factor of each reflective marker point at each sampling moment is calculated, and points whose anomaly factor exceeds a preset threshold are regarded as sampling noise points and removed from the demonstration data set; the assembly trajectories of the multiple demonstrations are then fused to obtain an assembly motion model modeled by a Gaussian mixture model;
C. Generating a robot assembly motion trajectory from the obtained assembly motion model and the actual initial and target poses of the parts in the robot assembly, wherein the motion information of the hand marker points is analyzed to obtain the assembly fixture pose and state information required for robot assembly, and the generated robot assembly trajectory is adjusted by a post-processing algorithm to obtain a shortest-time motion trajectory suitable for robot assembly;
D. Importing the trajectory optimized offline by the trajectory generation module into the simulation environment of a robot simulation platform, matching a robot system model, importing a three-dimensional model of the assembled part, controlling the robot in the simulation environment to simulate assembly along the offline assembly trajectory, and verifying whether the assembled part reaches the expected position and posture.
According to some embodiments, said step B further comprises the steps of:
Given a set of samples {x_i}, i = 1, 2, 3, …, define:

RD_k(x, x′) = max(‖x − x^(k)‖, ‖x − x′‖)

as the reachable distance, where x^(k) is the k-th sample in the set {x_i} closest to x, and k is a manually selected integer parameter;

providing the local reachability density

LRD_k(x) = [(1/k) · Σ_{x′∈N_k(x)} RD_k(x′, x)]^(−1),

where N_k(x) denotes the k samples closest to x;

if, for a sample x therein, the local anomaly factor

LOF_k(x) = (1/k) · Σ_{x′∈N_k(x)} LRD_k(x′) / LRD_k(x)

exceeds the preset threshold, the corresponding sample x is eliminated from the demonstration data.
According to some embodiments, said step C further comprises the steps of:
C1. Obtaining all joint angle sequences ξ^(0) of the robot in the motion process by inverse kinematics solution of the obtained artificial assembly trajectory, wherein the joint angle q_i at each time point is defined as a frame; let k = 1;
C2. Calculating the cost value C_k of the current trajectory ξ^(0) through a cost function;
C3. Calculating the probability P_i of each frame under the trajectory being selected through a frame selection strategy, selecting partial frames of the joint angle sequence according to the frame selection strategy to form a new sequence, and calculating the cost value again;
C4. Calculating a reward value R_k = C_(k−1) − C_k;
C5. Updating the probability P_i of each frame in the joint angle sequence being selected according to the reward value;
C6. Updating the whole joint angle trajectory sequence by deleting frames whose selection probability is below a certain threshold and forming the remaining frames into a new joint angle sequence; let k = k + 1;
C7. If the absolute value of the reward value is below the first threshold or the number of iterations exceeds the second threshold, outputting the current remaining frame sequence; otherwise, returning to step C2.
Detailed Description
The conception, specific structure and technical effects of the present invention will be clearly and completely described in conjunction with the embodiments and the accompanying drawings, so that the objects, schemes and effects of the present invention can be fully understood.
FIG. 1 is a diagram illustrating an optical motion capture platform according to one embodiment. The optical motion capture platform 10 comprises: a set of optical motion capture cameras 11 (e.g., high-precision infrared cameras), network devices 12 (e.g., routers, Ethernet switches, etc.), and computing devices 13 (e.g., industrial controllers, industrial personal computers, PCs, etc.). As shown, a plurality of (e.g., six) optical motion capture cameras 11 are mounted on support stands above the workspace where the assembly demonstration takes place, and obliquely photograph the workspace (e.g., 0.4 m × 0.6 m × 1 m in length, width and height) from multiple angles. A plurality of retro-reflective marker points 14 are affixed to key positions such as the arm joints and fingertips of an assembler (e.g., a skilled assembly worker). Preferably, the cameras are arranged symmetrically around the workbench as shown in FIG. 1; the optimized spatial layout of the cameras ensures the capture precision of fine two-handed motions, and the fusion of data from the multi-angle cameras avoids mutual occlusion of the hand motions, thereby ensuring the consistency of the captured motions. Thus, within the workspace, all of the cameras are configured to simultaneously capture visual information of the real-time positions and postures reflected by the retro-reflective marker points on the arms and hands of the assembler during the assembly process. Further, each camera 11 may be fitted with a light source.
Referring to fig. 2, in an embodiment, a robot assembly offline example learning system includes a data acquisition module, a data preprocessing module, an offline robot motion trajectory generation module, and a simulation verification module. These modules may be integrated in the computer device 13 running in the optical motion capture platform, or may be integrated in other computing devices running in the robot-assembled offline example learning system.
The data acquisition module is configured to: calibrate the optical motion capture cameras 11 through an application program of a visual calibration algorithm; after calibration is completed, send an acquisition command to the optical motion capture cameras through the computing equipment, and collect position and posture data of the reflective marker points 14 attached to the arms and hands of skilled assembly workers. The switch 12 performs data transmission with the optical motion capture cameras 11, receives the position and posture data of the user's arm and hand motions collected by the cameras, and analyzes the received data offline to generate motion trajectory information executable by the robot. In addition, the position change data of the reflective marker points captured by the cameras are sent to a host computer, which stores the two-handed assembly action data.
The data preprocessing module is used for performing off-line processing on the acquired motion data and removing noise so as to reduce redundant data. In one embodiment, the data pre-processing module is configured to perform anomaly detection, trajectory segmentation, and trajectory fusion.
Anomaly detection processes the output of the data acquisition module through an anomaly detection algorithm based on local outlier factors to obtain the position information of each reflective marker point at each sampling moment in each assembly demonstration. The anomaly factor of each reflective marker point at each sampling moment is calculated. Points whose anomaly factor exceeds a given threshold are regarded as sampling noise points and excluded from the demonstration data set, improving post-processing efficiency.
Trajectory segmentation performs clustering, via a density clustering algorithm, taking the speed of each reflective marker point at each sampling moment in each assembly demonstration as the feature of that sampling moment. In this way the trajectory can be segmented so that only assembly-related trajectory data are retained, while irrelevant trajectory data (such as the motion of the user's arm and hand from an arbitrary position to the position of the part to be assembled, or the motion of the arm and hand away from the part after assembly) are eliminated, simplifying the later learning process.
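For illustration only, the density clustering on velocity features described above can be sketched with scikit-learn's DBSCAN; the synthetic one-dimensional marker data and the eps/min_samples values are assumptions, since the text does not name a specific density clustering algorithm:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Hypothetical 1-D marker coordinate sampled at 100 Hz: a fast approach,
# a near-static assembly phase, and a fast retreat.
t = np.arange(300) / 100.0
pos = np.concatenate([
    np.linspace(0.0, 0.5, 100),               # approach to the part
    0.5 + 0.0001 * rng.standard_normal(100),  # assembly (near-static)
    np.linspace(0.5, 0.0, 100),               # retreat after assembly
])
speed = np.abs(np.gradient(pos, t)).reshape(-1, 1)  # speed feature per sample

labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(speed)
assembly_label = labels[150]                  # label of a mid-assembly sample
assembly_idx = np.where(labels == assembly_label)[0]
print(assembly_idx.min(), assembly_idx.max()) # bounds of the assembly segment
```

Samples in the low-speed cluster correspond to the assembly-relevant segment; the fast approach and retreat samples fall into a separate cluster (or are marked as noise) and can be discarded.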
The trajectory fusion method comprises the following steps: firstly, a Gaussian Mixture Model (GMM) is adopted to model the assembly trajectories of the multiple assembly demonstrations obtained by the data preprocessing module, and the number of Gaussian kernels is specified by maximizing the Bayesian Information Criterion (BIC); the multiple assembly trajectory segments are then learned using the Expectation-Maximization (EM) method to obtain the parameters (mean, covariance and prior probability) of each Gaussian kernel; finally, an assembly motion model, modeled by the Gaussian mixture model and reflecting the key information of the assembly motion, is obtained.
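A minimal sketch of this GMM fitting with BIC-based kernel selection, using scikit-learn as an assumed library choice and synthetic pooled demonstration data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Pooled (time, coordinate) samples from three hypothetical demonstrations.
t = np.tile(np.linspace(0.0, 1.0, 100), 3)
x = np.sin(2.0 * np.pi * t) + 0.01 * rng.standard_normal(t.size)
data = np.column_stack([t, x])

# Pick the number of Gaussian kernels via the Bayesian information criterion.
# sklearn's bic() is lower-is-better, which matches "maximizing" the
# criterion in the text up to the sign convention used for BIC.
best_k, best_bic, best_gmm = None, np.inf, None
for k in range(2, 9):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(data)  # EM fit
    bic = gmm.bic(data)
    if bic < best_bic:
        best_k, best_bic, best_gmm = k, bic, gmm

# best_gmm.weights_, .means_, .covariances_ hold the per-kernel parameters
# (prior probability, mean, covariance) mentioned in the text.
print(best_k)
```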
The offline robot motion trajectory generation module is used for generating a smooth assembly trajectory. It is configured to output a smooth and efficient task-space (Cartesian-space) robot assembly motion trajectory by the Gaussian Mixture Regression (GMR) method, based on the assembly motion modeled by the Gaussian mixture model obtained from the data preprocessing module. The trajectory can be generated according to the specific initial and target positions at the time of robot assembly, so the robot assembly is not limited to the initial and target part positions used in the manual demonstration. The assembly trajectory has high transferability, is not limited by assembly site or equipment, and can be rapidly deployed on robot assembly systems of different models and configurations.
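Gaussian mixture regression is not bundled with common libraries, so the conditional-mean computation can be written directly. The sketch below is a one-dimensional simplification (time in, one trajectory coordinate out) of the task-space case described above; the function name and test data are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def gmr_predict(gmm, t_query):
    """E[x | t] under a GMM fitted on (t, x) pairs (1-D-in, 1-D-out GMR)."""
    t_query = np.asarray(t_query, dtype=float)
    mus, covs = gmm.means_, gmm.covariances_      # shapes (K, 2), (K, 2, 2)
    # Responsibility h_i(t) of each Gaussian kernel for each query time.
    h = np.stack([
        w * norm.pdf(t_query, mu[0], np.sqrt(cov[0, 0]))
        for w, mu, cov in zip(gmm.weights_, mus, covs)
    ])
    h /= h.sum(axis=0, keepdims=True)
    # Per-kernel conditional mean of x given t, blended by responsibility.
    cond = np.stack([
        mu[1] + cov[1, 0] / cov[0, 0] * (t_query - mu[0])
        for mu, cov in zip(mus, covs)
    ])
    return (h * cond).sum(axis=0)

# Demo on noisy samples of x = 2t, standing in for fused demonstration data.
rng = np.random.default_rng(2)
t = np.tile(np.linspace(0.0, 1.0, 100), 3)
x = 2.0 * t + 0.01 * rng.standard_normal(t.size)
gmm = GaussianMixture(n_components=4, random_state=0).fit(np.column_stack([t, x]))
t_new = np.linspace(0.1, 0.9, 5)
x_hat = gmr_predict(gmm, t_new)   # smooth regressed trajectory at new times
```

Because the regression conditions on arbitrary query inputs, the same mechanism allows trajectories to be generated for initial and target positions other than those of the demonstrations, as the text states.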
And the simulation verification module migrates the trajectory learned by the offline example to a simulation environment, and controls the robot in the simulation platform to complete the same assembly action. The offline example learning trajectory can be migrated to any robot system that meets the assembly freedom and workspace requirements, so that the example learning result is independent of the specific robot system. In addition, the simulation verification module provides a virtual environment to simulate running the assembly instance, wherein the steps comprise:
1) importing the track optimized by the track generation module in an off-line mode into a robot simulation platform (for example, a simulation environment of V-REP);
2) selecting a robot system model, wherein any robot system meeting the assembly freedom degree and the working space requirement can be selected, and the robot system comprises but is not limited to a serial robot, a parallel robot, a single-arm robot system or a double-arm robot system;
3) Through example learning, the robot system can efficiently complete high-precision assembly tasks of a working space in the same assembly scene as a skilled worker.
And when the virtual assembly simulation of the robot passes the verification, the simulation verification module transmits the motion instruction and data of the robot after the debugging is passed to the robot controller for controlling the motion of the robot in the actual assembly.
FIG. 3 is a block flow diagram of a method of a robot assembly offline example learning system, in an embodiment. The method comprises the following steps:
S1: The demonstration motion trajectories of the arms and hands of the assembly demonstration personnel are collected through the optical motion capture device.
S2: The acquired arm and hand motion trajectories are preprocessed offline, noise data and irrelevant data are eliminated, the assembly trajectories of multiple demonstrations are fused, and an assembly motion model modeled by a Gaussian mixture model is obtained.
S3: and generating an assembling motion track of the robot according to the obtained assembling motion model and the actual part initial pose and target pose of the robot assembly. And analyzing the motion information of the hand mark points to obtain the pose and state information of the assembly fixture required by the robot assembly. And a post-processing algorithm is also applied to reprocess the generated robot assembly track to obtain the motion track with the shortest time suitable for robot assembly.
S4: and controlling the robot to carry out field assembly according to the obtained assembly track and the assembly fixture information, and finishing the whole assembly demonstration learning process.
It can be understood that the above steps S1 and S2 mainly relate to robot assembly demonstration track extraction, and the step S3 mainly relates to robot assembly track optimization. Details of each step are described further below.
In some embodiments, step S1 further includes:
S1.1: Building the motion capture platform.
Firstly, the cameras are fixed around the experimental platform, with one end of an Ethernet cable connected to each camera and the other end connected to the switch; the switch provides power supply and data communication for the cameras; the data acquisition module, the data preprocessing module, the offline robot motion trajectory generation module and the simulation verification module form a local area network; and the computing equipment sends control commands.
S1.2: Collecting user assembly demonstration data.
Reflective marker points are affixed to key positions (fingertips and finger joints) of the user's arms and hands, and the same assembly action is repeatedly demonstrated in the designated assembly working area. The cameras are high-precision infrared motion capture cameras that capture the position and posture information of the reflective marker points attached to the user's arms and hands: the marker points reflect the light emitted by each camera's flash unit, the camera lens collects the reflected light in the scene to form an image focused on the camera sensor plane, and the position information of each reflective marker point is resolved from the image. The information is transmitted through the switch to the host computer for storage.
In some embodiments, step S2 further includes:
S2.1: provided is an abnormality detection method.
The anomaly detection method processes the output of the data acquisition module through an anomaly detection algorithm based on local outlier factors to obtain the position information of each reflective marker point at each sampling moment in each assembly demonstration. The anomaly factor of each reflective marker point at each sampling moment is calculated. Points whose anomaly factor exceeds a given threshold are regarded as sampling noise points and excluded from the demonstration data set, improving post-processing efficiency. The specific implementation is as follows:
Given a set of samples {x_i}, i = 1, 2, 3, …, define:

RD_k(x, x′) = max(‖x − x^(k)‖, ‖x − x′‖)   (1)

as the reachable distance, where x^(k) is the k-th sample in the set {x_i} closest to x, and k is a manually selected integer parameter.

The local reachability density is

LRD_k(x) = [(1/k) · Σ_{x′∈N_k(x)} RD_k(x′, x)]^(−1),   (2)

where N_k(x) denotes the k samples closest to x. From the two definitions (1) and (2), the local anomaly factor is defined as

LOF_k(x) = (1/k) · Σ_{x′∈N_k(x)} LRD_k(x′) / LRD_k(x).   (3)

As the LOF rises, the likelihood that the sample x is an outlier rises; samples whose LOF exceeds the given threshold are eliminated from the demonstration data.
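For illustration, scikit-learn's LocalOutlierFactor implements a closely related local anomaly factor (its reachability distance is based on the neighbor's k-distance, a slightly different convention from formula (1) above); a minimal noise-removal sketch on synthetic marker data might look like:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(3)
# Hypothetical marker samples along a smooth curve, plus two glitch points
# of the kind produced by stray reflections or momentary occlusion.
s = np.linspace(0.0, 1.0, 50)
traj = np.column_stack([s, s ** 2]) + 0.002 * rng.standard_normal((50, 2))
glitches = np.array([[0.5, 5.0], [0.2, -4.0]])
samples = np.vstack([traj, glitches])

lof = LocalOutlierFactor(n_neighbors=10)   # n_neighbors plays the role of k
labels = lof.fit_predict(samples)          # -1 marks detected outliers
clean = samples[labels == 1]               # thresholded demonstration data
print(len(samples), len(clean))
```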
S2.2: a track segmentation method.
The trajectory segmentation method performs density clustering through a density clustering algorithm, taking the speed of each reflective marker point at each sampling moment in each assembly demonstration as the feature of that sampling moment.
S2.3: a trajectory fusion method.
Firstly, a Gaussian mixture model is adopted to model the assembly trajectories of the multiple assembly demonstrations obtained by the data preprocessing module, and the number of Gaussian kernels is specified by maximizing the Bayesian information criterion; the multiple assembly trajectory segments are learned by the expectation-maximization method to obtain the parameters (mean, covariance and prior probability) of each Gaussian kernel; finally, an assembly motion model, modeled by the Gaussian mixture model and reflecting the key information of the assembly motion, is obtained.
In some embodiments, step S2 further includes:
S2.4: and generating an offline track.
And the off-line robot motion trail generation module outputs a smooth and efficient task space (Cartesian space) robot assembly motion trail through a Gaussian mixture regression method according to the assembly action modeled by the Gaussian mixture model obtained by the data preprocessing module. The method can generate the track according to the specific initial and target positions of the robot during assembly, so that the assembly of the robot is not limited by the initial and target positions of parts during manual demonstration.
S2.5: and (5) assembling and analyzing a clamp.
The trajectories of the reflective marker points at the hand joints and fingertips are analyzed to obtain the two-handed assembly action, and data matching is performed on the hand action to obtain the fixture type best suited to the assembly action.
Regarding step S3:
Due to the structural difference between the human arm and the robot, the assembly track of the human hand is not very suitable for the robot to execute, and in order to improve the assembly efficiency of the robot, a track post-processing module is added. The introduced post-processing flow is as shown in fig. 4, the assembly track of the human hand is optimized, noise and irrelevant actions in the hand movement process are removed, and the conversion from the hand assembly movement track to the movement track suitable for robot assembly is completed.
thus, in some embodiments, step S3 further includes:
S3.1: firstly, all joint angle sequences of the robot in the motion process are obtained through inverse kinematics solution of the artificial assembly track obtained by the off-line processing moduleJoint angle q at each time pointiDefined as a frame. Let k equal to 1.
S3.2: the cost function is defined as (here, 6-degree-of-freedom robot example):Calculating the current track xi(0)Cost C ofk。
s3.3: defining a frame selection policyCalculating to obtain the selected probability P of each frame under the tracki. And selecting partial frames in the joint angle sequence according to a frame selection strategy to form a new sequence, and calculating the cost function of the new sequence again.
S3.4: calculating a reward value Rk=Ck-1-Ck,CkNamely the cost function of the manual assembly track.
S3.5: updating the probability P that each frame in the sequence of joint angles is selected according to a reward valuei. The update strategy is: each round of selected frames is updated toUnselected frames are updated towherein 0<α<1 is the update rate. And S (xi) is determined according to the iteration result of each time, if the assembly task is well completed, the S (xi) is 1, and if not, the S (xi) is 0.
s3.6: and updating the whole joint angle track sequence. The method is to delete the frames with the probability lower than a certain threshold, and form the rest frames into a new joint angle sequence, and let k be k + 1.
S3.7: and judging whether the absolute value of the reward value is lower than a small constant or the iteration number is larger than a certain threshold, if not, returning to the step S3.2 to continue execution, and if so, outputting the current remaining frame sequence.
s3.8: and (4) taking the frame sequence output in the step (S3.7) as a path, performing speed planning interpolation in the joint space in the shortest time, and outputting a motion track in the shortest time.
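The loop of steps S3.1-S3.8 can be sketched as follows. Because the text abbreviates the exact cost function, frame selection strategy and probability update formulas, this sketch substitutes a toy joint-space path-length cost and a simple probability update for a single joint; it illustrates the control flow rather than the exact formulas of the method:

```python
import numpy as np

rng = np.random.default_rng(4)

def cost(frames):
    # Toy stand-in for the abbreviated 6-DOF cost function:
    # total joint-space path length of the frame sequence.
    return np.abs(np.diff(frames)).sum()

# Hypothetical 1-joint demonstration: 200 frames from 0 to 1 rad, with
# jitter that a direct motion would not need.
frames = np.linspace(0.0, 1.0, 200) + 0.05 * rng.standard_normal(200)
frames[0], frames[-1] = 0.0, 1.0              # fixed start and goal

p = np.full(len(frames), 0.9)                 # selection probabilities (S3.3)
alpha, c_prev = 0.1, cost(frames)             # update rate and initial cost
for k in range(200):                          # iterate steps S3.2-S3.7
    keep = rng.random(len(frames)) < p        # sample a candidate sub-sequence
    keep[0] = keep[-1] = True                 # endpoints are always kept
    c_new = cost(frames[keep])
    reward = c_prev - c_new                   # S3.4: R_k = C_(k-1) - C_k
    s = 1.0 if reward > 0 else 0.0            # S(xi): did this round improve?
    p[keep] += alpha * (s - p[keep])          # S3.5: reinforce useful frames
    p[~keep] += alpha * ((1.0 - s) - p[~keep])
    p[0] = p[-1] = 1.0                        # endpoints stay certain
    if s:
        c_prev = c_new
survivors = frames[p > 0.5]                   # S3.6: prune unlikely frames
print(len(survivors), round(cost(survivors), 3), round(cost(frames), 3))
```

Because removing intermediate frames between fixed endpoints can only shorten the joint-space path (triangle inequality), the surviving sub-sequence never costs more than the original demonstration; the retained frames would then feed the shortest-time velocity planning interpolation of step S3.8.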
As can be seen from FIGS. 5 and 6, the assembly trajectory obtained by the post-processing module is simpler and more efficient than the manually demonstrated assembly trajectory, and is suitable for execution by a robot.
In some embodiments, step S4 further includes:
S4.1: Robot assembly verification. Firstly, using the fixture state data obtained through the offline data processing module, a suitable assembly fixture is selected according to the assembly task and changed automatically through the quick-change device at the robot end; field assembly is then carried out along the shortest-time assembly trajectory obtained through the post-processing module, and the assembly effect is evaluated.
An exemplary guide rail mounting application of the present invention is described in conjunction with FIGS. 7 and 1. In one example, in the optical motion capture platform 10 shown in FIG. 1, a skilled assembler picks up the guide rail from the assembly storage area 26, mounts it on the to-be-assembled piece 24 of the assembly jig table 25, and fastens it with screws. The process can be repeated multiple times, so that the data acquisition module of the system can acquire sufficient movement positions and posture values of the arms, palms and fingers, from which a manual assembly trajectory is generated.
For the extracted manual assembly trajectory data, the above step S2 is executed by the computing device 13 to perform abnormal data elimination, trajectory segmentation, trajectory fusion, offline trajectory generation and assembly fixture analysis. For the manual rail assembly example, the end of the movement path (such as the position and posture of the fingers) is limited to the range of the assembly jig table 25 and the assembly storage area 26, and path data beyond this movement range, or path data that would cause assembly interference, can be excluded as abnormal data. In addition, the manual assembly trajectory can be segmented into different assembly steps, such as a guide rail pick-and-place step, a step of positioning the guide rail on the piece to be assembled, and a screw mounting step, which can be distinguished by the captured speed and motion type of the reflective marker points. For example, when the guide rail is picked up and placed by hand, the finger motion is mainly spatial translation accompanied by obvious elbow and shoulder movement, so this group of motion trajectories is classified as a pick-and-place trajectory. When the fingers and arm clamp the guide rail and keep it still for a given time, the group of motion trajectories is classified as a positioning trajectory; when the collected trajectory data indicate that only the palm and fingers operate a wrench tool in partial turns, the group of motion trajectories is classified as a fastener installation trajectory. Then, in the trajectory fusion step, the generated assembly motion model reflects the key information of the assembly motion, such as the key points of a reasonable conveying path for the guide rail, the precise mounting direction of the guide rail, and the type of fastener and its installation position.
Then, in the computing device 13, the above step S3 is executed: the preprocessed data are matched to a suitable robot (for example, the serial robot 21 shown in FIG. 7), the joint angle corresponding to each motion frame is solved inversely from the end-effector pose, shortest-time velocity planning interpolation is performed in the joint space, and the shortest-time motion trajectory is output. In addition, the gripper 23 at the robot end is configured according to the different assembly steps, such as a pneumatic claw for the pick-and-place and positioning steps, and an electric screwdriver for the screw mounting step. Further, by synchronizing with the data of the robot controller and its fixture library, the shortest-time assembly trajectory obtained through the post-processing module can be used when the above step S4 is executed on the computing device 13, and fixture nodes can be configured for matching or changing fixtures between assembly steps. For example, as shown in FIG. 7, before the guide rail is carried from the assembly storage area 26 to the to-be-assembled piece 24 on the assembly jig table 25, a pneumatic claw gripper for gripping the guide rail to be mounted is configured at the point where the robot switches from the previous step to the rail pick-up step. With the gripper node deployed, the path along which the robot end moves to the tool storage area 22 to install or change a gripper can be introduced through the robot controller and the gripper management device of the tool storage area 22. In this way, the manually demonstrated assembly trajectory can be transferred to the robot offline assembly trajectory (including the fixture operations), realizing practical operation of the offline assembly platform 20.
It should be recognized that the methods described herein may be implemented or carried out by computing device hardware, a combination of hardware and software, or by computing device instructions stored in a non-transitory computing device readable memory. The method may use standard programming techniques. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computing device system. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computing device systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computing device programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof. The computing device program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform with a suitable connection, including but not limited to a personal computer, a mini computing device, a mainframe, a workstation, a networked or distributed computing environment, a separate or integrated computing device platform, or a platform in communication with a charged-particle tool or other imaging device, and so forth. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, an optically readable and/or writable storage medium, RAM, ROM, or the like, such that it is readable by a programmable computing device and, when read, is operative to configure and operate the computing device to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computing device readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computing device itself when programmed according to the methods and techniques described herein.
The computing device program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
The above description is only a preferred embodiment of the present invention, and the present invention is not limited to the above embodiment. Any modifications, equivalent substitutions, improvements, and the like made within the spirit and principle of the present invention, so long as they achieve the technical effects of the present invention by the same means, shall fall within the protection scope of the present invention. The technical solution and/or embodiments of the invention may be modified and varied in other ways within the protection scope of the invention.