Background
In the field of robot operation, the workpiece position is conventionally obtained either by line-laser scanning or by manual teaching and programming, both of which are time-consuming and inefficient. Moreover, the workpiece point cloud data acquired by line-laser scanning contains many noise points, so its accuracy is low and the quality and efficiency of the robot operation suffer; manual teaching and programming introduces uncontrollable human factors and is prone to accidents.
With the rapid development of robot technology and three-dimensional vision, and the upgrading of industrial intelligent manufacturing, robot vision is increasingly applied in scenarios such as industrial production and the service industry. Guiding the robot through a vision system is an important means of realizing intelligent robot operation: operation guided by three-dimensional vision is more efficient and locates the workpiece more accurately.
However, in the prior art, the workpiece point cloud data acquired by the camera device is matched directly against the virtual point cloud data corresponding to the workpiece model. The workpiece model includes thickness data, whereas the camera device can acquire only the workpiece point cloud within its line of sight; matching the workpiece point cloud directly against the virtual point cloud of the whole model therefore cannot guarantee a correct correspondence, and the matching error is large. As shown in fig. 1, mark a represents the workpiece point cloud contour and mark b represents the virtual point cloud contour obtained directly from the workpiece model; marks a and b are not located on the same plane but are separated by a certain distance. That is, the virtual point cloud data does not exactly correspond to the workpiece point cloud data, which results in inaccurate matching.
Summary
In order to solve the problem of inaccurate workpiece positioning, the present application provides a workpiece positioning method, a computer device, a computer-readable storage medium, a robot and a robot operation method.
According to one aspect of the embodiment of the application, a workpiece positioning method is disclosed, and is used for a scene of a robot working on a workpiece, wherein a camera device is arranged on the robot. The workpiece positioning method comprises the following steps:
acquiring an actual point cloud of the workpiece under a base coordinate system of the robot based on the workpiece image acquired by the camera device;
acquiring a corresponding workpiece model of the workpiece under the base coordinate system;
constructing a virtual camera and a virtual scene, placing the workpiece model in the virtual scene, and enabling the workpiece model to be positioned in the view angle of the virtual camera;
shooting the workpiece model by adopting the virtual camera, and obtaining a virtual point cloud of the workpiece model under the base coordinate system;
and matching the actual point cloud with the virtual point cloud to obtain the position of the workpiece.
In an exemplary embodiment, the camera device is disposed at an end of the robot; the acquiring the actual point cloud of the workpiece under the base coordinate system of the robot based on the workpiece image acquired by the camera device comprises the following steps:
acquiring a workpiece image acquired by the camera device and a pose matrix of the robot when the camera device acquires the workpiece image;
according to the pose matrix and the hand-eye matrix of the robot, converting coordinates of a point cloud in the workpiece image under a camera coordinate system into the base coordinate system, and obtaining the actual point cloud; the hand-eye matrix represents a conversion relation of a camera coordinate system relative to a tool coordinate system of the robot.
In an exemplary embodiment, the constructing a virtual camera includes:
determining the position of the virtual camera in the virtual scene according to the pose matrix and the hand-eye matrix;
configuring parameters of the virtual camera according to the parameters of the camera device to construct the virtual camera; the parameters include field angle and pixels.
In an exemplary embodiment, the configuring the parameters of the virtual camera according to the parameters of the camera device includes:
configuring the angle of view of the virtual camera to be consistent with the angle of view of the camera device;
configuring the pixels of the virtual camera to be consistent with the pixels of the camera device.
In an exemplary embodiment, the determining the position of the virtual camera in the virtual scene according to the pose matrix and the hand-eye matrix includes:
determining a position of the camera device relative to the robot according to the pose matrix and the hand-eye matrix;
and setting the position of the virtual camera in the virtual scene according to the position of the camera device relative to the robot so that the position of the virtual camera corresponds to the position of the camera device.
In an exemplary embodiment, the acquiring a workpiece model of the workpiece in the base coordinate system includes:
acquiring image data of a plurality of angles of the workpiece;
integrating the image data of the plurality of angles into a three-dimensional model by utilizing three-dimensional modeling software;
and converting the three-dimensional model into the base coordinate system to obtain the workpiece model.
In an exemplary embodiment, said matching said actual point cloud and said virtual point cloud to obtain a position of said workpiece comprises:
and registering the actual point cloud and the virtual point cloud by adopting an iterative closest point (ICP) algorithm to obtain the position of the workpiece.
According to an aspect of an embodiment of the present application, there is disclosed a robot including:
a robot body having a plurality of axes of motion;
a camera device disposed on the robot body; and
a controller connected to the robot body and the camera device, for controlling the robot body and the camera device and performing the steps of the workpiece positioning method as described above.
According to an aspect of an embodiment of the present application, there is disclosed a working method for a robot provided with an end tool and a camera device, the working method including:
controlling the camera device to acquire a workpiece image;
carrying out workpiece positioning by adopting the workpiece positioning method to obtain the position of the workpiece;
and controlling the end tool to move to the position of the workpiece for operation.
In one exemplary embodiment, the end tool is a welding gun and the camera device is a three-dimensional camera.
According to an aspect of an embodiment of the present application, a computer apparatus is disclosed for a scenario in which a robot is working on a workpiece, the robot being provided with a camera device. The computer device includes:
the first acquisition module is used for acquiring an actual point cloud of the workpiece under a base coordinate system of the robot based on the workpiece image acquired by the camera device;
the first construction module is used for acquiring a corresponding workpiece model of the workpiece under the base coordinate system;
the second construction module is used for constructing a virtual camera and a virtual scene, placing the workpiece model in the virtual scene and enabling the workpiece model to be located in the view angle of the virtual camera;
the second acquisition module is used for shooting the workpiece model by adopting the virtual camera and acquiring a virtual point cloud of the workpiece model under the base coordinate system;
and the point cloud matching module is used for matching the actual point cloud with the virtual point cloud to obtain the position of the workpiece.
According to an aspect of an embodiment of the present application, there is disclosed a computer apparatus including:
one or more processors;
and a memory for storing one or more programs that, when executed by the one or more processors, cause the computer device to implement the aforementioned workpiece positioning method.
According to an aspect of an embodiment of the present application, a computer-readable storage medium storing computer-readable instructions that, when executed by a processor of a computer, cause the computer to perform the aforementioned workpiece positioning method is disclosed.
The technical scheme provided by the embodiment of the application at least comprises the following beneficial effects:
According to the technical solution provided by the application, a virtual camera and a workpiece model are constructed, the workpiece model in the virtual scene is photographed with the virtual camera, and a virtual point cloud of the workpiece model under the base coordinate system is obtained. This virtual point cloud is matched with the actual point cloud derived from the workpiece image acquired by the camera device, and the position of the workpiece is determined from the matching result. The inaccuracy caused by matching the actual point cloud directly against the workpiece model is thereby avoided, the accuracy of real-time workpiece positioning is improved, and the quality and efficiency of robot operation are improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Detailed Description
While this application is susceptible of embodiment in different forms, specific embodiments thereof are shown in the drawings and will be described herein in detail, with the understanding that the present disclosure is to be considered an exemplification of the principles of the application and is not intended to limit the application to the embodiments illustrated.
Furthermore, references to the terms "comprising," "including," "having," and any variations thereof in the description of the present application are intended to cover a non-exclusive inclusion. For example, a process, method, system, article or apparatus that comprises a list of steps or modules is not limited to those steps or modules, but may include other steps or modules not expressly listed or inherent to such a process, method, article or apparatus.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more features.
In the description of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "illustratively" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of the word "exemplary" or "for example" or "such as" is intended to present the relevant concepts in a concrete manner.
Exemplary embodiments will be described in detail below. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the application.
Fig. 2 shows a structural diagram of a robot of an exemplary embodiment. As shown in fig. 2, the robot 100 includes a robot body 101, a camera device 102 and an end tool 103; the camera device 102 is provided on the robot body 101, and the end tool 103 is provided at the end of the robot body 101. The robot body 101 drives the camera device 102 and the end tool 103 to move, so that the camera device 102 can photograph the workpiece at close range; the position of the workpiece is obtained by analyzing the workpiece image, and the end tool 103 is then driven to the position of the workpiece to perform the operation.
It can be appreciated that the robot 100 further includes a controller connected to the robot body 101, the camera device 102 and the end tool 103. The controller controls the movement of the robot body 101 and controls the camera device 102 to capture images of the workpiece; it further analyzes the workpiece images to obtain the position of the workpiece, and then controls the robot body 101 to drive the end tool 103 to the position of the workpiece to perform the operation. It should be understood that the controller may be built into the robot body 101 or provided outside the robot body 101.
The end tool 103 may be a welding gun, in which case the workpiece is an object to be welded and the robot 100 is a welding robot. The end tool 103 may be a glue gun, in which case the workpiece is an object to be glued and the robot 100 is a gluing robot. The end tool 103 may also be a cutter, in which case the workpiece is an object to be cut and the robot 100 is a cutting robot. Of course, the end tool 103 may be any other tool that can be provided at the end of the robot 100 and driven by the robot 100 to perform work; it is not limited to the welding gun, the glue gun, the cutter and the like.
The embodiment of the application provides a workpiece positioning method, computer equipment, a computer readable storage medium, a robot and a robot operation method, which can accurately position a workpiece in real time, thereby improving the quality and efficiency of robot operation.
The workpiece positioning method, the computer device, the computer readable storage medium, the robot and the robot operation method provided by the embodiment of the application are specifically described by the following embodiments, and the workpiece positioning method in the embodiment of the application is described first.
The embodiment of the application firstly provides a workpiece positioning method which is used for a scene that a robot performs three-dimensional positioning on a workpiece to work the workpiece, wherein a camera device is arranged on the robot. The workpiece positioning method comprises the following steps:
acquiring an actual point cloud of a workpiece under a base coordinate system of a robot based on a workpiece image acquired by a camera device;
obtaining a corresponding workpiece model of a workpiece under a base coordinate system;
constructing a virtual camera and a virtual scene, placing a workpiece model in the virtual scene, and enabling the workpiece model to be positioned in the view angle of the virtual camera;
shooting a workpiece model by adopting a virtual camera, and obtaining a virtual point cloud of the workpiece model under a base coordinate system;
and matching the actual point cloud with the virtual point cloud to obtain the position of the workpiece.
According to the technical solution provided by the application, a virtual camera and a workpiece model are constructed, the workpiece model in the virtual scene is photographed with the virtual camera, and a virtual point cloud of the workpiece model under the base coordinate system is obtained. This virtual point cloud is matched with the actual point cloud derived from the workpiece image acquired by the camera device, and the position of the workpiece is determined from the matching result. The inaccuracy caused by matching the actual point cloud directly against the workpiece model is thereby avoided; the method is robust to noise in the actual point cloud of the workpiece, and the accuracy of real-time workpiece positioning is improved, thereby improving the quality and efficiency of robot operation. At the same time, operating the robot by manual teaching is avoided, which reduces the workload of workers and further improves the quality and efficiency of robot operation.
Embodiments of the present application are further elaborated below in conjunction with the drawings in the examples of the specification.
Referring to fig. 3, the workpiece positioning method according to an exemplary embodiment of the present application includes the following steps S101 to S105.
S101, acquiring an actual point cloud of a workpiece under a base coordinate system of a robot based on a workpiece image acquired by a camera device.
In one exemplary embodiment, the camera device is mounted at the end of the robot, as shown in fig. 2. In this exemplary embodiment, as shown in fig. 4, step S101 includes the following steps S1011 to S1012.
S1011, acquiring a workpiece image acquired by the camera device and a pose matrix of the robot when the camera device acquires the workpiece image.
S1012, converting coordinates of the point cloud in the workpiece image under a camera coordinate system into a base coordinate system according to the pose matrix and the hand-eye matrix of the robot, and obtaining an actual point cloud.
The hand-eye matrix represents the conversion relation of the camera coordinate system relative to the tool coordinate system of the robot; specifically, it comprises the translation and the rotation from the robot end to the camera device mounted at the end. Obtaining the hand-eye matrix (hand-eye calibration) is prior art and is not described again here.
Wherein the pose matrix represents the conversion relation of the tool coordinate system of the robot relative to the base coordinate system of the robot. The tool coordinate system is used to define the center position of the end tool and the pose of the end tool.
In detail, in step S1012, the coordinates of the point cloud in the workpiece image under the camera coordinate system are converted into the base coordinate system based on the mapping relation pcd1 = transform(pcd0, toolPos × handEye), and the actual point cloud is obtained.
Wherein pcd1 represents the actual point cloud under the robot base coordinate system after conversion, pcd0 represents the point cloud under the camera coordinate system before conversion, transform represents the conversion function, toolPos represents the current pose matrix of the robot (a 4×4 matrix), and handEye represents the positional relationship matrix from the robot end TCP (tool center point) to the origin of the camera coordinate system, i.e. the hand-eye matrix.
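As an illustrative sketch only (the embodiment prescribes no implementation language), the mapping relation pcd1 = transform(pcd0, toolPos × handEye) can be written with NumPy homogeneous coordinates as follows; the example matrices are assumed values, not calibration results:

```python
import numpy as np

def transform(pcd0, T):
    """Apply a 4x4 homogeneous transform T to an Nx3 point cloud."""
    homo = np.hstack([pcd0, np.ones((len(pcd0), 1))])
    return (homo @ T.T)[:, :3]

# Example: pure translations standing in for a real pose and hand-eye matrix.
tool_pos = np.eye(4)
tool_pos[:3, 3] = [0.5, 0.0, 1.0]   # assumed robot end pose in the base frame
hand_eye = np.eye(4)
hand_eye[:3, 3] = [0.0, 0.0, 0.1]   # assumed camera offset in the tool frame

pcd0 = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])  # camera-frame points
pcd1 = transform(pcd0, tool_pos @ hand_eye)          # base-frame points
```

Chaining toolPos and handEye first and then applying the product once avoids transforming the (possibly large) point cloud twice.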
S102, acquiring a corresponding workpiece model of the workpiece under the base coordinate system.
In one exemplary embodiment, as shown in fig. 5, step S102 includes the following steps S1021 to S1023.
S1021, image data of a plurality of angles of the workpiece is acquired.
S1022, integrating the image data of the multiple angles into a three-dimensional model by utilizing three-dimensional modeling software.
It will be appreciated that a three-dimensional digital model is a model of a product created using three-dimensional modeling software such as UG or CATIA. A three-dimensional model is a polygonal representation of an object, typically displayed with a computer or other video device.
S1023, converting the three-dimensional digital model from the world coordinate system to the base coordinate system to obtain the workpiece model.
The workpiece model may be, for example, an STL file or the like.
S103, constructing a virtual camera and a virtual scene, placing the workpiece model in the virtual scene, and enabling the workpiece model to be located in the view angle of the virtual camera.
In detail, the virtual camera may be constructed using software such as Blender.
In one exemplary embodiment, as shown in fig. 6, in step S103, a virtual camera is constructed, including the following steps S1031 to S1032.
S1031, determining the position of the virtual camera in the virtual scene according to the pose matrix and the hand-eye matrix of the robot.
In one exemplary embodiment, as shown in fig. 7, step S1031 includes the following steps S10311 to S10312.
S10311, determining the position of the camera device relative to the robot according to the pose matrix and the hand-eye matrix.
In detail, in step S10311, the position of the camera device relative to the robot is determined based on the mapping relation camPos = toolPos × handEye.
Wherein camPos represents the position of the camera device relative to the robot, toolPos represents the pose matrix of the robot when the camera device acquired the workpiece image, and handEye represents the hand-eye matrix.
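A minimal sketch of the relation camPos = toolPos × handEye, splitting the resulting 4×4 matrix into the position and orientation used to place the virtual camera (the matrices below are illustrative assumptions):

```python
import numpy as np

def camera_pose_in_base(tool_pos, hand_eye):
    """camPos = toolPos x handEye: pose of the camera in the robot base frame.
    Returns the translation (position) and rotation split out of the result."""
    cam = tool_pos @ hand_eye
    return cam[:3, 3], cam[:3, :3]

tool_pos = np.eye(4)
tool_pos[:3, 3] = [1.0, 0.0, 0.5]   # assumed pose matrix at capture time
hand_eye = np.eye(4)
hand_eye[:3, 3] = [0.0, 0.0, 0.1]   # assumed hand-eye matrix

position, rotation = camera_pose_in_base(tool_pos, hand_eye)
```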
S10312, setting a position of the virtual camera in the virtual scene according to a position of the camera device with respect to the robot, so that the position of the virtual camera corresponds to the position of the camera device.
S1032, configuring parameters of the virtual camera according to the parameters of the camera device, and constructing the virtual camera.
In detail, the aforementioned parameters include the field angle and the pixels. In step S1032, the field angle of the virtual camera is configured to be consistent with the field angle of the camera device, and the pixels of the virtual camera are configured to be consistent with the pixels of the camera device.
By setting the position of the virtual camera in the virtual scene to correspond to the position of the camera device in the robot and setting the parameters of the virtual camera to be consistent with the parameters of the camera device, the virtual point cloud collected by the virtual camera can be ensured to correspond to the position of the actual point cloud collected by the camera device to the greatest extent, and the accuracy of workpiece positioning is improved.
In one exemplary embodiment, the field angle camView of the camera device is 0.87 and the resolution of the camera device is X = 1280, Y = 1024; accordingly, the field angle camView of the virtual camera is 0.87 and the resolution of the virtual camera is X = 1280, Y = 1024.
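If camView = 0.87 is interpreted as a horizontal field of view in radians (an assumption, since the embodiment does not state the unit), the corresponding pinhole intrinsics of the virtual camera could be derived as follows:

```python
import math

def intrinsics_from_fov(fov_rad, width, height):
    """Pinhole intrinsics from a horizontal field of view and image size
    (assumes square pixels and a centred principal point)."""
    fx = (width / 2) / math.tan(fov_rad / 2)
    return {"fx": fx, "fy": fx, "cx": width / 2, "cy": height / 2}

# The exemplary field angle and resolution from the embodiment above.
K = intrinsics_from_fov(0.87, 1280, 1024)
```

Configuring the virtual camera from the same field angle and resolution as the real device keeps the two projections consistent, which is what lets the rendered and captured views correspond.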
An exemplary embodiment virtual camera and workpiece model is shown in fig. 8, where reference numeral 401 represents a virtual camera and reference numeral 402 represents a workpiece model.
It will be appreciated that the foregoing parameters may also include other parameters of the camera, such as the pitch angle, azimuth angle, roll angle, etc. of the camera.
S104, shooting the workpiece model by using a virtual camera, and obtaining a virtual point cloud of the workpiece model under a base coordinate system.
In an embodiment in which the position of the virtual camera corresponds to the position of the camera device relative to the robot and parameters such as the field angle are consistent with those of the camera device, the surface point cloud of the workpiece model obtained in step S104 is seen from the same view angle as the actual point cloud.
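The effect of rendering only the surface visible from the camera can be illustrated with a crude back-face culling test (a simplification: a real renderer such as Blender's depth output also handles self-occlusion; the points, normals and camera position below are assumed examples):

```python
import numpy as np

def visible_points(points, normals, cam_pos):
    """Keep surface points whose outward normal faces the camera.
    This is back-face culling only; it does not model self-occlusion."""
    view = cam_pos - points                       # vectors point -> camera
    facing = np.einsum('ij,ij->i', view, normals) > 0
    return points[facing]

# Two sample surface points: one on the top face, one on the bottom face.
pts = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0]])
nrm = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
cam = np.array([0.0, 0.0, 2.0])                   # camera above the part
vis = visible_points(pts, nrm, cam)               # only the top point remains
```

This is why the virtual point cloud, unlike the full model, contains no points from the hidden back side, matching what the real camera can see.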
S105, matching the actual point cloud with the virtual point cloud to obtain the position of the workpiece.
In an exemplary embodiment, in step S105, an iterative closest point (ICP) algorithm is used to register the actual point cloud and the virtual point cloud to obtain the position of the workpiece.
It will be appreciated that the iterative closest point (ICP) algorithm for point cloud registration takes two point clouds as input (the source data and the data to be registered) and outputs a transformation (a compensation matrix) such that the overlap between the source data and the data to be registered is as high as possible. The transformation is typically rigid, comprising rotation and translation. How to match the virtual point cloud to the position of the actual point cloud using the ICP algorithm is prior art and is not described in detail here.
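A toy point-to-point ICP (brute-force nearest neighbours plus a Kabsch rigid fit) can illustrate the registration step; this is a didactic sketch, and production code would use a library implementation such as Open3D's:

```python
import numpy as np

def best_rigid(src, dst):
    """Least-squares rotation and translation mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=20):
    """Toy point-to-point ICP: 4x4 transform aligning source to target."""
    T = np.eye(4)
    src = source.copy()
    for _ in range(iters):
        # brute-force nearest neighbour of each source point in the target
        d = np.linalg.norm(src[:, None] - target[None], axis=2)
        matched = target[d.argmin(1)]
        R, t = best_rigid(src, matched)
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T

# Toy data: the "actual" cloud is the "virtual" one shifted by 0.1 along x.
tgt = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
src_pts = tgt + np.array([0.1, 0.0, 0.0])
T = icp(src_pts, tgt)               # recovers the -0.1 x-translation
```

The recovered transform plays the role of the compensation matrix described above: applied to the workpiece model pose, it yields the actual workpiece position.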
FIG. 9 is an interface diagram of matching an actual point cloud with a virtual point cloud in accordance with an example embodiment. As shown in fig. 9, the virtual point cloud and the actual point cloud are attached to the same surface, and the virtual point cloud and the actual point cloud can be completely matched together.
Referring to fig. 2 again, in order to implement the workpiece positioning method provided by the embodiment of the present application, a robot 100 is provided, and the robot 100 includes a robot body 101, a camera device 102, an end tool 103 and a controller (not shown). The robot body 101 has a plurality of moving axes, and the camera device 102 and the end tool 103 are disposed at the end of the robot body 101. The controller is connected to the robot body 101, the camera device 102 and the end tool 103 for controlling the robot body 101, the camera device 102 and the end tool 103 so that the robot 100 can perform all or part of the steps of the workpiece positioning method shown in any one of fig. 3 to 7.
For example, the robot body 101 has six axes of motion, i.e., the robot 100 is a six-axis robot.
For example, the end tool 103 is a welding gun.
Fig. 10 shows a flow chart of a robotic work method of an exemplary embodiment. As shown in fig. 10, the robot working method includes the following steps S201 to S203.
S201, controlling a camera device to collect the workpiece image.
S202, performing workpiece positioning by adopting the workpiece positioning method to obtain the position of the workpiece.
S203, controlling the end tool to move to the position of the workpiece to perform work.
In one exemplary embodiment, as shown in fig. 2, the end tool 103 is a welding gun and the workpiece is an object to be welded. The position of the weld in the workpiece is obtained in step S202; in step S203, the end tool 103 is controlled to move to the position of the weld, and the weld of the workpiece is welded using the end tool 103.
Referring next to fig. 11, fig. 11 is a block diagram illustrating a computer apparatus 200, which may be used in a robot to perform all or part of the steps of the workpiece positioning method shown in any of fig. 3 to 7, according to an exemplary embodiment. As shown in fig. 11, the computer device 200 includes, but is not limited to: a first acquisition module 201, a first construction module 202, a second construction module 203, a second acquisition module 204 and a point cloud matching module 205.
The first obtaining module 201 is configured to obtain an actual point cloud of the workpiece under a base coordinate system of the robot based on the image of the workpiece acquired by the camera device.
The first building module 202 is configured to obtain a workpiece model corresponding to the workpiece in the base coordinate system.
The second construction module 203 is configured to construct a virtual camera and a virtual scene, place the workpiece model in the virtual scene, and position the workpiece model within a field angle of the virtual camera.
The second obtaining module 204 is configured to capture a workpiece model with a virtual camera, and obtain a virtual point cloud of the workpiece model in a base coordinate system.
The point cloud matching module 205 is configured to match the actual point cloud and the virtual point cloud, and obtain a position of the workpiece.
The implementation process of the functions and roles of each module in the above-mentioned computer device 200 is specifically described in the implementation process of the corresponding steps in the above-mentioned workpiece positioning method, and will not be described herein again.
The computer device 200 may be any terminal having an information processing function, such as a desktop computer, a notebook computer, or the like.
FIG. 12 schematically shows a block diagram of a computer system of a computer device for implementing a workpiece positioning method according to an embodiment of the application.
It should be noted that, the computer system 300 of the computer device shown in fig. 12 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 12, the computer system 300 includes a central processing unit 301 (Central Processing Unit, CPU) which can execute various appropriate actions and processes according to a program stored in a Read-Only Memory 302 (ROM) or a program loaded from a storage section 303 into a random access Memory 304 (Random Access Memory, RAM). In the random access memory 304, various programs and data required for the system operation are also stored. The central processing unit 301, the read only memory 302, and the random access memory 304 are connected to each other via a bus 305. An Input/Output interface 306 (i.e., an I/O interface) is also connected to bus 305.
The following components are connected to the input/output interface 306: an input portion 307 including a keyboard, a mouse, and the like; an output section 308 including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and the like, and a speaker, and the like; a storage section 303 including a hard disk or the like; and a communication section 309 including a network interface card such as a local area network card, a modem, or the like. The communication section 309 performs communication processing via a network such as the internet. A drive 310 is also connected to the input/output interface 306 as needed. A removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed on the drive 310 as needed, so that a computer program read out therefrom is installed into the storage section 303 as needed.
In particular, the processes described in the various method flowcharts may be implemented as computer software programs according to embodiments of the application. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 309, and/or installed from the removable medium 311. The computer program, when executed by the central processor 301, performs the various functions defined in the system of the present application.
It should be noted that, the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable storage medium.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative: the division into modules is merely a division by logical function, and other manners of division are possible in actual practice. For example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not performed.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.