Disclosure of Invention
To overcome the problems in the related art, the invention provides a transferable image recognition method and device.
According to a first aspect of the embodiments of the present invention, there is provided a transferable image recognition method, including:
determining the image type of an image input into an image recognition model, wherein the image types include labeled source domain images and unlabeled target domain images, and the image recognition model comprises a feature extractor, a category predictor and a domain discriminator;
when the input image is a labeled source domain image, passing the labeled source domain image through the feature extractor and the category predictor, and determining a cross entropy loss;
when the input image is an unlabeled target domain image, passing the target domain image through the feature extractor and the domain discriminator, and simultaneously through the feature extractor and the category predictor;
determining an adversarial loss according to the output result of the domain discriminator and the similarity between the target domain image and the center point of each category of source domain images;
determining an information maximization loss according to the output result of the category predictor;
optimizing the image recognition model according to the cross entropy loss, the adversarial loss and the information maximization loss.
In one embodiment, preferably, the method further comprises:
acquiring a target image to be identified;
and identifying the target image according to the image recognition model to determine the category of the target image.
In one embodiment, preferably, the cross entropy loss is calculated using the following first calculation formula:

L_CE(D_s) = −E_{(x_s, y_s)∈D_s} Σ_{k=1}^{K} 1[k = y_s] log σ_k(C(G(x_s)))

wherein D_s represents all source domain images, L_CE(D_s) represents the cross entropy loss of all source domain images, E represents the expectation, x_s represents a source domain image, y_s represents the label category of the source domain image, 1[·] represents the indicator function, C(G(x_s)) represents the output of the source domain image after passing through the feature extractor G and the category predictor C, σ denotes the softmax function, log denotes the log function, and K represents the total number of image categories.
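The first calculation formula can be illustrated with a minimal NumPy sketch (not the claimed implementation; the function names and the sample logits are assumptions for illustration):

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class axis
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy_loss(logits, labels, num_classes):
    # logits: (N, K) category-predictor outputs C(G(x_s)) for N source images
    # labels: (N,) integer label categories y_s
    probs = softmax(logits)                    # sigma_k(C(G(x_s)))
    one_hot = np.eye(num_classes)[labels]      # indicator 1[k = y_s]
    # expectation over D_s of -sum_k 1[k = y_s] log sigma_k
    return -np.mean(np.sum(one_hot * np.log(probs), axis=1))

logits = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.3]])
labels = np.array([0, 1])
loss = cross_entropy_loss(logits, labels, num_classes=3)
```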
In one embodiment, preferably, determining the adversarial loss according to the output result of the domain discriminator and the similarity between the target domain image and the center point of each category of source domain images includes:
determining an initial adversarial loss from the output result of the domain discriminator, wherein the initial adversarial loss is calculated using the following second calculation formula:

L_d_initial(D_i) = d_i log D(G(x_t)) + (1 − d_i) log(1 − D(G(x_t)))

wherein L_d_initial(D_i) represents the initial adversarial loss of the ith target domain image, D_i represents the ith target domain image, x_t represents the ith target domain image, D(G(x_t)) represents the output of the ith target domain image after passing through the feature extractor and then the domain discriminator, which is equivalent to a binary classification problem, d_i represents the binary label of the ith target domain image, indicating whether the image belongs to the source domain or the target domain, and maximizing L_d_initial(D_i) enables the domain discriminator to perform feature-level alignment;
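A minimal sketch of the second calculation formula, assuming the discriminator output is a probability in (0, 1) and the convention d_i = 1 for the target domain (both assumptions for illustration):

```python
import numpy as np

def initial_adversarial_loss(d_out, d_label):
    # d_out: discriminator output D(G(x_t)), a probability in (0, 1)
    # d_label: binary domain label d_i (1 = target domain, an assumed convention)
    # standard binary log-likelihood; maximizing it trains the discriminator
    return d_label * np.log(d_out) + (1.0 - d_label) * np.log(1.0 - d_out)

loss_correct = initial_adversarial_loss(0.9, 1)  # confident, correct prediction
loss_wrong = initial_adversarial_loss(0.1, 1)    # confident but wrong prediction
```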
determining the cluster center of each category of images from the features of all source domain images output by the feature extractor, wherein the cluster center is calculated using the following third calculation formula:

c_k = ( Σ_{(x_s, y_s)∈D_s} 1[y_s = k] G(x_s) ) / ( Σ_{(x_s, y_s)∈D_s} 1[y_s = k] )

wherein c_k represents the cluster center of the kth category of images, x_s represents a source domain image, y_s represents the label category of the source domain image, D_s represents all source domain images, 1[·] represents the indicator function, and G(x_s) represents the feature of the source domain image output by the feature extractor;
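The third calculation formula amounts to a per-category mean of source features; a minimal sketch (function and variable names are illustrative assumptions):

```python
import numpy as np

def class_centers(features, labels, num_classes):
    # features: (N, d) source features G(x_s); labels: (N,) categories y_s
    # c_k = mean of G(x_s) over samples with y_s = k (third calculation formula)
    centers = np.zeros((num_classes, features.shape[1]))
    for k in range(num_classes):
        mask = labels == k                  # indicator 1[y_s = k]
        centers[k] = features[mask].mean(axis=0)
    return centers

feats = np.array([[1.0, 0.0], [3.0, 0.0], [0.0, 2.0]])
labs = np.array([0, 0, 1])
centers = class_centers(feats, labs, num_classes=2)
```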
calculating the similarity between each target domain image and its nearest cluster center, and taking the similarity as the weight of the target domain image's initial adversarial loss, wherein the weight is calculated using the following fourth calculation formula:

w_t = max_{k} D_f(G(x_t), c_k)

wherein w_t represents the weight corresponding to the initial adversarial loss of the ith target domain image, D_f represents the cosine similarity, c_k represents the cluster center of the kth category of images, and G(x_t) represents the feature of the ith target domain image;
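A minimal sketch of the fourth calculation formula, assuming "nearest cluster center" means the center with the highest cosine similarity D_f (an assumption for illustration):

```python
import numpy as np

def cosine(a, b):
    # cosine similarity D_f between two feature vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def target_weight(target_feat, centers):
    # w_t = similarity to the nearest cluster center, taken here as the
    # maximum cosine similarity over all class centers c_k
    return max(cosine(target_feat, c) for c in centers)

centers = np.array([[1.0, 0.0], [0.0, 1.0]])
w = target_weight(np.array([0.9, 0.1]), centers)  # close to center of class 0
```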
calculating the adversarial loss corresponding to the target domain image according to the initial adversarial loss and its corresponding weight, wherein the adversarial loss is calculated using the following fifth calculation formula:

L_d(D_i) = w_t · L_d_initial(D_i)

wherein L_d(D_i) represents the adversarial loss of the ith target domain image, w_t represents the weight corresponding to the initial adversarial loss of the ith target domain image, x_t represents the ith target domain image, D(G(x_t)) represents the output result of the ith target domain image after passing through the feature extractor and the domain discriminator, and d_i represents the binary label of the ith target domain image.
In one embodiment, preferably, determining the information maximization loss according to the output result of the category predictor comprises:
calculating the entropy minimization loss and the class average entropy maximization loss of the target domain image according to the output result of the category predictor;
calculating the information maximization loss according to the entropy minimization loss and the class average entropy maximization loss;
wherein the entropy minimization loss is calculated using the following sixth calculation formula:

L_ent(D_t) = −E_{x_t∈D_t} Σ_{k=1}^{K} σ_k(C(G(x_t))) log σ_k(C(G(x_t)))

wherein L_ent(D_t) represents the entropy minimization loss, D_t represents all target domain images, σ denotes the softmax function, C(G(x_t)) represents the output of the target domain image after passing through the feature extractor and the category predictor, K represents the total number of image categories, E represents the expectation, and x_t represents a target domain image;
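A minimal sketch of the sixth calculation formula (illustrative only; the logits stand in for the category predictor output):

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class axis
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def entropy_loss(target_logits):
    # L_ent: mean Shannon entropy of sigma(C(G(x_t))) over D_t
    p = softmax(target_logits)
    return -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))

sharp = entropy_loss(np.array([[10.0, 0.0, 0.0]]))  # near one-hot -> low entropy
flat = entropy_loss(np.array([[1.0, 1.0, 1.0]]))    # uniform -> entropy log(3)
```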
calculating the class average entropy maximization loss using the following seventh calculation formula:

L_div(D_t) = Σ_{k=1}^{K} p̂_k log p̂_k, with p̂_k = E_{x_t∈D_t}[σ_k(C(G(x_t)))]

wherein L_div(D_t) represents the class average entropy maximization loss and p̂_k represents the average softmax probability of all samples for the kth category;
wherein the information maximization loss is calculated using the following eighth calculation formula:

L_IM = L_ent + L_div

wherein L_IM represents the information maximization loss, L_ent(D_t) represents the entropy minimization loss, and L_div(D_t) represents the class average entropy maximization loss.
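The sixth, seventh and eighth calculation formulas can be combined in one illustrative sketch (the sign convention, under which minimizing L_ent + L_div encourages confident yet class-balanced predictions, is an assumption consistent with the definitions above):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def information_maximization_loss(target_logits):
    p = softmax(target_logits)
    # L_ent: mean per-sample entropy, to be minimized (sixth formula)
    l_ent = -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))
    # p_hat_k: average softmax probability of category k over all samples
    p_hat = p.mean(axis=0)
    # L_div: negative entropy of the class marginal; minimizing it together
    # with L_ent pushes the marginal toward uniform (seventh formula)
    l_div = np.sum(p_hat * np.log(p_hat + 1e-12))
    return l_ent + l_div  # eighth formula: L_IM = L_ent + L_div

balanced = np.array([[5.0, 0.0], [0.0, 5.0]])   # sharp, class-balanced
collapsed = np.array([[5.0, 0.0], [5.0, 0.0]])  # sharp, all one class
l_im = information_maximization_loss(balanced)
```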
In one embodiment, preferably, optimizing the image recognition model according to the cross entropy loss, the adversarial loss, and the information maximization loss comprises:
determining the final model loss according to the cross entropy loss, the adversarial loss, and the information maximization loss, wherein the final model loss is calculated using the following ninth calculation formula:

L = L_CE(D_s) − L_d(D_t) + β·L_IM

wherein L represents the final model loss, L_CE(D_s) represents the cross entropy loss, L_d(D_t) represents the adversarial loss, L_IM represents the information maximization loss, and β represents the balancing parameter.
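A one-line sketch of the ninth calculation formula (the numeric loss values and β = 0.1 are illustrative assumptions):

```python
def model_final_loss(l_ce, l_d, l_im, beta=0.1):
    # ninth calculation formula: L = L_CE(D_s) - L_d(D_t) + beta * L_IM
    # beta is the balancing parameter; 0.1 is an assumed example value
    return l_ce - l_d + beta * l_im

L = model_final_loss(l_ce=0.8, l_d=-0.5, l_im=-0.6, beta=0.1)
```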
According to a second aspect of the embodiments of the present invention, there is provided a transferable image recognition apparatus, including:
a first determining module, configured to determine the image type of an image input into the image recognition model, wherein the image types include labeled source domain images and unlabeled target domain images, and the image recognition model comprises a feature extractor, a category predictor and a domain discriminator;
a first processing module, configured to, when the input image is a labeled source domain image, pass the labeled source domain image through the feature extractor and the category predictor, and determine a cross entropy loss;
a second processing module, configured to, when the input image is an unlabeled target domain image, pass the target domain image through the feature extractor and the domain discriminator, and simultaneously through the feature extractor and the category predictor;
a second determining module, configured to determine an adversarial loss according to the output result of the domain discriminator and the similarity between the target domain image and the center point of each category of source domain images;
a third determining module, configured to determine an information maximization loss according to the output result of the category predictor;
an optimization module, configured to optimize the image recognition model according to the cross entropy loss, the adversarial loss and the information maximization loss.
In one embodiment, preferably, the apparatus further comprises:
an acquiring module, configured to acquire a target image to be identified;
an identifying module, configured to identify the target image according to the image recognition model to determine the category of the target image.
In one embodiment, preferably, the cross entropy loss is calculated using the following first calculation formula:

L_CE(D_s) = −E_{(x_s, y_s)∈D_s} Σ_{k=1}^{K} 1[k = y_s] log σ_k(C(G(x_s)))

wherein D_s represents all source domain images, L_CE(D_s) represents the cross entropy loss of all source domain images, E represents the expectation, x_s represents a source domain image, y_s represents the label category of the source domain image, 1[·] represents the indicator function, C(G(x_s)) represents the output of the source domain image after passing through the feature extractor G and the category predictor C, σ denotes the softmax function, log denotes the log function, and K represents the total number of image categories.
In one embodiment, preferably, the second determining module is configured to:
determining an initial adversarial loss from the output result of the domain discriminator, wherein the initial adversarial loss is calculated using the following second calculation formula:

L_d_initial(D_i) = d_i log D(G(x_t)) + (1 − d_i) log(1 − D(G(x_t)))

wherein L_d_initial(D_i) represents the initial adversarial loss of the ith target domain image, D_i represents the ith target domain image, x_t represents the ith target domain image, D(G(x_t)) represents the output of the ith target domain image after passing through the feature extractor and then the domain discriminator, which is equivalent to a binary classification problem, d_i represents the binary label of the ith target domain image, indicating whether the image belongs to the source domain or the target domain, and maximizing L_d_initial(D_i) enables the domain discriminator to perform feature-level alignment;
determining the cluster center of each category of images from the features of all source domain images output by the feature extractor, wherein the cluster center is calculated using the following third calculation formula:

c_k = ( Σ_{(x_s, y_s)∈D_s} 1[y_s = k] G(x_s) ) / ( Σ_{(x_s, y_s)∈D_s} 1[y_s = k] )

wherein c_k represents the cluster center of the kth category of images, x_s represents a source domain image, y_s represents the label category of the source domain image, D_s represents all source domain images, 1[·] represents the indicator function, and G(x_s) represents the feature of the source domain image output by the feature extractor;
calculating the similarity between each target domain image and its nearest cluster center, and taking the similarity as the weight of the target domain image's initial adversarial loss, wherein the weight is calculated using the following fourth calculation formula:

w_t = max_{k} D_f(G(x_t), c_k)

wherein w_t represents the weight corresponding to the initial adversarial loss of the ith target domain image, D_f represents the cosine similarity, c_k represents the cluster center of the kth category of images, and G(x_t) represents the feature of the ith target domain image;
calculating the adversarial loss corresponding to the target domain image according to the initial adversarial loss and its corresponding weight, wherein the adversarial loss is calculated using the following fifth calculation formula:

L_d(D_i) = w_t · L_d_initial(D_i)

wherein L_d(D_i) represents the adversarial loss of the ith target domain image, w_t represents the weight corresponding to the initial adversarial loss of the ith target domain image, x_t represents the ith target domain image, D(G(x_t)) represents the output result of the ith target domain image after passing through the feature extractor and the domain discriminator, and d_i represents the binary label of the ith target domain image;
the third determining module is configured to:
calculating the entropy minimization loss and the class average entropy maximization loss of the target domain image according to the output result of the category predictor;
calculating the information maximization loss according to the entropy minimization loss and the class average entropy maximization loss;
wherein the entropy minimization loss is calculated using the following sixth calculation formula:

L_ent(D_t) = −E_{x_t∈D_t} Σ_{k=1}^{K} σ_k(C(G(x_t))) log σ_k(C(G(x_t)))

wherein L_ent(D_t) represents the entropy minimization loss, D_t represents all target domain images, σ denotes the softmax function, C(G(x_t)) represents the output of the target domain image after passing through the feature extractor and the category predictor, K represents the total number of image categories, E represents the expectation, and x_t represents a target domain image;
calculating the class average entropy maximization loss using the following seventh calculation formula:

L_div(D_t) = Σ_{k=1}^{K} p̂_k log p̂_k, with p̂_k = E_{x_t∈D_t}[σ_k(C(G(x_t)))]

wherein L_div(D_t) represents the class average entropy maximization loss and p̂_k represents the average softmax probability of all samples for the kth category;
wherein the information maximization loss is calculated using the following eighth calculation formula:

L_IM = L_ent + L_div

wherein L_IM represents the information maximization loss, L_ent(D_t) represents the entropy minimization loss, and L_div(D_t) represents the class average entropy maximization loss;
the optimization module is configured to:
determining the final model loss according to the cross entropy loss, the adversarial loss, and the information maximization loss, wherein the final model loss is calculated using the following ninth calculation formula:

L = L_CE(D_s) − L_d(D_t) + β·L_IM

wherein L represents the final model loss, L_CE(D_s) represents the cross entropy loss, L_d(D_t) represents the adversarial loss, L_IM represents the information maximization loss, and β represents the balancing parameter.
According to a third aspect of embodiments of the present invention, there is provided a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method of any one of the first aspect.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
compared with the prior art, the technical scheme of the invention not only utilizes the weighted countermeasure loss to optimize the feature extractor module, but also utilizes the cross entropy loss and the information maximization loss to optimize the label predictor module, thereby effectively improving the performance of target image identification, effectively reducing the labels for the target image identification and greatly reducing manpower and material resources.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
Fig. 1 is a flow chart illustrating a transferable image recognition method according to an exemplary embodiment. As shown in fig. 1, the method includes:
step S101, determining the image type of an image input into an image recognition model, wherein the image types include labeled source domain images and unlabeled target domain images, and the image recognition model, as shown in fig. 2, comprises a feature extractor, a category predictor and a domain discriminator;
step S102, when the input image is a labeled source domain image, passing the labeled source domain image through the feature extractor and the category predictor, and determining a cross entropy loss;
step S103, when the input image is an unlabeled target domain image, passing the target domain image through the feature extractor and the domain discriminator, and simultaneously through the feature extractor and the category predictor;
step S104, determining an adversarial loss according to the output result of the domain discriminator and the similarity between the target domain image and the center point of each category of source domain images;
step S105, determining an information maximization loss according to the output result of the category predictor;
step S106, optimizing the image recognition model according to the cross entropy loss, the adversarial loss and the information maximization loss.
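Steps S101 to S106 can be sketched as a routing function over a batch (a simplified illustration using assumed names; the adversarial branch of step S104 is omitted for brevity):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def training_losses(logits, labels=None):
    # Routes a batch as in steps S101-S106: labeled source images yield a
    # cross entropy loss; unlabeled target images yield an information
    # maximization loss computed from the category-predictor outputs.
    p = softmax(logits)
    if labels is not None:  # labeled source domain image (step S102)
        one_hot = np.eye(logits.shape[1])[labels]
        return {"cross_entropy": -np.mean(np.sum(one_hot * np.log(p + 1e-12), axis=1))}
    # unlabeled target domain image (steps S103, S105)
    l_ent = -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))
    p_hat = p.mean(axis=0)
    l_div = np.sum(p_hat * np.log(p_hat + 1e-12))
    return {"information_maximization": l_ent + l_div}

src = training_losses(np.array([[4.0, 0.0]]), labels=np.array([0]))
tgt = training_losses(np.array([[4.0, 0.0], [0.0, 4.0]]))
```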
Compared with the prior art, the technical solution of the invention not only uses the weighted adversarial loss to optimize the feature extractor module, but also uses the cross entropy loss and the information maximization loss to optimize the category predictor module, thereby effectively improving the performance of target image recognition, reducing the number of labels required for target image recognition, and greatly saving manpower and material resources.
FIG. 3 is a flow chart illustrating another transferable image recognition method according to an exemplary embodiment.
As shown in fig. 3, in one embodiment, preferably, the method further comprises:
step S301, acquiring a target image to be identified;
step S302, identifying the target image according to the image recognition model to determine the category of the target image.
In one embodiment, preferably, the cross entropy loss is calculated using the following first calculation formula:

L_CE(D_s) = −E_{(x_s, y_s)∈D_s} Σ_{k=1}^{K} 1[k = y_s] log σ_k(C(G(x_s)))

wherein D_s represents all source domain images, L_CE(D_s) represents the cross entropy loss of all source domain images, E represents the expectation, x_s represents a source domain image, y_s represents the label category of the source domain image, 1[·] represents the indicator function, C(G(x_s)) represents the output of the source domain image after passing through the feature extractor G and the category predictor C, σ denotes the softmax function, log denotes the log function, and K represents the total number of image categories.
In one embodiment, preferably, determining the adversarial loss according to the output result of the domain discriminator and the similarity between the target domain image and the center point of each category of source domain images includes:
determining an initial adversarial loss from the output result of the domain discriminator, wherein the initial adversarial loss is calculated using the following second calculation formula:

L_d_initial(D_i) = d_i log D(G(x_t)) + (1 − d_i) log(1 − D(G(x_t)))

wherein L_d_initial(D_i) represents the initial adversarial loss of the ith target domain image, D_i represents the ith target domain image, x_t represents the ith target domain image, D(G(x_t)) represents the output of the ith target domain image after passing through the feature extractor and then the domain discriminator, which is equivalent to a binary classification problem, d_i represents the binary label of the ith target domain image, indicating whether the image belongs to the source domain or the target domain, and maximizing L_d_initial(D_i) enables the domain discriminator to perform feature-level alignment;
determining the cluster center of each category of images from the features of all source domain images output by the feature extractor, wherein the cluster center is calculated using the following third calculation formula:

c_k = ( Σ_{(x_s, y_s)∈D_s} 1[y_s = k] G(x_s) ) / ( Σ_{(x_s, y_s)∈D_s} 1[y_s = k] )

wherein c_k represents the cluster center of the kth category of images, x_s represents a source domain image, y_s represents the label category of the source domain image, D_s represents all source domain images, 1[·] represents the indicator function, and G(x_s) represents the feature of the source domain image output by the feature extractor;
calculating the similarity between each target domain image and its nearest cluster center, and taking the similarity as the weight of the target domain image's initial adversarial loss, wherein the weight is calculated using the following fourth calculation formula:

w_t = max_{k} D_f(G(x_t), c_k)

wherein w_t represents the weight corresponding to the initial adversarial loss of the ith target domain image, D_f represents the cosine similarity, c_k represents the cluster center of the kth category of images, and G(x_t) represents the feature of the ith target domain image;
calculating the adversarial loss corresponding to the target domain image according to the initial adversarial loss and its corresponding weight, wherein the adversarial loss is calculated using the following fifth calculation formula:

L_d(D_i) = w_t · L_d_initial(D_i)

wherein L_d(D_i) represents the adversarial loss of the ith target domain image, w_t represents the weight corresponding to the initial adversarial loss of the ith target domain image, x_t represents the ith target domain image, D(G(x_t)) represents the output result of the ith target domain image after passing through the feature extractor and the domain discriminator, and d_i represents the binary label of the ith target domain image.
In one embodiment, preferably, determining the information maximization loss according to the output result of the category predictor comprises:
calculating the entropy minimization loss and the class average entropy maximization loss of the target domain image according to the output result of the category predictor;
calculating the information maximization loss according to the entropy minimization loss and the class average entropy maximization loss;
wherein the entropy minimization loss is calculated using the following sixth calculation formula:

L_ent(D_t) = −E_{x_t∈D_t} Σ_{k=1}^{K} σ_k(C(G(x_t))) log σ_k(C(G(x_t)))

wherein L_ent(D_t) represents the entropy minimization loss, D_t represents all target domain images, σ denotes the softmax function, C(G(x_t)) represents the output of the target domain image after passing through the feature extractor and the category predictor, K represents the total number of image categories, E represents the expectation, and x_t represents a target domain image;
calculating the class average entropy maximization loss using the following seventh calculation formula:

L_div(D_t) = Σ_{k=1}^{K} p̂_k log p̂_k, with p̂_k = E_{x_t∈D_t}[σ_k(C(G(x_t)))]

wherein L_div(D_t) represents the class average entropy maximization loss and p̂_k represents the average softmax probability of all samples for the kth category;
wherein the information maximization loss is calculated using the following eighth calculation formula:

L_IM = L_ent + L_div

wherein L_IM represents the information maximization loss, L_ent(D_t) represents the entropy minimization loss, and L_div(D_t) represents the class average entropy maximization loss.
In one embodiment, preferably, optimizing the image recognition model according to the cross entropy loss, the adversarial loss, and the information maximization loss comprises:
determining the final model loss according to the cross entropy loss, the adversarial loss, and the information maximization loss, wherein the final model loss is calculated using the following ninth calculation formula:

L = L_CE(D_s) − L_d(D_t) + β·L_IM

wherein L represents the final model loss, L_CE(D_s) represents the cross entropy loss, L_d(D_t) represents the adversarial loss, L_IM represents the information maximization loss, and β represents the balancing parameter.
Fig. 4 is a block diagram illustrating a transferable image recognition apparatus according to an exemplary embodiment.
As shown in fig. 4, according to a second aspect of the embodiments of the present invention, there is provided a transferable image recognition apparatus, including:
a first determining module 41, configured to determine the image type of an image input into the image recognition model, where the image types include labeled source domain images and unlabeled target domain images, and the image recognition model includes a feature extractor, a category predictor, and a domain discriminator;
a first processing module 42, configured to, when the input image is a labeled source domain image, pass the labeled source domain image through the feature extractor and the category predictor, and determine a cross entropy loss;
a second processing module 43, configured to, when the input image is an unlabeled target domain image, pass the target domain image through the feature extractor and the domain discriminator, and simultaneously through the feature extractor and the category predictor;
a second determining module 44, configured to determine an adversarial loss according to the output result of the domain discriminator and the similarity between the target domain image and the center point of each category of source domain images;
a third determining module 45, configured to determine an information maximization loss according to the output result of the category predictor;
an optimization module 46, configured to optimize the image recognition model according to the cross entropy loss, the adversarial loss, and the information maximization loss.
Fig. 5 is a block diagram illustrating a transferable image recognition apparatus according to an exemplary embodiment.
As shown in fig. 5, in one embodiment, preferably, the apparatus further comprises:
an obtaining module 51, configured to obtain a target image to be identified;
and an identifying module 52, configured to identify the target image according to the image recognition model to determine the category of the target image.
In one embodiment, preferably, the cross entropy loss is calculated using the following first calculation formula:

L_CE(D_s) = −E_{(x_s, y_s)∈D_s} Σ_{k=1}^{K} 1[k = y_s] log σ_k(C(G(x_s)))

wherein D_s represents all source domain images, L_CE(D_s) represents the cross entropy loss of all source domain images, E represents the expectation, x_s represents a source domain image, y_s represents the label category of the source domain image, 1[·] represents the indicator function, C(G(x_s)) represents the output of the source domain image after passing through the feature extractor G and the category predictor C, σ denotes the softmax function, log denotes the log function, and K represents the total number of image categories.
In one embodiment, preferably, the second determining module is configured to:
determining an initial adversarial loss from the output result of the domain discriminator, wherein the initial adversarial loss is calculated using the following second calculation formula:

L_d_initial(D_i) = d_i log D(G(x_t)) + (1 − d_i) log(1 − D(G(x_t)))

wherein L_d_initial(D_i) represents the initial adversarial loss of the ith target domain image, D_i represents the ith target domain image, x_t represents the ith target domain image, D(G(x_t)) represents the output of the ith target domain image after passing through the feature extractor and then the domain discriminator, which is equivalent to a binary classification problem, d_i represents the binary label of the ith target domain image, indicating whether the image belongs to the source domain or the target domain, and maximizing L_d_initial(D_i) enables the domain discriminator to perform feature-level alignment;
determining the cluster center of each category of images from the features of all source domain images output by the feature extractor, wherein the cluster center is calculated using the following third calculation formula:

c_k = ( Σ_{(x_s, y_s)∈D_s} 1[y_s = k] G(x_s) ) / ( Σ_{(x_s, y_s)∈D_s} 1[y_s = k] )

wherein c_k represents the cluster center of the kth category of images, x_s represents a source domain image, y_s represents the label category of the source domain image, D_s represents all source domain images, 1[·] represents the indicator function, and G(x_s) represents the feature of the source domain image output by the feature extractor;
calculating the similarity between each target domain image and its nearest cluster center, and taking the similarity as the weight of the target domain image's initial adversarial loss, wherein the weight is calculated using the following fourth calculation formula:

w_t = max_{k} D_f(G(x_t), c_k)

wherein w_t represents the weight corresponding to the initial adversarial loss of the ith target domain image, D_f represents the cosine similarity, c_k represents the cluster center of the kth category of images, and G(x_t) represents the feature of the ith target domain image;
calculating the adversarial loss corresponding to the target domain image according to the initial adversarial loss and its corresponding weight, wherein the adversarial loss is calculated using the following fifth calculation formula:

L_d(D_i) = w_t · L_d_initial(D_i)

wherein L_d(D_i) represents the adversarial loss of the ith target domain image, w_t represents the weight corresponding to the initial adversarial loss of the ith target domain image, x_t represents the ith target domain image, D(G(x_t)) represents the output result of the ith target domain image after passing through the feature extractor and the domain discriminator, and d_i represents the binary label of the ith target domain image;
the third determining module is configured to:
calculating the entropy minimization loss and the class average entropy maximization loss of the target domain image according to the output result of the category predictor;
calculating the information maximization loss according to the entropy minimization loss and the class average entropy maximization loss;
wherein the entropy minimization loss is calculated using the following sixth calculation formula:

L_ent(D_t) = −E_{x_t ∈ D_t}[ Σ_{k=1..K} σ_k(H(G(x_t))) log σ_k(H(G(x_t))) ]

wherein L_ent(D_t) represents the entropy minimization loss, D_t represents all target domain images, σ denotes the softmax function, H(G(x_t)) represents the output of the target domain image after passing through the feature extractor and the class predictor, K represents the total number of image classes, E represents the expectation, and x_t represents a target domain image;
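The entropy minimization term can be sketched as the mean Shannon entropy of the predictor's softmax outputs (illustrative Python, not the claimed implementation):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of class scores.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy_min_loss(batch_logits):
    # Mean entropy of sigma(H(G(x_t))) over target images;
    # minimizing it pushes each prediction toward a single class.
    total = 0.0
    for logits in batch_logits:
        p = softmax(logits)
        total += -sum(q * math.log(q) for q in p if q > 0)
    return total / len(batch_logits)
```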
calculating the class average entropy maximization loss using the following seventh calculation formula:

L_div(D_t) = Σ_{k=1..K} p̂_k log p̂_k

wherein L_div(D_t) represents the class average entropy maximization loss, and p̂_k represents the kth element of the average softmax probability vector over all samples, i.e. p̂_k = E_{x_t ∈ D_t}[σ_k(H(G(x_t)))];
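The class average entropy term can be sketched as the negative entropy of the batch-mean softmax vector (illustrative Python; minimizing this value maximizes the class-average entropy):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of class scores.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def class_diversity_loss(batch_logits):
    # Average the softmax vectors over the batch, then return the
    # negative entropy of that mean vector; it is smallest when
    # predictions are spread evenly over the K classes.
    probs = [softmax(l) for l in batch_logits]
    k = len(probs[0])
    p_hat = [sum(p[j] for p in probs) / len(probs) for j in range(k)]
    return sum(q * math.log(q) for q in p_hat if q > 0)
```

A batch split evenly between two confident classes gives a mean vector near [0.5, 0.5] and a loss near −log 2, the minimum for K = 2.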
wherein the information maximization loss is calculated using the following eighth calculation formula:

L_IM = L_ent(D_t) + L_div(D_t)

wherein L_IM represents the information maximization loss, L_ent(D_t) represents the entropy minimization loss, and L_div(D_t) represents the class average entropy maximization loss;
the optimization module is configured to:
determining the model final loss according to the cross entropy loss, the countermeasure loss and the information maximization loss, wherein the model final loss is calculated using the following ninth calculation formula:

L = L_CE(D_s) − L_d(D_t) + βL_IM

wherein L represents the model final loss, L_CE(D_s) represents the cross entropy loss, L_d(D_t) represents the countermeasure loss, L_IM represents the information maximization loss, and β represents the balancing parameter.
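The combination of the three terms is a one-line sketch (the default value of beta is a tuning choice, not fixed by the text):

```python
def total_loss(l_ce, l_d, l_im, beta=0.1):
    # L = L_CE(D_s) - L_d(D_t) + beta * L_IM: the countermeasure loss
    # enters with a minus sign because the feature extractor is trained
    # adversarially against the domain discriminator, and beta balances
    # the information maximization term.
    return l_ce - l_d + beta * l_im
```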
According to a third aspect of embodiments of the present invention, there is provided a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method of any one of the first aspect.
According to a fourth aspect of the embodiments of the present invention, there is provided a migratable image recognition system, the system including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
determining an image type of an input image recognition model, wherein the image type comprises a labeled source domain image and an unlabeled target domain image, and the image recognition model comprises a feature extractor, a category predictor and a domain discriminator;
when the input image is a labeled source domain image, enabling the labeled source domain image to pass through the feature extractor and the category predictor, and determining cross entropy loss;
when the input image is an unlabeled target domain image, enabling the target domain image to pass through the feature extractor and the domain discriminator and simultaneously pass through the feature extractor and the class predictor;
determining the countermeasure loss according to the output result of the domain discriminator and the similarity of the central points of the target domain image and each source domain image;
determining information maximization loss according to an output result of the category predictor;
optimizing the image recognition model according to the cross entropy loss, the countermeasure loss and the information maximization loss.
It is further understood that the term "plurality" means two or more, and other terms are analogous. "And/or" describes the association relationship of the associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the terms "first," "second," and the like are used to describe various information and that such information should not be limited by these terms. These terms are only used to distinguish one type of information from another and do not denote a particular order or importance. Indeed, the terms "first," "second," and the like are fully interchangeable. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present invention.
It is further to be understood that while operations are depicted in the drawings in a particular order, this is not to be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain environments, multitasking and parallel processing may be advantageous.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.