Disclosure of Invention
In view of the above, embodiments of the invention provide a vehicle model identification method, apparatus, device and medium based on experience replay, which improve identification accuracy.
A first aspect of the present invention provides a vehicle model identification method based on experience replay, comprising:
acquiring an original vehicle image; the original vehicle image comprises vehicle model information;
performing data expansion on the original vehicle image through a GAN network to obtain model sample data;
inputting the model sample data into an adversarial network with experience replay for training to obtain a target model;
and identifying the acquired vehicle image to be identified according to the target model, and determining the vehicle model in the vehicle image to be identified.
Optionally, the acquiring an original vehicle image includes:
crawling original vehicle images with known vehicle model information using a web crawler;
performing grayscale conversion, brightness normalization and contrast normalization on the original vehicle image to obtain a target image representing texture information;
inputting the target image into a pre-training network, and extracting feature blocks;
inputting the feature blocks into an SVM classifier for training to obtain a target SVM classifier;
inputting the target image into the target SVM classifier, and outputting probability labels of the various recognition results;
and according to the probability labels, calculating the recognition result of the vehicle model as the vehicle model information in the original vehicle image.
Optionally, the performing data expansion on the original vehicle image through a GAN network to obtain model sample data includes:
inputting noise data into the GAN network to obtain a test sample, and taking the original vehicle image as a training sample;
inputting the training sample and the test sample into an initial discriminator of the GAN network to obtain a discrimination result;
training the GAN network through the DQN network to obtain an ideal generator and an ideal discriminator;
generating a vehicle appearance image through the ideal generator, checking the generated vehicle appearance image through the ideal discriminator, and taking the checked vehicle appearance image as an expansion result of the original vehicle image.
Optionally, the inputting the model sample data into an adversarial network with experience replay for training to obtain a target model includes:
taking the model sample data as current state data;
inputting the current state data into an adversarial network with experience replay for training, and updating the Q-value function until it converges to obtain a converged neural network model;
and inputting the test sample into the neural network model, and testing the neural network model to obtain a target model.
Optionally, the training on the current state data in the adversarial network with experience replay, updating the Q-value function until the Q-value function converges, and obtaining a converged neural network model includes:
acquiring sample parameters of a vehicle signal picture, and generating a Markov decision process quadruple;
initializing a Q-Table in Prioritized Replay DQN;
the Q-Table is implemented in Prioritized Replay DQN.
Optionally, the implementing the Q-Table in Prioritized Replay DQN includes:
adopting a deep neural network as a Q-Table, and presetting target parameters;
defining an objective function as the 2-norm of the Q-value error;
calculating the gradient of the cost function with respect to the target parameters;
according to the gradient, obtaining the optimal Q value using stochastic gradient descent;
performing cyclic training on the deep neural network according to the optimal Q value;
and acquiring an experience replay data set, and updating all parameters of the Q-network by gradient back-propagation according to the objective function.
According to another aspect of the present invention, there is provided a vehicle model identification apparatus based on experience replay, comprising:
the acquisition module is used for acquiring an original vehicle image; the original vehicle image comprises vehicle model information;
the data expansion module is used for carrying out data expansion on the original vehicle image through a GAN network to obtain model sample data;
the training module is used for inputting the model sample data into an adversarial network with experience replay for training to obtain a target model;
the identification module is used for identifying the acquired vehicle image to be identified according to the target model and determining the vehicle model in the vehicle image to be identified.
According to another aspect of the present invention, there is provided an electronic device including a processor and a memory;
the memory is used for storing programs;
the processor executes the program to implement the method as described above.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing a program that is executed by a processor to implement a method as described above.
The method comprises the steps of firstly, acquiring an original vehicle image; then, performing data expansion on the original vehicle image through a GAN network to obtain model sample data; then inputting the model sample data into an adversarial network with experience replay for training to obtain a target model; and finally, identifying the acquired vehicle image to be identified according to the target model, and determining the vehicle model in the vehicle image to be identified. The embodiment of the invention improves the recognition capability for vehicle models, designs an expansion method for vehicle appearance image training samples, and on this basis uses experience replay to optimize the image generation results of the GAN for vehicle appearance samples whose generated results are not ideal.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Aiming at the problems existing in the prior art, the invention provides a vehicle model identification method, apparatus, device and medium based on experience replay. The invention uses a deep convolutional neural network to increase the vehicle model identification capability of a vehicle identification system, designs an expansion method for vehicle appearance image training samples based on a GAN (generative adversarial network), and on this basis uses experience replay to optimize the generated image results for vehicle appearance samples whose generated results are not ideal.
Referring to fig. 1, the method of the present invention comprises the steps of:
acquiring an original vehicle image; the original vehicle image comprises vehicle model information;
performing data expansion on the original vehicle image through a GAN network to obtain model sample data;
inputting the model sample data into an adversarial network with experience replay for training to obtain a target model;
and identifying the acquired vehicle image to be identified according to the target model, and determining the vehicle model in the vehicle image to be identified.
Specifically, the acquiring an original vehicle image includes:
crawling original vehicle images with known vehicle model information using a web crawler;
performing grayscale conversion, brightness normalization and contrast normalization on the original vehicle image to obtain a target image representing texture information;
inputting the target image into a pre-training network, and extracting feature blocks;
inputting the feature blocks into an SVM classifier for training to obtain a target SVM classifier;
inputting the target image into the target SVM classifier, and outputting probability labels of the various recognition results;
and according to the probability labels, calculating the recognition result of the vehicle model as the vehicle model information in the original vehicle image.
Specifically, the embodiment of the invention uses a web crawler to crawl vehicle images with definite model information, performs label classification on the images to form a training sample set, and prepares for training the GAN network and the SVM discriminant model. This specifically comprises the following steps 11-14:
and 11, preprocessing an image, namely only using texture information of the image to judge the model of the vehicle, and performing gray scale processing, brightness, contrast normalization and other processing on an image data set.
Step 12: input the image into the pre-training network DenseNet, detect the target, and extract feature blocks, which serve as block-feature training samples for the SVM (support vector machine) classifier. In a DenseNet, the input of each hidden layer is connected to the outputs of all previous layers; with x_l denoting the output of the l-th layer and H_l(·) denoting a composite operation comprising a series of BN-ReLU-Conv operations:
x_l = H_l([x_0, x_1, ..., x_{l-1}])
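The dense connectivity x_l = H_l([x_0, x_1, ..., x_{l-1}]) can be illustrated with a minimal numpy sketch, where a fixed random linear map followed by a ReLU stands in for the real BN-ReLU-Conv composite H_l:

```python
import numpy as np

def dense_block(x0, num_layers, growth):
    """Each layer receives the concatenation of all previous layers' outputs:
    x_l = H_l([x_0, x_1, ..., x_{l-1}])."""
    outputs = [x0]
    for l in range(num_layers):
        inp = np.concatenate(outputs, axis=-1)  # [x_0, x_1, ..., x_{l-1}]
        # Stand-in for H_l: a fixed random linear map producing `growth` channels.
        rng = np.random.default_rng(l)
        w = rng.standard_normal((inp.shape[-1], growth))
        outputs.append(np.maximum(inp @ w, 0.0))  # ReLU-like nonlinearity
    return np.concatenate(outputs, axis=-1)
```

Because every layer sees all earlier outputs, the channel count grows linearly with depth, which is the feature-reuse property the text describes.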
Step 13: after the vehicle image of the model to be identified has passed through steps 11-12, the obtained output is used as a block-feature test sample and input into the SVM classifier for classification; the probability labels of the feature-block recognition results are output, and the recognition result is obtained by computing over the block probability labels.
Step 14: the samples supplemented by the GAN network enter the sample set of vehicle images with definite models, so the block-feature training samples of the SVM classifier need to be continuously updated; the SVM classifier is iteratively retrained together with the feature-block recognition-result probability labels so as to maintain good classification performance.
The pre-training network DenseNet of the embodiment of the invention specifically comprises the following steps:
for the pre-training network DenseNet, in each processing module, the characteristic information can be transmitted forward from the lower layer to the higher layer through a direct channel, so that the higher layer can fully acquire the characteristics from the lower layer, the occurrence of a redundant layer is greatly reduced, the characteristic multiplexing is enhanced, and the anti-overfitting capability is stronger.
Optionally, the performing data expansion on the original vehicle image through a GAN network to obtain model sample data includes:
inputting noise data into the GAN network to obtain a test sample, and taking the original vehicle image as a training sample;
inputting the training sample and the test sample into an initial discriminator of the GAN network to obtain a discrimination result;
training the GAN network through the DQN network to obtain an ideal generator and an ideal discriminator;
generating a vehicle appearance image through the ideal generator, checking the generated vehicle appearance image through the ideal discriminator, and taking the checked vehicle appearance image as an expansion result of the original vehicle image.
Specifically, the GAN network augmenting the vehicle appearance image set includes the following steps 21-25:
Step 21: input noise z ~ P(z) into the GAN generator to obtain the output G(z) as the test sample.
Step 22, taking the real vehicle appearance image x in the training set as a training sample, and inputting the training sample and the test sample G (z) generated by the GAN generator into the GAN discriminator D to obtain a discrimination result D (G (z)).
Step 23: the DQN network trains the GAN. After multiple iterations, a near-ideal generator and a near-ideal discriminator are obtained, such that the probability with which the discriminator judges a generated image to be real approaches 0.5, i.e. the discriminator can no longer reliably tell generated images from real ones.
Step 24: continue learning with the trained generator and discriminator; generate vehicle appearance images with the generator, and when the discriminator's misjudgment probability is maintained at a stable level, permit the samples produced by the generator to be added to the vehicle appearance image set, completing the expansion.
Step 25: continuously monitor the states of the generator and the discriminator, compute rewards through the DQN network to iterate the generator and train the discriminator, and maintain the ideal state.
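The admission criterion of steps 23-24 — accept generated samples once the discriminator's judgments stabilize near 0.5 — might be sketched as follows; the tolerance threshold is an illustrative assumption:

```python
import numpy as np

def accept_generated(d_outputs, target=0.5, tol=0.05):
    """Admit a batch of generated samples into the image set when the
    discriminator's judgments D(G(z)) stay stably near 0.5, i.e. the
    generated images are indistinguishable from real ones."""
    d = np.asarray(d_outputs, dtype=float)
    stable = d.std() < tol                     # judgments hold at a stable level
    near_half = abs(d.mean() - target) < tol   # neither clearly real nor fake
    return bool(stable and near_half)
```

A monitoring loop (step 25) would call this check each iteration and only then merge the generator's output into the vehicle appearance image set.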
Optionally, the inputting the model sample data into the adversarial network with experience replay for training to obtain a target model includes:
taking the model sample data as current state data;
inputting the current state data into an adversarial network with experience replay for training, and updating the Q-value function until it converges to obtain a converged neural network model;
and inputting the test sample into the neural network model, and testing the neural network model to obtain a target model.
Specifically, the invention relates to an adversarial network training method based on deep reinforcement learning with experience replay, which specifically comprises the following steps 31-34:
step 31, a generator generates and inputs a model sample for training as current state data;
step 32, inputting each parameter into the deep reinforcement learning with experience replay for training; continuously updating the Q-value function until it converges, obtaining a converged neural network model;
step 33, inputting the dynamic parameters of the generated sample for testing into the obtained model;
step 34 iterates the GAN network so that the GAN network can output a sample picture close to the real vehicle model.
Optionally, the training on the current state data in the adversarial network with experience replay, updating the Q-value function until the Q-value function converges, and obtaining a converged neural network model includes:
acquiring sample parameters of a vehicle signal picture, and generating a Markov decision process quadruple;
initializing a Q-Table in Prioritized Replay DQN;
the Q-Table is implemented in Prioritized Replay DQN.
Specifically, the above step 32 includes:
obtaining sample parameters of a vehicle model picture, and generating a Markov decision process quadruple E = <S, A, P, R>, wherein S is the state set describing the probability output of the generated vehicle-model sample picture on the neural network, A is the sample picture generated by the generative adversarial network, P is the state transition function, and R is the reward function;
training the data with Prioritized Replay DQN; initializing the Q-Table, whose rows and columns correspond to S and A respectively, the value of the Q-Table measuring the quality of taking action a in the current state s; the Bellman equation is used to update the Q-Table during training:
Q(s, a) = r + γ·max(Q(s', a'))
wherein s is the current state, a is the action, s' is the next state, a' is an action that can be taken in the next state, Q(s, a) is the Q value after taking action a in the current state s, r is the actual reward value, γ is the discount factor, and max(Q(s', a')) is the maximum Q value of the next state;
in Prioritized Replay DQN, the Q-Table is implemented by a neural network that takes the state s as input and outputs the Q values of the different actions a.
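The tabular Bellman update described above can be sketched as follows (states, actions and rewards here are generic placeholders; with the update rate alpha = 1 this reduces exactly to Q(s,a) = r + γ·max(Q(s',a'))):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, gamma=0.9, alpha=1.0):
    """Bellman update on a Q-Table of shape (num_states, num_actions):
    Q(s,a) <- (1 - alpha) * Q(s,a) + alpha * (r + gamma * max_a' Q(s', a'))."""
    target = r + gamma * Q[s_next].max()
    Q[s, a] = (1.0 - alpha) * Q[s, a] + alpha * target
    return Q
```

Repeating this update over observed transitions drives the Q values toward the fixed point of the Bellman equation, which is the convergence condition used in step 32.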
Optionally, the implementing the Q-Table in Prioritized Replay DQN includes:
adopting a deep neural network as a Q-Table, and presetting target parameters;
defining an objective function as the 2-norm of the Q-value error;
calculating the gradient of the cost function with respect to the target parameters;
according to the gradient, obtaining the optimal Q value using stochastic gradient descent;
performing cyclic training on the deep neural network according to the optimal Q value;
and acquiring an experience replay data set, and updating all parameters of the Q-network by gradient back-propagation according to the objective function.
Specifically, the implementation of the Q-Table in the embodiment of the invention comprises the following steps (1)-(6):
(1) A deep neural network with parameters θ is adopted as the Q-Table:
Q(s, a, θ) ≈ Q_π(s, a)
(2) The objective function is defined as the 2-norm of the Q-value error:
L(θ) = ||r + γ·max_a' Q(s', a', θ) − Q(s, a, θ)||²
(3) The gradient of the cost function with respect to the parameters θ is calculated;
(4) Stochastic gradient descent is used to achieve the end-to-end optimization objective;
the gradient described above is computed from the deep neural network, and the parameters are updated by stochastic gradient descent so as to obtain the optimal Q value;
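Steps (2)-(4) amount to a stochastic-gradient step on the squared Bellman error. A minimal sketch, assuming for illustration a linear Q-function Q(s,a,θ) = φ(s,a)·θ with the target treated as fixed:

```python
import numpy as np

def sgd_step(theta, phi_sa, r, phi_next_actions, gamma=0.9, lr=0.01):
    """One SGD step on L(theta) = (r + gamma * max_a' Q(s',a',theta) - Q(s,a,theta))^2,
    where Q(s,a,theta) = phi(s,a) . theta and the target term is held fixed."""
    q_sa = phi_sa @ theta
    target = r + gamma * max(phi @ theta for phi in phi_next_actions)
    td_error = target - q_sa
    # dL/dtheta = -2 * td_error * phi(s,a); step against the gradient.
    theta = theta + lr * 2.0 * td_error * phi_sa
    return theta, td_error
```

A deep Q-network replaces the linear map with a neural network and obtains the same gradient by back-propagation, but the update rule is identical in form.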
(5) An action a_t is selected at random with probability ε, or the action a_t with the maximum Q value is selected according to the Q values output by the neural network; the reward r_t after executing a_t and the next network input are obtained, and the neural network computes the network output at the next time step from the current values, and so on in a loop.
The reward value in step (5) comprises: the mean-square value of the difference between the probability P_1 output by the neural network and the actual probability P_2, plus the percentage K of generated sample vehicle models among all vehicle models.
After several iterations of training, when the Q value representing the reward converges to its maximum, the allocation strategy is optimal.
(6) s_t, a_t, r_t, s_{t+1} and the termination flag are stored sequentially into the experience replay data set D. Once the data reach a certain number, m samples are continuously drawn from D, the current target Q value is calculated, and all parameters of the Q-network are updated by gradient back-propagation; meanwhile, the current state is set to s = s_{t+1}. If s is a terminal state, or the number of iteration rounds T is reached, the current iteration ends; otherwise, go to step (5) and continue iterating. The specific method is as follows:
In the process of continuous iterative updating, at each step t the five-tuple {s_t, a_t, r_t, s_{t+1}, done} formed by s_t, a_t, r_t, s_{t+1} and the termination flag done is stored in the experience replay set D. When the stored quantity reaches the capacity of the replay set D, old data are rolled out to make room for new data, ensuring the validity of the samples in D. Once the number of samples reaches the mini-batch size m, m samples (j = 1, 2, ..., m) are randomly drawn from D, and the current target Q value y_j corresponding to each sample is calculated. The states s_t in the experience replay data are converted into labels and stored for business data analysis.
All parameters θ of the Q network are updated by gradient back propagation of the neural network using the mean square error loss function L (θ).
The following describes in detail the steps of the vehicle model identification method of the present invention with reference to fig. 2:
step 1, extracting and identifying appearance features of a vehicle:
the embodiment of the invention uses a crawler technology to crawl the vehicle images and model information with definite models, and carries out label classification on the images to form trainingSample ofSet, trainingGAN networkAnd an SVM discriminant model, comprising the following specific steps:
and 11, preprocessing an image, namely only using texture information of the image to judge the model of the vehicle, and performing gray scale processing, brightness, contrast normalization and other processing on an image data set.
Step 12: input the image into the pre-training network DenseNet, detect the target, and extract feature blocks, which serve as block-feature training samples for the SVM (support vector machine) classifier. In a DenseNet, the input of each hidden layer is connected to the outputs of all previous layers; with x_l denoting the output of the l-th layer and H_l(·) denoting a composite operation comprising a series of BN-ReLU-Conv operations:
x_l = H_l([x_0, x_1, ..., x_{l-1}])
Step 13: after the vehicle image of the model to be identified has passed through steps 11-12, the obtained output is used as a block-feature test sample and input into the SVM classifier for classification; the probability labels of the feature-block recognition results are output, and the recognition result is obtained by computing over the block probability labels.
Step 14: the samples supplemented by the GAN network enter the sample set of vehicle images with definite models, so the block-feature training samples of the SVM classifier need to be continuously updated; the SVM classifier is iteratively retrained together with the feature-block recognition-result probability labels so as to maintain good classification performance.
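The final computation of step 13 — deriving a recognition result from the per-block probability labels — might look like the following sketch; averaging the block probabilities is one plausible aggregation rule, as the text does not fix the exact calculation:

```python
import numpy as np

def aggregate_blocks(block_probs, class_names):
    """Average the SVM probability labels over all feature blocks and
    return the class with the highest mean probability."""
    mean_probs = np.asarray(block_probs).mean(axis=0)  # shape: (n_classes,)
    return class_names[int(mean_probs.argmax())], mean_probs
```

Each row of `block_probs` is one feature block's probability label over the candidate vehicle models; the class names here are illustrative placeholders.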
The pre-training network DenseNet in step 12 specifically includes:
for the pre-training network DenseNet, the structure is shown in fig. 3, in each processing module, the characteristic information can be transmitted forward from the lower layer to the higher layer through a direct channel, so that the higher layer can fully acquire the characteristics from the lower layer, the occurrence of a redundant layer is greatly reduced, the characteristic multiplexing is enhanced, and the anti-overfitting capability is stronger.
Step 2, the GAN network expands the vehicle appearance image set
Step 21: input noise z ~ P(z) into the GAN generator to obtain the output G(z) as the test sample.
Step 22: take the real vehicle appearance image x in the training set as a training sample, and input the training sample and the test sample G(z) produced by the GAN generator into the GAN discriminator D to obtain the discrimination result D(G(z)).
Step 23: the DQN network trains the GAN. After multiple iterations, a near-ideal generator and a near-ideal discriminator are obtained, such that the probability with which the discriminator judges a generated image to be real approaches 0.5, i.e. the discriminator can no longer reliably tell generated images from real ones.
Step 24: continue learning with the trained generator and discriminator; generate vehicle appearance images with the generator, and when the discriminator's misjudgment probability is maintained at a stable level, permit the samples produced by the generator to be added to the vehicle appearance image set, completing the expansion.
Step 25: continuously monitor the states of the generator and the discriminator, compute rewards through the DQN network to iterate the generator and train the discriminator, and maintain the ideal state.
Step 3: the adversarial network training method based on deep reinforcement learning with experience replay, as shown in fig. 4, specifically includes:
step 31, a generator generates and inputs a model sample for training as current state data;
step 32, inputting each parameter into the deep reinforcement learning with experience replay for training; continuously updating the Q-value function until it converges, obtaining a converged neural network model;
step 33, inputting the dynamic parameters of the generated sample for testing into the obtained model;
step 34 iterates the GAN network so that the GAN network can output a sample picture close to the real vehicle model.
The step 32 specifically includes:
obtaining sample parameters of a vehicle model picture, and generating a Markov decision process quadruple E = <S, A, P, R>, wherein S is the state set describing the probability output of the generated vehicle-model sample picture on the neural network, A is the sample picture generated by the generative adversarial network, P is the state transition function, and R is the reward function;
training the data with Prioritized Replay DQN; initializing the Q-Table, whose rows and columns correspond to S and A respectively, the value of the Q-Table measuring the quality of taking action a in the current state s; the Bellman equation is used to update the Q-Table during training:
Q(s, a) = r + γ·max(Q(s', a'))
wherein s is the current state, a is the action, s' is the next state, a' is an action that can be taken in the next state, Q(s, a) is the Q value after taking action a in the current state s, r is the actual reward value, γ is the discount factor, and max(Q(s', a')) is the maximum Q value of the next state;
in Prioritized Replay DQN, the Q-Table is implemented by a neural network that takes the state s as input and outputs the Q values of the different actions a; the specific implementation process is as follows:
(1) A deep neural network with parameters θ is adopted as the Q-Table:
Q(s, a, θ) ≈ Q_π(s, a)
(2) The objective function is defined as the 2-norm of the Q-value error:
L(θ) = ||r + γ·max_a' Q(s', a', θ) − Q(s, a, θ)||²
(3) The gradient of the cost function with respect to the parameters θ is calculated;
(4) Stochastic gradient descent is used to achieve the end-to-end optimization objective;
the gradient described above is computed from the deep neural network, and the parameters are updated by stochastic gradient descent so as to obtain the optimal Q value;
(5) An action a_t is selected at random with probability ε, or the action a_t with the maximum Q value is selected according to the Q values output by the neural network; the reward r_t after executing a_t and the next network input are obtained, and the neural network computes the network output at the next time step from the current values, and so on in a loop.
The reward value in step (5) comprises: the mean-square value of the difference between the probability P_1 output by the neural network and the actual probability P_2, plus the percentage K of generated sample vehicle models among all vehicle models.
After several iterations of training, when the Q value representing the reward converges to its maximum, the allocation strategy is optimal.
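The selection rule in step (5) is the standard ε-greedy policy, which can be sketched as follows (the random source and Q-value container are illustrative):

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng):
    """With probability epsilon pick a random action a_t; otherwise pick
    the action with the maximum Q value output by the network."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))  # explore
    return int(np.argmax(q_values))              # exploit
```

Annealing ε from a high to a low value over training shifts the policy from exploration toward exploitation as the Q values converge.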
(6) s_t, a_t, r_t, s_{t+1} and the termination flag are stored sequentially into the experience replay data set D. Once the data reach a certain number, m samples are continuously drawn from D, the current target Q value is calculated, and all parameters of the Q-network are updated by gradient back-propagation; meanwhile, the current state is set to s = s_{t+1}. If s is a terminal state, or the number of iteration rounds T is reached, the current iteration ends; otherwise, go to step (5) and continue iterating. The specific method is as follows:
In the process of continuous iterative updating, at each step t the five-tuple {s_t, a_t, r_t, s_{t+1}, done} formed by s_t, a_t, r_t, s_{t+1} and the termination flag done is stored in the experience replay set D. When the stored quantity reaches the capacity of the replay set D, old data are rolled out to make room for new data, ensuring the validity of the samples in D. Once the number of samples reaches the mini-batch size m, m samples (j = 1, 2, ..., m) are randomly drawn from D, and the current target Q value y_j corresponding to each sample is calculated. The states s_t in the experience replay data are converted into labels and stored for business data analysis.
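The replay set D of step (6) — fixed capacity, old data rolled out, mini-batches of m sampled — can be sketched as a plain uniform-sampling buffer; a true Prioritized Replay DQN would additionally weight the sampling by TD error:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience replay set D storing five-tuples
    {s_t, a_t, r_t, s_{t+1}, done}."""
    def __init__(self, capacity):
        self.data = deque(maxlen=capacity)  # oldest tuples roll out automatically

    def store(self, s, a, r, s_next, done):
        self.data.append((s, a, r, s_next, done))

    def sample(self, m):
        # Draw a mini-batch of m transitions uniformly at random.
        return random.sample(list(self.data), m)
```

Sampling transitions at random from D rather than consuming them in order breaks the temporal correlation of consecutive samples, which is the purpose of experience replay.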
All parameters θ of the Q network are updated by gradient back propagation of the neural network using the mean square error loss function L (θ).
In summary, the embodiment of the invention improves the recognition capability for vehicle models, designs an expansion method for vehicle appearance image training samples, and on this basis uses experience replay to optimize the generated image results of the GAN for vehicle appearance samples whose generated results are not ideal.
The embodiment of the invention also provides a vehicle model identification device based on experience replay, which comprises:
the acquisition module is used for acquiring an original vehicle image; the original vehicle image comprises vehicle model information;
the data expansion module is used for carrying out data expansion on the original vehicle image through a GAN network to obtain model sample data;
the training module is used for inputting the model sample data into an adversarial network with experience replay for training to obtain a target model;
the identification module is used for identifying the acquired vehicle image to be identified according to the target model and determining the vehicle model in the vehicle image to be identified.
The embodiment of the invention also provides electronic equipment, which comprises a processor and a memory;
the memory is used for storing programs;
the processor executes the program to implement the method as described above.
The embodiment of the invention also provides a computer readable storage medium storing a program, which is executed by a processor to implement the method as described above.
Embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the method shown in fig. 1.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the invention is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the described functions and/or features may be integrated in a single physical device and/or software module or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the invention, which is to be defined in the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part thereof contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber device, and a portable Compact Disc Read-Only Memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one of, or a combination of, the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, Programmable Gate Arrays (PGAs), Field Programmable Gate Arrays (FPGAs), and the like.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "example," "specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While the preferred embodiments of the present invention have been shown and described in detail, it will be understood by those of ordinary skill in the art that many changes, modifications, substitutions, and variations may be made to the embodiments without departing from the spirit and principles of the invention, and that such equivalent modifications and substitutions are intended to be included within the scope of the present invention as defined by the appended claims and their equivalents.