Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 is a flowchart of an implementation of a neural network model training method according to an embodiment of the present invention, which is detailed as follows:
In S101, a neural network model for performing super-resolution processing on an image and an initial image for training the neural network model are acquired.
In this embodiment, the type of the neural network model for performing super-resolution processing on the image may be selected according to the pixel characteristics of the image to be processed. For example, the neural network model may be a convolutional neural network model (e.g., based on ResNet, FSRCNN, GoogLeNet, or similar models), an RNN, an LSTM network, and the like, which is not limited herein.
The input to the neural network model is a low-resolution image, and the goal is to output a high-quality, high-resolution image. For example, the input is image B of size a × b and the output is image C of size ma × mb, where m is the super-resolution coefficient. If m is 2, super-resolution processing enlarges the image to four times its original area; if m is 3, to nine times its original area.
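The size relationship above can be sketched as follows; `sr_output_size` is a hypothetical helper for illustration only, not part of the embodiment:

```python
def sr_output_size(a, b, m):
    """Size of the super-resolved output for an a x b input and coefficient m."""
    return (m * a, m * b)

# m = 2 quadruples the pixel count; m = 3 enlarges it ninefold
w, h = sr_output_size(100, 80, 2)
print(w, h, (w * h) // (100 * 80))  # 200 160 4
```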
A neural network model for performing super-resolution processing on images can first be established, and the network parameters of the model are then trained with training data and a deep learning algorithm. The training process uses a complex network structure to learn the relationships within the training data; therefore, the quality of the training data largely determines the quality of the neural network model.
In this embodiment, the training data comprises at least one image pair consisting of an initial image and a corresponding sampled image, where the sampled image is obtained by scaling down the initial image. In this step, an initial image for training the neural network model is first acquired, so that a corresponding sampled image can be generated from it in a subsequent step to form the training data.
In this embodiment, high resolution and low resolution are relative terms and are not defined by specific resolution values. For convenience of description, the two images are referred to as the initial image and the sampled image; these names do not limit the images themselves. For example, the higher-resolution image in an image pair of the training data is referred to as the initial image, and the lower-resolution image obtained by reducing the initial image is referred to as the sampled image.
In S102, noise is added to the initial image.
In practical applications, an image to be processed by the neural network model is generally an image captured by an image acquisition device such as a camera. Such images contain noise caused by factors such as the lens, signal conversion, and ambient light. Even if the image acquisition device applies image processing to suppress this noise, some noise remains in the images, which degrades the effect of super-resolution processing.
In this embodiment, to mitigate the influence of noise on image super-resolution processing, the training data of the neural network model may be processed so that the images input to the model during training also contain noise, making them close to the noisy images encountered in practical applications. A neural network model trained on such data achieves a better super-resolution effect on real-world images.
In this step, noise is added to the high-resolution image. The noise may be image noise such as Gaussian noise, Poisson noise, multiplicative noise, or salt-and-pepper noise, which is not limited herein. One type of noise may be added, or several types may be added.
As an embodiment of the present invention, S102 may include:
Noise is added to the initial image and blurring is applied.
In practical applications, the images to be processed by the neural network model may also be blurred, so both noise and blurring may be applied to the initial image. The noise may be added first and the blurring applied afterwards, or the blurring may be applied first and the noise added afterwards, which is not limited herein.
In this embodiment, blurring the initial image adds blur information to the training data, so that the generated sampled image is even closer to real-world images. This ensures that the super-resolution effect of the neural network model during training is consistent with its effect in practical applications, further improving the super-resolution performance of the trained model.
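The noise-plus-blur preprocessing can be sketched as below for a grayscale image. This is a minimal illustration: the Gaussian noise model, the box-blur kernel, and their order are assumptions of the sketch, not fixed by the method.

```python
import numpy as np

def add_noise_and_blur(img, noise_std=10.0, blur_size=3, seed=None):
    """Add Gaussian noise to a 2-D grayscale image, then apply a box blur.

    Illustrative sketch of the S102 preprocessing; the embodiment allows
    other noise types and the opposite noise/blur order.
    """
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, noise_std, img.shape)
    # simple box blur: average each blur_size x blur_size neighborhood
    pad = blur_size // 2
    padded = np.pad(noisy, pad, mode="edge")
    h, w = noisy.shape
    blurred = np.zeros_like(noisy)
    for dy in range(blur_size):
        for dx in range(blur_size):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= blur_size ** 2
    return np.clip(blurred, 0, 255).astype(np.uint8)
```

A Gaussian blur kernel could replace the box blur without changing the structure of the sketch.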
As an embodiment of the present invention, the noise is of at least one type, and S102 may include:
1) Adding each type of noise to the initial image separately, to obtain a noise image corresponding to each type of noise.
In this embodiment, one or more types of noise may be added: for example, only Gaussian noise; or three types, namely Gaussian noise, Poisson noise, and salt-and-pepper noise; or any other one or more types. Noise types and noise images correspond one to one. For example, adding Gaussian noise to the initial image yields the noise image corresponding to Gaussian noise; adding Poisson noise yields the noise image corresponding to Poisson noise; and adding salt-and-pepper noise yields the noise image corresponding to salt-and-pepper noise.
2) Subtracting, for each type of noise, the pixel value of each pixel in the initial image from the pixel value of the corresponding pixel in the noise image, to obtain the pixel difference of each pixel for that type of noise.
In this embodiment, for each type of noise, the pixel values of corresponding pixels in the noise image and the initial image are subtracted, giving the pixel difference of each pixel for that type of noise. The pixel differences are calculated separately for every type of noise.
3) Acquiring the preset weight of each type of noise, and computing a weighted average of the pixel differences of the same pixel across the noise types, to obtain the noise mean of each pixel.
For example, suppose there are three types of noise in total, A, B, and C, and the pixel differences of 50 pixels are calculated for each type. The noise mean of the i-th pixel then equals the weighted average of the pixel difference of the i-th pixel for type-A noise, for type-B noise, and for type-C noise, where the preset weight of each noise type serves as its weight in the weighted average.
4) Adding the noise mean of each pixel to the pixel value of the corresponding pixel of the initial image.
In this embodiment, the noise mean of each pixel is added to the pixel value of the corresponding pixel in the initial image, giving the pixel value of each pixel after the noise mean is applied and thereby the noise-added initial image.
In practical applications, an image captured by an image acquisition device such as a camera may contain several kinds of noise, and the noise varies with the environmental and capture parameters of the image. This embodiment calculates the pixel difference of each pixel for each type of noise, computes the noise mean of each pixel according to the preset weights, and adds the result to the initial image. By adjusting the preset weights, the resulting sampled image can be made closer to the images that actually require super-resolution processing; the weights can be set according to the capture parameters, environmental parameters, and so on of those images. The training method of this embodiment can therefore train super-resolution models in a targeted way for images captured in different environments, which enhances its applicability.
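Steps 1) through 4) can be sketched as follows. The `noise_fns` interface, the example noise callables, and the weight normalization are assumptions of this sketch, not specified by the embodiment.

```python
import numpy as np

def weighted_noise(img, noise_fns, weights, seed=None):
    """Steps 1)-4): build one noise image per noise type, take per-pixel
    differences against the initial image, weight-average the differences,
    and add the noise mean back to the initial image.

    noise_fns: callables taking (image, rng) and returning a noisy copy.
    weights: preset weight per noise type (normalized here for convenience).
    """
    rng = np.random.default_rng(seed)
    base = img.astype(np.float64)
    # 1) noise image per type; 2) per-pixel difference per type
    diffs = [fn(base, rng) - base for fn in noise_fns]
    # 3) weighted average of the differences at each pixel
    w = np.asarray(weights, dtype=np.float64)
    mean_noise = sum(wi * d for wi, d in zip(w / w.sum(), diffs))
    # 4) add the noise mean back to the initial image
    return np.clip(base + mean_noise, 0, 255).astype(np.uint8)

# illustrative noise types
gaussian = lambda im, rng: im + rng.normal(0, 8, im.shape)
salt_pepper = lambda im, rng: np.where(
    rng.random(im.shape) < 0.05,
    rng.choice([0.0, 255.0], size=im.shape),
    im,
)
```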
In S103, the initial image to which the noise is added is down-sampled to generate a sample image corresponding to the initial image.
In this embodiment, downsampling is an image sampling method that reduces the size of an image. The noise-added initial image is downsampled to generate the corresponding sampled image.
As an embodiment of the present invention, the "down-sampling processing the initial image after adding the noise" in S103 may include:
The noise-added initial image is downsampled using the nearest-neighbor sampling method.
The traditional training method downsamples the high-resolution image with bicubic interpolation to obtain the corresponding low-resolution image. Each pixel in such a low-resolution image is actually a fusion of several pixels of the high-resolution image, so the low-resolution image carries far more information than its actual size suggests: much high-resolution information is hidden in it. When a neural network model is trained on these high-resolution images and their low-resolution counterparts, the network mines the hidden information while learning its parameters, so the trained model performs well on the low-resolution images in the training data. However, in images captured by real devices such as cameras, a single pixel carries no such hidden information, so the trained model performs much worse on real-world images than on the training data.
In this embodiment, nearest-neighbor sampling is used instead of the conventional bicubic interpolation to downsample the noise-added initial image. Unlike bicubic interpolation, nearest-neighbor sampling discards information during sampling rather than fusing it, so the resulting sampled image contains much less hidden information and is closer to images captured by acquisition devices in practical applications.
In this embodiment, downsampling the noise-added initial image with nearest-neighbor sampling makes the generated sampled image closer to real-world images. This ensures that the super-resolution effect of the neural network model during training is consistent with its effect in practical applications, and further improves the super-resolution performance of the trained model.
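The contrast between the two sampling methods can be made concrete with a minimal nearest-neighbor sketch: each output pixel is a single pixel taken from the input, never a blend. Picking the top-left pixel of each block is an illustrative choice.

```python
import numpy as np

def nearest_downsample(img, m):
    """Nearest-neighbor downsampling by factor m: keep one pixel per m x m
    block instead of fusing neighbors as bicubic interpolation would."""
    return img[::m, ::m]

hi = np.arange(16, dtype=np.uint8).reshape(4, 4)
lo = nearest_downsample(hi, 2)  # 4x4 -> 2x2; pixels are taken, not blended
```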
In S104, the initial image and the corresponding sampled image are used as training data to train the neural network model.
In this embodiment, one initial image and its corresponding sampled image form one image pair of the training data, and the training data may include one or more such pairs. A sampled image is input into the neural network model, the image output by the model is compared with the initial image corresponding to that sampled image, and the network parameters are adjusted according to the comparison result. The optimal network parameters of the neural network model can be determined through training.
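The generation of one training pair (S101 through S104's input) can be sketched as a single pipeline. The Gaussian noise model is an illustrative assumption; the embodiment also allows other noise types, blurring, and the weighted multi-noise scheme described above.

```python
import numpy as np

def make_training_pair(initial, m, noise_std=5.0, seed=None):
    """Build one (initial, sampled) training pair: add noise to the initial
    image, then downsample with nearest-neighbor sampling by factor m.
    The initial image is the training target; the sampled image is the
    network input."""
    rng = np.random.default_rng(seed)
    noisy = initial.astype(np.float64) + rng.normal(0, noise_std, initial.shape)
    noisy = np.clip(noisy, 0, 255).astype(np.uint8)
    sampled = noisy[::m, ::m]        # nearest-neighbor downsampling
    return initial, sampled
```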
In the embodiment of the invention, noise is added to the initial image used for training the neural network model, the noise-added initial image is downsampled to generate the corresponding sampled image, and the initial image and the corresponding sampled image are used as training data to train the model, which improves the super-resolution effect of the trained model on real-world images. Because noise is added to the initial image, the generated sampled image is closer to the noisy images of practical applications, which ensures that the super-resolution effect of the model during training is consistent with its effect in practical applications and improves the super-resolution performance of the trained model on real-world images.
Optionally, when the trained neural network model is actually used to perform super-resolution processing on an image captured by an image acquisition device, the image may be read from the memory of the device and input into the model, rather than read from the device's album. The image in memory is uncompressed and its information is fully preserved, so inputting the in-memory image instead of the album image improves the super-resolution effect.
As an embodiment of the present invention, as shown in fig. 2, "training the neural network model" in S104 may include:
In S201, a loss function is established according to the similarity relationships of brightness, contrast, and structure between two images in the structural similarity theory.
In this embodiment, the structural similarity index (SSIM) is a measure of the similarity between two images. It is a full-reference image quality metric that evaluates similarity in three respects: brightness, contrast, and structure. The loss function estimates the degree of inconsistency between the predicted value and the true value of the model; it is a non-negative real-valued function, and the smaller the loss, the higher the accuracy of the model.
As an embodiment of the present invention, the loss function is:
Loss_new(A, C) = −l(A, C) − c(A, C) − s(A, C)   (1)
where A denotes a first image and C denotes a second image; l(A, C) is the brightness comparison function between the first image and the second image; c(A, C) is the contrast comparison function between the first image and the second image; and s(A, C) is the structure comparison function between the first image and the second image. The first image is the initial image, and the second image is the image output by the neural network model after the sampled image corresponding to the initial image is input into the model during training.
In this embodiment, the brightness comparison function characterizes the degree of brightness similarity between the two images, the contrast comparison function characterizes the degree of contrast similarity, and the structure comparison function characterizes the degree of structural similarity. The three functions are negated and summed to obtain the loss function. In this loss function, brightness, contrast, and structure all have weight 1, so the loss reflects the brightness, contrast, and structure similarity of the two images comprehensively and evenly, and a neural network trained with this loss can achieve a better super-resolution effect. It will be appreciated that the weights of brightness, contrast, and structure may also be adjusted.
As an embodiment of the present invention, the brightness comparison function is

l(A, C) = (2·μ_A·μ_C + K_1) / (μ_A² + μ_C² + K_1)

the contrast comparison function is

c(A, C) = (2·σ_A·σ_C + K_2) / (σ_A² + σ_C² + K_2)

and the structure comparison function is

s(A, C) = (σ_AC + K_3) / (σ_A·σ_C + K_3)

where μ_A is the pixel mean of the first image, μ_C is the pixel mean of the second image, σ_A is the pixel standard deviation of the first image, σ_C is the pixel standard deviation of the second image, and σ_AC is the pixel covariance of the first image and the second image; K_1, K_2, and K_3 are all constants.
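The loss of equation (1) can be sketched with global (whole-image) statistics. This is a simplifying assumption of the sketch: SSIM is normally evaluated over local windows, and the K_1, K_2, K_3 values below are illustrative.

```python
import numpy as np

def loss_new(a, c, k1=0.01, k2=0.03, k3=0.015):
    """Loss_new(A, C) = -l(A, C) - c(A, C) - s(A, C) from global statistics.

    a, c: images of equal shape. Constants are illustrative; a windowed
    SSIM would compute these quantities per local patch instead.
    """
    a = a.astype(np.float64)
    c = c.astype(np.float64)
    mu_a, mu_c = a.mean(), c.mean()
    sd_a, sd_c = a.std(), c.std()
    cov = ((a - mu_a) * (c - mu_c)).mean()               # sigma_AC
    l = (2 * mu_a * mu_c + k1) / (mu_a**2 + mu_c**2 + k1)
    con = (2 * sd_a * sd_c + k2) / (sd_a**2 + sd_c**2 + k2)
    s = (cov + k3) / (sd_a * sd_c + k3)
    return -(l + con + s)
```

For identical images each comparison function equals 1, so the loss attains its minimum of −3; the more dissimilar the images, the larger the loss.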
In S202, the neural network model is trained according to the training data and the loss function.
In this embodiment, after a sampled image of the training data is input into the neural network model, the third image output by the model after super-resolution processing of the sampled image is obtained. The loss value of the current model is calculated from the image information of the initial image corresponding to the sampled image, the image information of the third image, and the loss function; the parameters of the model are then adjusted according to this loss value. If the super-resolution effect of the adjusted model meets the requirement, the training process can end; otherwise, the training data is input again for further training.
In this embodiment, the idea of the structural similarity theory is incorporated into the construction of the loss function. Improving the loss function improves the training of the neural network model, so that the images output by the trained model are super-resolution images that better match human vision, which improves the super-resolution effect of the model.
Fig. 3 is a schematic diagram illustrating an implementation example of preprocessing an initial image according to an embodiment of the present invention. Fig. 3(a) shows an initial image, and fig. 3(b) shows an image obtained by adding noise to the initial image and performing blurring processing.
Fig. 4 is a schematic diagram illustrating the super-resolution effect of the trained neural network model according to the embodiment of the present invention. Figs. 4(a) and (c) are super-resolution images output by a neural network trained with the conventional training method, and figs. 4(b) and (d) are super-resolution images output by a neural network trained with the training method of this embodiment. As can be seen from fig. 4, the neural network model trained by the method of this embodiment has a better super-resolution effect. First, the line segments in figs. 4(b) and (d) are clearer and sharper. In particular, comparing fig. 4(b) with fig. 4(a), the circle in fig. 4(b) is much smaller than that produced by the conventional training method, which verifies the conclusion more intuitively. Second, the background noise in figs. 4(b) and (d) is greatly attenuated. These two points exactly match the original intention of the scheme and prove that the training method of this embodiment achieves its expected goals: it improves the training of the neural network model so that the trained model achieves a good super-resolution effect and an improved perceptual quality for the human eye.
The embodiment of the invention has the following advantages:
1. The downsampling method is changed from the traditional bicubic interpolation to nearest-neighbor sampling for the first time. Nearest-neighbor sampling simulates the situation in which the picture is actually captured by a camera, helping the network learn more information and restore a more realistic picture.
2. Noise and blur are added to the training data to simulate the hardware noise and blur that a real captured image may be subject to. The trained neural network model therefore gains denoising and deblurring capabilities and can restore image details to the greatest extent from minimal information, achieving a better super-resolution effect in practical applications.
3. In actual use, the data is taken from the internal memory of the image acquisition device instead of being read from the album. The image is therefore not compressed and its information is fully preserved, further improving the super-resolution effect of the neural network model.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 5 is a schematic diagram of a neural network model training apparatus according to an embodiment of the present invention, corresponding to the neural network model training method described in the foregoing embodiment. For convenience of explanation, only the portions related to the present embodiment are shown.
Referring to fig. 5, the apparatus includes an acquisition module 51, a preprocessing module 52, a sampling module 53, and a training module 54.
An obtaining module 51, configured to obtain a neural network model for performing super-resolution processing on an image and an initial image for training the neural network model.
A pre-processing module 52 for adding noise to the initial image.
And the sampling module 53 is configured to perform downsampling processing on the initial image to which the noise is added, and generate a sampled image corresponding to the initial image.
And a training module 54, configured to train the neural network model by using the initial image and the corresponding sampling image as training data.
Optionally, the preprocessing module 52 is configured to:
add noise to the initial image and apply blurring.
Optionally, the noise is of at least one type, and the preprocessing module 52 is configured to:
add each type of noise to the initial image separately, to obtain a noise image corresponding to each type of noise;
for each type of noise, subtract the pixel value of each pixel in the initial image from the pixel value of the corresponding pixel in the noise image, to obtain the pixel difference of each pixel for that type of noise;
acquire the preset weight of each type of noise, and compute a weighted average of the pixel differences of the same pixel across the noise types, to obtain the noise mean of each pixel;
and add the noise mean of each pixel to the pixel value of the corresponding pixel of the initial image.
Optionally, the sampling module 53 is configured to:
and carrying out downsampling processing on the initial image added with the noise according to a neighbor sampling method.
Optionally, the training module 54 is configured to:
establishing a loss function according to the similarity relation between the brightness, the contrast and the structure of two images in the structural similarity theory;
and training the neural network model according to the training data and the loss function.
Optionally, the loss function is:
Loss_new(A, C) = −l(A, C) − c(A, C) − s(A, C)
where A denotes a first image and C denotes a second image; l(A, C) is the brightness comparison function between the first image and the second image; c(A, C) is the contrast comparison function between the first image and the second image; and s(A, C) is the structure comparison function between the first image and the second image. The first image is the initial image, and the second image is the image output by the neural network model after the sampled image corresponding to the initial image is input into the model during training.
Optionally, the brightness comparison function is

l(A, C) = (2·μ_A·μ_C + K_1) / (μ_A² + μ_C² + K_1)

the contrast comparison function is

c(A, C) = (2·σ_A·σ_C + K_2) / (σ_A² + σ_C² + K_2)

and the structure comparison function is

s(A, C) = (σ_AC + K_3) / (σ_A·σ_C + K_3)

where μ_A is the pixel mean of the first image, μ_C is the pixel mean of the second image, σ_A is the pixel standard deviation of the first image, σ_C is the pixel standard deviation of the second image, and σ_AC is the pixel covariance of the first image and the second image; K_1, K_2, and K_3 are all constants.
In the embodiment of the invention, noise is added to the initial image used for training the neural network model, the noise-added initial image is downsampled to generate the corresponding sampled image, and the initial image and the corresponding sampled image are used as training data to train the model, which improves the super-resolution effect of the trained model on real-world images. Because noise is added to the initial image, the generated sampled image is closer to the noisy images of practical applications, which ensures that the super-resolution effect of the model during training is consistent with its effect in practical applications and improves the super-resolution performance of the trained model on real-world images.
Fig. 6 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 6, the terminal device 6 of this embodiment includes: a processor 60, a memory 61, and a computer program 62 stored in the memory 61 and executable on the processor 60. When executing the computer program 62, the processor 60 implements the steps of the method embodiments described above, such as steps S101 to S104 shown in fig. 1. Alternatively, when executing the computer program 62, the processor 60 implements the functions of the modules/units of the above device embodiments, such as the functions of the modules 51 to 54 shown in fig. 5.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 62 in the terminal device 6.
The terminal device 6 may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, a processor 60 and a memory 61. Those skilled in the art will appreciate that fig. 6 is merely an example of the terminal device 6 and does not constitute a limitation of it; the device may include more or fewer components than shown, a combination of some components, or different components. For example, the terminal device may also include input/output devices, a network access device, a bus, a display, and the like.
The processor 60 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, and the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used for storing the computer program and other programs and data required by the terminal device. The memory 61 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, each embodiment is described with its own emphasis; for parts not described or illustrated in detail in a given embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative; the division of the modules or units is only a logical functional division, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods of the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate in accordance with the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of protection of the present invention.