Detailed Description
In order to make the above objects, features and advantages of the present disclosure more comprehensible, embodiments accompanied with figures are described in detail below.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. The present disclosure may, however, be practiced otherwise than as described herein, and is therefore not limited to the specific embodiments disclosed below.
The present disclosure provides a face recognition method and system based on a living body detection technology.
Biometric recognition technology authenticates identity using biological characteristics of the human body. Compared with traditional authentication methods, which rely on identification objects (such as keys, certificates, and ATM cards) or identification knowledge (such as user names and passwords), biometric recognition is safe, confidential, and convenient. Biometric characteristics cannot be forgotten, are difficult to forge or steal, are always carried with the person, and are available at any time and any place.
Many kinds of biometric technologies are available today, such as fingerprint recognition, palm print (palm geometry) recognition, iris recognition, face recognition, voice recognition, signature recognition, and gene recognition.
The face recognition method and system based on the living body detection technology of the present disclosure uses living body detection to defend against spoofing, so that illegal individuals or institutions cannot use fake faces to make financial payments. The living body detection technology may be incorporated into different hardware devices, in particular payment devices such as ATMs, POS machines, personal computers, and handheld devices. Those skilled in the art will appreciate that the face recognition method and system based on the living body detection technology of the present disclosure may be incorporated into any other hardware device capable of supporting face recognition. The method and system may be applied by a service institution (such as a payee or another third-party institution). In the various embodiments of the present disclosure, a payee is described as a specific example, but it will be understood by those skilled in the art that the face recognition method and system based on the living body detection technology of the present disclosure may be applied by different institutions or individuals and in different scenarios.
Face recognition method based on living body detection technology
In current face recognition technology, the spoofing attacks to be prevented by living body detection include, for example, photo attacks and video attacks. Photo attacks are usually prevented by interactive-motion living body detection: a series of motion instructions is issued, and the user is asked to perform the corresponding motions. Video attacks are usually prevented by color texture analysis, since the picture quality of replayed video frames is lower, and their distortion higher, than for a real person.
The above living body detection techniques operate under visible light, so their performance is affected by factors such as illumination (e.g., day versus night, indoor versus outdoor), shading, and makeup, and may also vary with changes in expression, posture, and hairstyle. These effects and changes are relatively difficult to model, describe, and analyze.
The technical scheme of the present disclosure incorporates face recognition based on infrared images and adopts a multi-modal living body detection technology adapted to different scenes. Face recognition based on infrared images is independent of visible light sources, avoids the influence of illumination, and is suitable for preventing various spoofing attacks.
Face recognition technology based on infrared images is classified into near infrared (wavelength 0.7-1.0 μm) face recognition and far infrared (wavelength 8-1000 μm) face recognition. In near infrared face recognition, a near infrared light emitting diode with intensity higher than that of the ambient light is installed on the camera to ensure illumination, and the camera uses a long-pass filter that passes near infrared light but blocks visible light, thereby obtaining an environment-independent near infrared face image that changes only monotonically with the distance between the person and the camera. Near infrared face recognition can thus greatly reduce the influence of ambient illumination on the image.
Unlike near infrared face recognition, far infrared face recognition images the face by acquiring the thermal radiation it emits. Far infrared images, also known as thermograms, are formed from the temperature of the target. The facial thermogram is determined by the infrared thermal radiation of facial tissues and structures (such as blood vessel size and distribution), and is unique because each person's vascular distribution (the venous and arterial layout of the face) is unique, non-reproducible, and does not change with age.
Thermograms may be acquired by various temperature sensing devices, including thermal imagers (such as far infrared cameras), thermopiles, thermometers, and the like. The signals output by these devices come in a variety of forms, including dense thermal images and point signals, which will be collectively referred to hereinafter as facial thermal imaging images.
Fig. 1 shows an example of a cash register incorporating a thermal imaging camera, together with a gray-scale map it acquired. The cash register (shown on the left side of Fig. 1) is one example of a temperature sensing device that can acquire a thermogram.
In the application scenario of an unmanned supermarket, where payment is completed by the user in a self-service manner, a cash register incorporating a thermal imaging camera can perform face recognition that resists spoofing attacks even in weak light at night or in weak light caused by severe daytime weather.
Specifically, when a user checks out at a cash register incorporating a thermal imaging camera, the cash register may capture a thermal imaging image of the user's face, as shown on the right side of Fig. 1. As the gray-scale map shows, the distribution of gray (heat) levels associated with the face differs significantly from that of the background, which facilitates background removal. This is because the thermal emissivity of human facial skin is clearly distinguishable from that of the surrounding scene.
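The background-removal property described above can be illustrated with a small sketch. The function name, image values, and threshold rule are hypothetical (a real system would calibrate the threshold against its sensor); the point is simply that a warm face region separates from a cooler background by intensity thresholding of the thermogram.

```python
import numpy as np

def segment_face_region(thermal, threshold=None):
    """Separate a warm face region from a cooler background in a
    gray-scale thermogram (2-D array of intensity values)."""
    if threshold is None:
        # Mean plus one standard deviation is a crude but serviceable
        # starting point when the warm region is a minority of pixels.
        threshold = thermal.mean() + thermal.std()
    return thermal >= threshold

# Synthetic 8x8 thermogram: cool background (~60) with a warm
# face-like patch (~200) in the center.
img = np.full((8, 8), 60.0)
img[2:6, 2:6] = 200.0
mask = segment_face_region(img)
print(mask.sum())  # number of pixels classified as "face"
```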
Similarly, in access control scenarios with higher security requirements, a thermal imaging camera can be installed at the entrance to perform face recognition that resists spoofing attacks in weak light at night or in weak light caused by severe daytime weather.
In ATM scenarios with higher security requirements, other living body detection technologies, such as interactive living body detection and three-dimensional image acquisition, can be incorporated in addition to facial thermal imaging to perform fused face recognition. That is, exploiting the different characteristics and complementarity of thermal imaging face recognition and visible light face recognition, the classification and recognition results of the different face recognition methods are fused, improving both the performance and the recognition rate of face recognition.
Fig. 2 illustrates a flow chart 200 of a thermal imaging technology based face recognition method according to an embodiment of the present disclosure.
At 202, a real-time thermal imaging image of a face and a facial photograph are acquired.
As described above, the facial thermal imaging image can be obtained in real time through far infrared face recognition based on the temperature sensing device.
The facial photograph may be stored in a database, such as an identification card photograph or passport photograph associated with an electronic account or a bank account. The facial photograph may also be taken in real time, for example a facial photograph captured on the spot, multi-frame facial photographs accompanying an interactive action, or a depth image obtained with a three-dimensional camera.
The facial photos may be static or dynamic. The facial photograph may also be a plurality of continuous or discrete video frames.
The facial photograph may be obtained by a conventional camera. It may also be obtained through visible-light-based living body detection techniques, such as the multi-frame RGB images of interactive-motion living body detection, or three-dimensional images obtained through 3D image acquisition (e.g., a multi-view stereoscopic vision system).
It can be appreciated that when an application scenario involves a more complex background, image preprocessing is generally required for the acquired facial thermal imaging image and facial photograph: the face is first modeled (for example, by statistical-feature-based or knowledge-based modeling methods), and the degree of match between each region to be detected and the face model is compared to locate candidate face regions. In the present disclosure, detection and positioning of the face will not be described in detail; instead, image features are extracted and identified directly from the acquired facial thermal imaging image and facial photograph.
At 204, facial thermographic features are extracted and identified based on the facial thermographic image.
Extraction and identification of facial thermal imaging features can be accomplished in a variety of ways, such as isotherm matching methods, blood-flow-map-based methods, physiological-structure-based methods, traditional statistical recognition methods (principal component analysis (PCA), linear discriminant analysis (LDA), independent component analysis (ICA), etc.), and nonlinear-feature-subspace-based methods.
Taking the isotherm matching method as an example, facial isotherm features are extracted. Facial isotherms essentially reflect the vascular information under the skin of the face. Isotherm regions can be extracted using a standard template; the shape of each isotherm is analyzed by geometric analysis, the analysis result and the centroid of the face image are taken as features, and the isotherm is represented by a fractal method.
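As an illustration of the isotherm idea (a simplification, not the disclosure's exact algorithm), a thermogram can be quantized into isotherm bands, and simple shape features such as per-band centroids computed for matching. The function names and the toy temperature values are hypothetical.

```python
import numpy as np

def isotherm_bands(thermal, levels):
    """Assign each pixel to an isotherm band by quantizing its
    temperature against sorted level boundaries; pixels in the same
    band lie between the same two isotherms."""
    return np.digitize(thermal, sorted(levels))

def band_centroids(bands, n_levels):
    """Centroid (row, col) of each isotherm band, usable as a simple
    shape feature; None for empty bands."""
    cents = []
    for b in range(n_levels + 1):
        ys, xs = np.nonzero(bands == b)
        cents.append((ys.mean(), xs.mean()) if len(ys) else None)
    return cents

# Toy thermogram: temperature rises toward the center of the face.
t = np.array([[30, 31, 30],
              [31, 36, 31],
              [30, 31, 30]], dtype=float)
bands = isotherm_bands(t, levels=[31, 35])
print(bands)
print(band_centroids(bands, 2))
```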
From facial isotherm features, global and local features of the face can be extracted. For facial thermal imaging images, global features describe the main feature information, including overall information such as contours and the distribution of facial organs, while local features describe the detailed features of the face, such as organ characteristics and facial singularities like scars, moles, and dimples. In facial thermal imaging images, singular features such as scars and dimples can be extracted together with vascular (e.g., vessel intersection) information. Global features are used for coarse matching, and local features for fine matching.
Those skilled in the art will appreciate that different methods may be employed to extract and identify facial thermal imaging features for different application scenarios.
At 206, facial visible light imaging features are extracted and identified based on the facial photographs.
Common visible light image features are color features, texture features, shape features, and spatial relationship features.
The color feature is a global feature based on pixel points; every pixel belonging to the image or image region makes its own contribution. Texture features are also global features, requiring statistical calculation over regions containing multiple pixels. Shape features come in two classes: contour features, which concern the outer boundary of the object, and region features, which relate to the entire shape region. Spatial relationship features refer to the mutual spatial positions or relative directional relationships among the objects segmented from an image; these relationships may be classified as connection/adjacency, overlap/occlusion, inclusion/containment, and the like. Spatial relationship features can enhance the discriminative description of image content, but are often relatively sensitive to rotation, reflection, and scale changes of the image or object.
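Two of these global features can be sketched minimally, under the simplifying assumptions noted in the comments: a coarsely quantized color histogram as the color feature, and adjacent-pixel difference energy as a crude stand-in for a texture statistic. Both function names are illustrative.

```python
import numpy as np

def color_histogram(rgb, bins=4):
    """Global color feature: a joint histogram over quantized R, G, B
    channels, normalized so images of different sizes are comparable."""
    q = (rgb // (256 // bins)).reshape(-1, 3)
    hist = np.zeros((bins, bins, bins))
    for r, g, b in q:
        hist[r, g, b] += 1
    return hist.ravel() / hist.sum()

def texture_energy(gray):
    """Crude global texture feature: mean squared difference between
    horizontally adjacent pixels (smooth regions score near zero)."""
    diff = np.diff(gray.astype(float), axis=1)
    return float((diff ** 2).mean())

flat = np.full((4, 4), 100.0)          # uniform patch: no texture
noisy = np.tile([0.0, 255.0], (4, 2))  # alternating stripes: strong texture
print(texture_energy(flat), texture_energy(noisy))  # → 0.0 65025.0
```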
However, recognizing a person's facial features differs from general image recognition in that it is based on features specific to the face. The global features of the face describe the main feature information, including overall information such as skin color, contour, and the distribution of facial organs, while the local features describe the detailed features of the face, such as organ characteristics and facial singularities like scars, moles, and dimples. The former are used for coarse matching, and the latter for fine matching.
Extraction and recognition of facial visible light imaging features can be achieved in a variety of ways, such as fixed template matching based on geometric features, recognition methods based on algebraic features (e.g., pattern recognition based on the K-L transform, or the Fisher linear discriminant algorithm), and neural network learning methods based on connection mechanisms (e.g., the PCA+NN algorithm).
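As a sketch of the algebraic-feature (K-L transform, i.e. eigenface-style) family mentioned above, the following projects vectorized face images onto their top principal components and identifies a probe by nearest neighbor in the reduced space. The random vectors here are stand-ins for real face data, and the function names are illustrative.

```python
import numpy as np

def fit_eigenfaces(faces, k):
    """Learn the top-k principal components of a set of vectorized
    face images (eigenface-style K-L transform)."""
    mean = faces.mean(axis=0)
    # SVD of the centered data yields the principal axes directly.
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:k]

def project(x, mean, components):
    """Coordinates of a face in the reduced eigenface space."""
    return components @ (x - mean)

rng = np.random.default_rng(0)
train = rng.normal(size=(10, 64))  # stand-ins for 10 vectorized 8x8 faces
mean, comps = fit_eigenfaces(train, k=3)
codes = np.array([project(f, mean, comps) for f in train])

# A slightly perturbed copy of face 4 should come back as face 4
# under nearest-neighbor matching in eigenface space.
probe = train[4] + rng.normal(scale=0.01, size=64)
dists = np.linalg.norm(codes - project(probe, mean, comps), axis=1)
print(int(dists.argmin()))
```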
Different methods may be employed to extract and recognize different facial image features, and different combinations of features may be extracted in different scenes. Those skilled in the art will appreciate that the features extracted and combined may be adapted to each application scenario.
At 208, a determination is made as to whether the facial thermal imaging features and facial visible light imaging features match.
In one embodiment of the present disclosure, to determine whether the facial thermal imaging features and the facial visible light imaging features match, the global and local features of the face may be integrated and reduced in dimensionality (i.e., the image elements are projected into a low-dimensional space using linear or nonlinear processing methods) to build separate global and local classifiers.
The facial thermal imaging features obtained from the facial thermal imaging image and the facial visible light imaging features obtained from the facial visible light image are classified into global and local features, fed into the corresponding global and local classifiers, and the similarities output by the classifiers are weighted and summed to obtain a final similarity.
If the final similarity is high, the facial thermal imaging features and the facial visible light imaging features can be judged to match; if it is low, they can be judged not to match. It will be appreciated that a similarity threshold may be set: at or above the threshold the features are determined to match, and below it they are determined not to match.
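The weighted summation and thresholding step can be sketched as follows. The particular scores, weights, and threshold are illustrative, not values prescribed by the disclosure.

```python
def fuse_similarities(scores, weights, threshold=0.8):
    """Decide a match by weighted summation of per-classifier
    similarities. `scores` and `weights` are parallel lists; the
    weights sum to 1 so the fused score stays in [0, 1]."""
    assert abs(sum(weights) - 1.0) < 1e-9
    fused = sum(s * w for s, w in zip(scores, weights))
    return fused, fused >= threshold

# Global classifier (coarse match) weighted more heavily than the
# two local (fine match) classifiers.
fused, match = fuse_similarities(
    scores=[0.92, 0.85, 0.70],   # global, local-organ, local-singularity
    weights=[0.5, 0.3, 0.2],
)
print(round(fused, 3), match)  # → 0.855 True
```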
In another embodiment of the present disclosure, a non-linear feature subspace-based approach may be employed to determine whether facial thermal imaging features and facial visible light imaging features match.
First, the samples are mapped into a feature space using a kernel function, PCA is performed in the feature space, and a kernel feature subspace is obtained for each face class. The projection length of the face sample to be identified in each class's kernel feature subspace is then computed; the larger the projection length, the smaller the distance between the sample and that feature subspace. The face sample is finally classified and identified using the nearest neighbor criterion.
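A simplified linear analogue of this scheme is sketched below: each class gets a principal subspace, and the probe is assigned to the class onto whose subspace its projection is longest. A full kernel version would first map samples through a kernel function before the PCA step. The two synthetic "classes" and all function names are illustrative.

```python
import numpy as np

def class_subspace(samples, k):
    """Per-class feature subspace: the top-k principal directions of
    the class's centered training samples."""
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[:k]

def projection_length(x, mean, basis):
    """Length of x's projection onto a class subspace; a larger value
    means the sample lies closer to that subspace."""
    return float(np.linalg.norm(basis @ (x - mean)))

rng = np.random.default_rng(1)
# Two synthetic "face classes" whose energy lies along different axes.
class_a = rng.normal(size=(20, 16)) * ([5.0] + [0.3] * 15)
class_b = rng.normal(size=(20, 16)) * ([0.3] * 15 + [5.0])
subspaces = [class_subspace(c, k=1) for c in (class_a, class_b)]

# A probe aligned with class A's dominant direction.
probe = np.zeros(16)
probe[0] = 10.0
lengths = [projection_length(probe, m, b) for m, b in subspaces]
print(int(np.argmax(lengths)))  # → 0  (nearest-subspace decision)
```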
Those skilled in the art will appreciate that the extracted features may be compared in different ways for different application scenarios.
At 210, face recognition is successful if the facial thermal imaging features and facial visible light imaging features match. If the facial thermal imaging features and facial visible light imaging features do not match, face recognition fails.
If the facial thermal imaging features and the facial visible light imaging features match, the identified person is one who holds a legal document, may legally enter, or owns the electronic or bank account, and is a "true person".
If the facial thermal imaging features and the facial visible light imaging features do not match, the identified subject either does not hold a legal document, may not legally enter, or does not own the electronic or bank account, or is a fake face presented in a spoofing attack.
The thermal-imaging-based face recognition method of the present disclosure is in fact a multi-modal face recognition method based on living body detection, combining far infrared thermal imaging with visible light imaging. In one embodiment of the present disclosure, the visible light imaging technique may itself be a visible-light-based living body detection technique for resisting spoofing attacks, such as the multi-frame RGB images of interactive-motion living body detection, or three-dimensional images obtained through 3D image acquisition (e.g., a multi-view stereoscopic vision system). The combination of the two greatly improves the success rate in resisting spoofing attacks.
Fig. 3 illustrates a flowchart 300 of a multi-modal face recognition method based on living body detection technology according to another embodiment of the present disclosure.
At 302, an application scenario is analyzed.
Different application scenarios may have different lighting conditions, different security requirements, different device configurations, and so on. Lighting conditions vary from scene to scene: a 24-hour business scene may call for detection techniques that are independent of, or only weakly dependent on, illumination, while applications in completely dark scenes or harsh natural environments require detection techniques that do not rely on illumination at all.
Security requirements also vary from scene to scene. In the application scenario of an unmanned supermarket, since no sales personnel or cashiers are present, many kinds of spoofing attacks may need to be dealt with, so the security requirements are naturally higher than those of a business place where staff are present. Device configuration may in turn depend on budget, target users, intended hours of use, and the like.
At 304, at least one multi-modal living body detection technique is selected based on the application scenario, the selection including at least the facial thermal imaging technique.
In an embodiment of the present disclosure, in the application scenario of an unmanned supermarket, given that no sales personnel or cashiers are present, more kinds of spoofing attacks may need to be dealt with. Therefore, the facial thermal imaging technique is selected, and at least one of the interactive-motion living body detection technique, the three-dimensional image acquisition technique, the near infrared living body detection technique, and the like may further be selected and combined with it. One such combination is to acquire images with a near infrared living body detection technique together with one of the interactive-motion living body detection technique and the three-dimensional image acquisition technique.
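One way such scenario-driven selection might be organized is sketched below. The scenario names, profile fields, and selection rules are all hypothetical, chosen only to illustrate the pattern described above: facial thermal imaging is always included, and further living body detection modes are added as the scenario's risk profile demands.

```python
# Hypothetical scenario profiles; neither the names nor the rules are
# prescribed by the disclosure.
SCENARIOS = {
    "unmanned_supermarket": {"staffed": False, "security": "high"},
    "staffed_store":        {"staffed": True,  "security": "normal"},
}

def select_techniques(scenario):
    """Always include facial thermal imaging; add further living body
    detection modes according to the scenario's risk profile."""
    profile = SCENARIOS[scenario]
    chosen = ["facial_thermal_imaging"]
    if not profile["staffed"]:
        chosen.append("interactive_motion_liveness")
    if profile["security"] == "high":
        chosen.append("near_infrared_liveness")
    return chosen

print(select_techniques("unmanned_supermarket"))
```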
Those skilled in the art will appreciate that a decision maker (e.g., an investor or operator) may make other selections and combinations, such as combining three or even four techniques for multi-modal living body detection.
In another embodiment of the present disclosure, in an application scenario with higher security requirements (such as identity authentication for a high-end conference), at least the facial thermal imaging technique may likewise be selected, and at least one of the interactive-motion living body detection technique, the three-dimensional image acquisition technique, the near infrared living body detection technique, and the like may further be selected and combined with it.
It will be appreciated that the above multi-modal living body detection is based on image fusion techniques. Image fusion is an effective value-added technology: it uses redundant information to improve the reliability of interpretation and the robustness of the system, and uses complementary information to enhance the useful information in the image, improving system performance in terms of resolution, coverage, response time, and confidence.
Image fusion may be performed at multiple levels, namely pixel level fusion, feature level fusion and decision level fusion.
Pixel level fusion is the lowest level of fusion: the original image data from each sensor are fused first, and feature extraction and attribute decisions are then performed on the fused image data. However, pixel level fusion typically requires a certain similarity between the image data to be fused, demands strict registration between the images (the fusion result is sensitive to misregistration), and operates on the data most severely affected by noise and interference.
Feature level fusion belongs to the middle level: after pre-detection, segmentation, and feature extraction, and on the premise that the detections of the individual sensors are mutually independent, the extracted features are combined in a common decision space, and the selected targets are then optimally classified based on the combined feature vectors. Feature level fusion is mainly used for fusion between heterogeneous sensor images. Because extracting feature vectors from the original images introduces information loss, the accuracy of the fusion result is reduced to a certain extent.
Decision level fusion is a high-level form of information fusion: each sensor first makes an independent decision based on its own image data, and these decisions are then combined into a final decision, which enhances the interpretability of the imagery and yields a better understanding of the observed target. Although the accuracy of the fusion result is lowest at this level, decision level fusion is the most suitable for fusing sensor image data with large characteristic differences, such as the fusion of visible light and infrared images, or of image data with non-image data.
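Decision level fusion of independent accept/reject decisions can be sketched as a simple (weighted) vote. The weights below are illustrative, not values prescribed by the disclosure.

```python
def decision_level_fusion(decisions, weights=None):
    """Decision-level fusion: each sensor contributes an independent
    accept/reject decision; the fused result is a (weighted) majority
    vote over those decisions."""
    if weights is None:
        weights = [1.0] * len(decisions)
    score = sum(w for d, w in zip(decisions, weights) if d)
    return score >= sum(weights) / 2.0

# Thermal and near-infrared sensors accept, visible-light liveness
# rejects: the unweighted vote still accepts.
print(decision_level_fusion([True, False, True]))       # → True

# Down-weighting the first sensor flips a borderline case.
print(decision_level_fusion([True, False, False],
                            weights=[0.5, 1.0, 1.0]))   # → False
```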
Those skilled in the art will appreciate that the multi-modal living body detection of the present disclosure may employ feature level fusion or decision level fusion, and also that, besides selecting and combining the number of techniques employed, their outputs may be weighted differently.
At 306, a real-time thermal imaging image of the face and a photograph of the face are received.
As described above, the facial thermal imaging image can be obtained in real time through far infrared face recognition based on the temperature sensing device.
The facial photograph may be stored in a database, such as an identification card photograph or passport photograph associated with an electronic account or a bank account. The facial photograph may also be taken in real time, for example a facial photograph captured on the spot, multi-frame facial photographs accompanying an interactive action, or a depth image obtained with a three-dimensional camera.
The facial photos may be static or dynamic. The facial photograph may also be a plurality of continuous or discrete video frames.
The facial photograph may be obtained by a conventional camera. It may also be obtained through visible-light-based living body detection techniques, such as the multi-frame RGB images of interactive-motion living body detection, or three-dimensional images obtained through 3D image acquisition (e.g., a multi-view stereoscopic vision system).
It can be appreciated that when an application scenario involves a more complex background, image preprocessing is generally required for the acquired facial thermal imaging image and facial photograph: the face is first modeled (for example, by statistical-feature-based or knowledge-based modeling methods), and the degree of match between each region to be detected and the face model is compared to locate candidate face regions. In the present disclosure, detection and positioning of the face will not be described in detail; instead, image features are extracted and identified directly from the acquired facial thermal imaging image and facial photograph.
At 308, facial thermographic features are extracted and identified based on the facial thermographic image.
Global and local features of the face may be extracted from the facial thermal imaging image. For facial thermal imaging images, global features describe the main feature information, including overall information such as contours and the distribution of facial organs, while local features describe the detailed features of the face, such as organ characteristics and facial singularities like scars, moles, and dimples. In facial thermal imaging images, singular features such as scars and dimples can be extracted together with vascular (e.g., vessel intersection) information. Global features are used for coarse matching, and local features for fine matching.
Those skilled in the art will appreciate that different methods may be employed to extract and identify facial thermal imaging features for different application scenarios.
At 310, facial visible light imaging features are extracted and identified based on the facial photographs.
At 312, a determination is made as to whether the facial thermal imaging features and facial visible light imaging features match.
At 314, if the facial thermal imaging features and facial visible light imaging features match, face recognition is successful. If the facial thermal imaging features and facial visible light imaging features do not match, face recognition fails.
If the facial thermal imaging features and the facial visible light imaging features match, the identified person is one who holds a legal document, may legally enter, or owns the electronic or bank account, and is a "true person".
If the facial thermal imaging features and the facial visible light imaging features do not match, the identified subject either does not hold a legal document, may not legally enter, or does not own the electronic or bank account, or is a fake face presented in a spoofing attack.
The face recognition method based on living body detection technology of the present disclosure can be implemented using multi-modal living body detection. Notably, the thermal imaging living body detection incorporated in the present disclosure frees the technical solution from the limitations of illumination conditions, so that the application scene can be extended to completely dark scenes or harsh natural environments. The visible light imaging techniques incorporated in the present disclosure may themselves be visible-light-based living body detection techniques for resisting spoofing attacks. As those skilled in the art will appreciate, as face recognition technologies develop and diversify, the applications of the face recognition method based on living body detection technology of the present disclosure will diversify as well.
Face recognition system based on living body detection technology
Fig. 4 illustrates a block diagram 400 of a thermal imaging technology based face recognition system in accordance with an embodiment of the present disclosure.
The receiving module 402 receives a real-time thermal imaging image of a face and a photograph of the face.
The extraction module 404 extracts and identifies facial thermal imaging features based on the facial real-time thermal imaging images and facial visible light imaging features based on the facial photographs.
The analysis module 406 determines whether the facial thermal imaging features and facial visible light imaging features match.
In one embodiment of the present disclosure, when the analysis module 406 determines whether the facial thermal imaging features and facial visible light imaging features match, the analysis module 406 may integrate the global features and local features of the face and reduce the dimensions, building different global and local classifiers.
The analysis module 406 classifies the facial thermal imaging features obtained from the facial real-time thermal imaging image and the facial visible light imaging features obtained from the facial photograph into global and local features, feeds them into the corresponding global and local classifiers, and performs a weighted summation of the similarities output by the classifiers to obtain a final similarity.
The analysis module 406 may determine that the facial thermal imaging features and the facial visible light imaging features match if the final similarity is high, and that they do not match if it is low. It will be appreciated that a similarity threshold may be set: at or above the threshold the features are determined to match, and below it they are determined not to match.
In another embodiment of the present disclosure, the analysis module 406 may employ a non-linear feature subspace-based approach to determine whether facial thermal imaging features and facial visible light imaging features match.
The analysis module 406 first maps the samples into a feature space using a kernel function, performs PCA in the feature space, and obtains a kernel feature subspace for each face class. It then computes the projection length of the face sample to be identified in each class's kernel feature subspace; the larger the projection length, the smaller the distance between the sample and that feature subspace. Finally, it classifies and identifies the face sample using the nearest neighbor criterion.
Those skilled in the art will appreciate that the extracted features may be compared in different ways for different application scenarios.
Further, if the facial thermal imaging features and facial visible light imaging features match, the analysis module 406 determines that face recognition was successful; if the facial thermal imaging features and facial visible light imaging features do not match, the analysis module 406 determines that face recognition failed.
If the facial thermal imaging features match the facial visible light imaging features, then the person identified is one who holds a legal document, may legally enter, or has an electronic or bank account, i.e., a "true person".
If the facial thermal imaging features and the facial visible light imaging features do not match, then the person identified is one who does not hold a legal document, may not legally enter, or does not have an electronic or bank account, i.e., a "prosthesis".
Fig. 5 illustrates a block diagram 500 of a face recognition system based on a living body detection technology according to another embodiment of the present disclosure.
The selection module 502 analyzes the application scenario. Different application scenarios may have different lighting conditions, different security requirements, and different device configurations.
Further, the selection module 502 selects at least one multi-modal living body detection technique based on the application scenario, the multi-modal living body detection technique including at least a facial thermal imaging technique.
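The scenario-driven selection performed by the selection module 502 can be sketched as a simple rule table. The scenario fields (`lighting`, `security`) and the selection rules below are hypothetical assumptions for illustration; the disclosure only requires that facial thermal imaging always be among the selected modalities.

```python
def select_modalities(lighting: str, security: str) -> list:
    """Pick living body detection modalities for a scenario (illustrative rules).
    Facial thermal imaging is always included, matching the disclosure."""
    modalities = ["facial_thermal_imaging"]
    if lighting in ("normal", "bright"):
        # Visible light imaging only helps when the scene is illuminated.
        modalities.append("visible_light_imaging")
    if security == "high":
        # Hypothetical extra modality for high-security scenarios.
        modalities.append("iris_recognition")
    return modalities
```

For example, a completely dark ATM vestibule would fall back to thermal imaging alone, while a well-lit, high-security counter would stack all three modalities.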
The receiving module 504 receives the real-time thermal imaging image of the face and the facial photograph.
The extraction module 506 extracts and identifies facial thermal imaging features from the facial real-time thermal imaging image and facial visible light imaging features from the facial photograph.
The analysis module 508 determines whether the facial thermal imaging features and facial visible light imaging features match.
Further, if the facial thermal imaging features and the facial visible light imaging features match, the analysis module 508 determines that face recognition was successful; if the facial thermal imaging features and facial visible light imaging features do not match, the analysis module 508 determines that face recognition failed.
If the facial thermal imaging features match the facial visible light imaging features, then the person identified is one who holds a legal document, may legally enter, or has an electronic or bank account, i.e., a "true person".
If the facial thermal imaging features and the facial visible light imaging features do not match, then the person identified is one who does not hold a legal document, may not legally enter, or does not have an electronic or bank account, i.e., a "prosthesis".
Also, the face recognition system based on the living body detection technology of the present disclosure may be implemented using a multi-modal living body detection technique. Notably, the thermal imaging living body detection technology incorporated in the present disclosure frees the technical solution of the present disclosure from the limitations of illumination conditions, so that the application scenario can be expanded to a completely dark scene or a harsh natural environment. The visible light imaging technology incorporated in the present disclosure is a visible light-based living body detection technology for protection against prosthesis attacks. As will be appreciated by those skilled in the art, as face recognition technologies develop and diversify, the applications of the face recognition method based on the living body detection technology of the present disclosure will also diversify.
The steps and modules of the above-described living body detection technology-based face recognition method and system may be implemented in hardware, software, or a combination thereof. If implemented in hardware, the various illustrative steps, modules, and circuits described in connection with this disclosure may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic component, a hardware component, or any combination thereof. A general purpose processor may be a processor, microprocessor, controller, microcontroller, state machine, or the like. If implemented in software, the various illustrative steps and modules described in connection with this disclosure may be stored on a computer-readable medium or transmitted as one or more instructions or code. Software modules implementing various operations of the present disclosure may reside in storage media such as RAM, flash memory, ROM, EPROM, EEPROM, registers, hard disk, removable disk, CD-ROM, cloud storage, etc. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium, as well as execute corresponding program modules to implement the various steps of the present disclosure. Moreover, software-based embodiments may be uploaded, downloaded, or accessed remotely via suitable communication means. Such suitable communication means include, for example, the internet, world wide web, intranet, software applications, cable (including fiber optic cable), magnetic communications, electromagnetic communications (including RF, microwave, and infrared communications), electronic communications, or other such communication means.
It is also noted that the embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. Additionally, the order of the operations may be rearranged.
The disclosed methods, apparatus, and systems should not be limited in any way. Rather, the present disclosure encompasses all novel and non-obvious features and aspects of the various disclosed embodiments (both alone and in various combinations and subcombinations with one another). The disclosed methods, apparatus and systems are not limited to any specific aspect or feature or combination thereof, nor do any of the disclosed embodiments require that any one or more specific advantages be present or that certain or all technical problems be solved.
While the embodiments of the present disclosure have been described above with reference to the accompanying drawings, the present disclosure is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Many modifications may be made by those of ordinary skill in the art without departing from the spirit of the disclosure and the scope of the claims, and such modifications all fall within the scope of the present disclosure.