Disclosure of Invention
In view of this, the disclosure provides a positioning result visualization method and device based on a virtual intelligent medical platform.
According to an aspect of the present disclosure, there is provided a positioning result visualization method based on a virtual intelligent medical platform, including:
according to the target object data, obtaining a three-dimensional visual virtual image;
performing virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result;
combining the three-dimensional model of the accelerator beam in the virtual image with the registration result, and rendering to obtain a positioning result;
and displaying the positioning result in the radiotherapy positioning process.
In one possible implementation manner, the obtaining a three-dimensional visualized virtual image according to the target object data includes:
obtaining target object DICOM RT (Radiotherapy in DICOM) data through a DICOM network;
extracting the target object data according to the DICOM RT data;
establishing an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data;
and obtaining the three-dimensional visualized virtual image according to the accelerator beam three-dimensional model and the target object related three-dimensional model.
In one possible implementation manner, the building the accelerator beam three-dimensional model and the target object related three-dimensional model according to the target object data includes:
analyzing the target object data to obtain radiotherapy related data;
establishing corresponding three-dimensional model data according to the radiotherapy related data;
and converting the three-dimensional model data into a specified format to obtain the accelerator beam three-dimensional model and the target object related three-dimensional model.
In a possible implementation manner, the performing virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result includes:
acquiring a real-time picture of the real positioning scene;
obtaining feature points of the real positioning scene according to the real-time picture;
and matching the three-dimensional model related to the target object in the virtual image to the corresponding position in the real positioning scene according to the feature points to obtain a registration result.
In one possible implementation, the feature points correspond to position markers that are added to the target object's skin during a computed tomography (CT) scan.
In one possible implementation manner, the displaying the positioning result during the radiotherapy positioning process includes:
determining at least one target position according to the position and the view angle of the target object in the real radiotherapy scene;
and displaying the positioning result at the target position through a display device.
In one possible implementation, the target object data includes: basic information of a target object, CT image data, planning information, structure set information and dose information;
the target object-related three-dimensional model includes: a target region three-dimensional model, a ROI region three-dimensional model and a dose distribution three-dimensional model.
According to another aspect of the present disclosure, there is provided a positioning result visualization device based on a virtual intelligent medical platform, including:
the virtual image construction module is used for obtaining a three-dimensional visualized virtual image according to the target object data;
the virtual-real registration module is used for carrying out virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result;
the rendering module is used for combining the three-dimensional model of the accelerator beam in the virtual image and the registration result, and rendering to obtain a positioning result;
and the display module is used for displaying the positioning result in the radiation treatment positioning process.
According to another aspect of the present disclosure, there is provided a positioning result visualization device based on a virtual intelligent medical platform, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, by combining a mixed reality technology, the information such as tumor, rays and the like is visualized, so that three-dimensional holographic display of medical images, three-dimensional display of radiotherapy plans and visual display of positioning results are realized; the target object can observe the positioning result more intuitively and more efficiently, clearly know the positioning condition, confirm the positioning completion degree and reduce the positioning error. Meanwhile, the display information can be used for assisting communication, so that the positioning efficiency is improved. In addition, doctors can correct the positioning result through display information, so that the positioning accuracy is improved.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
With the change of the disease spectrum, malignant tumors have become the leading threat to human health; about two thirds of patients will receive radiation therapy, for curative or palliative purposes among others, at some point during the course of the tumor.
In current clinical practice, the positioning result is conveyed to the patient orally by the technician. However, because the tumors, normal tissues and rays inside the human body are invisible to the naked eye, and most patients have no medical background, patients cannot clearly understand the specific situation from the technician's oral description, and their fear of and worry about the tumor and radiotherapy cannot be relieved. As a result, various physical and psychological symptoms and adverse psychological reactions arise during the implementation stage of radiotherapy, seriously affecting the patient's quality of life, treatment compliance and even the treatment effect.
Therefore, a technical scheme for visualizing the positioning result is provided: by combining mixed reality technology, information such as the tumor and the rays is visualized, so that three-dimensional holographic display of medical images, three-dimensional display of radiotherapy plans and visual display of the positioning result are realized; the patient can observe the positioning result more intuitively and efficiently, clearly understand the positioning situation, confirm the degree of positioning completion and reduce positioning errors. Meanwhile, the displayed information can be used to assist communication, thereby improving the positioning efficiency. In addition, doctors can correct the positioning result according to the displayed information, thereby improving the positioning accuracy.
Fig. 1 illustrates a flowchart of a method for visualizing a positioning result based on a virtual intelligent medical platform according to an embodiment of the present disclosure. As shown in fig. 1, the method may include:
step 10, obtaining a three-dimensional visualized virtual image according to target object data;
step 20, performing virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result;
step 30, combining the three-dimensional model of the accelerator beam in the virtual image and the registration result, and rendering to obtain a positioning result;
and step 40, displaying the positioning result in the radiotherapy positioning process.
The virtual intelligent (Virtual Intelligent, VI) medical platform is a medical platform constructed by combining methods such as artificial intelligence and big data analysis with holographic technologies such as virtual reality, augmented reality and mixed reality; it is used to assist and guide invasive, minimally invasive and non-invasive clinical diagnosis and treatment processes and to assist in the diagnosis and treatment of patients, and can be applied to fields including but not limited to surgery, internal medicine, the radiotherapy department and the interventional department. The positioning result refers to the result obtained in the radiation therapy positioning process: a doctor first delineates the tumor on an image in the planning system, thereby determining the patient's tumor center coordinates, and a medical physicist and an operator then place the patient's tumor center at the treatment center (including the isocenter) of the radiotherapy device according to the tumor center coordinates.
Therefore, based on the virtual intelligent medical platform, the data information of the existing target object in the hospital is analyzed and converted into the three-dimensional visual virtual image, the virtual image is matched with the real scene through the virtual intelligent technology and is displayed on the display terminal, so that three-dimensional holographic display of the medical image, three-dimensional display of a radiotherapy plan and visual display of the positioning result are realized, the target object can observe the positioning result more intuitively and more efficiently, positioning errors are reduced, and positioning efficiency is improved.
The above-mentioned positioning result visualization scheme based on the virtual intelligent medical platform is illustrated in the following with reference to fig. 2 and 3.
Fig. 2 illustrates a schematic diagram of device connection for positioning result visualization according to an embodiment of the present disclosure, as illustrated in fig. 2, the device for positioning result visualization may include: the system comprises an image acquisition device (namely a camera 01, a camera 02 and a camera 03 in the figure), a display device (namely the display device 01 and the display device 02 in the figure), a processing device PC, a server and an in-hospital information system; fig. 3 shows a schematic view of a radiotherapy positioning result visualization scene according to an embodiment of the present disclosure, as shown in fig. 3, including: image acquisition equipment (namely camera 01, camera 02 and camera 03 in the figure), display equipment (namely a display in the figure), a PC, a server, an in-hospital information system and an accelerator.
In fig. 2 and fig. 3, the image acquisition device is configured to capture pictures of the real positioning scene in real time and transmit them to the PC in a wired or wireless manner; the PC and the server acquire target object data through the in-hospital information system, perform positioning result visualization processing on the data, including data extraction, three-dimensional reconstruction and virtual-real registration, and transmit the obtained processing result to the display device for terminal display. In fig. 2 and 3, the number, installation position, connection mode and the like of the image acquisition devices, display devices and so on may be set according to actual needs, which is not limited in the present disclosure.
In one possible implementation manner, in step 10, the obtaining a three-dimensional visualized virtual image according to the target object data may include: obtaining DICOM RT data through a DICOM network; extracting the target object data according to the DICOM RT data; establishing an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data; and obtaining the three-dimensional visualized virtual image according to the accelerator beam three-dimensional model and the target object related three-dimensional model.
The DICOM RT data is related data acquired from the hospital DICOM network; DICOM is the international standard (ISO 12052) for medical images and related information. The DICOM RT data may be acquired through the in-hospital information system, and may include: CT image data, RT Plan information, RT Structure Set information and RT Dose information, where the CT image data is obtained by CT scanning and related data such as the planning information, structure set information and dose information are then generated on the basis of the CT image data. Then, according to information such as the identity of the target object undergoing radiotherapy, data is extracted from the obtained DICOM RT data to obtain the corresponding target object data, which may include: basic information of the target object, CT image data, planning information, structure set information, dose information and other related information. Further, the target object data may be subjected to segmentation and modeling processing to establish a plurality of three-dimensional models, which may include: target object-related three-dimensional models such as a target region three-dimensional model, a region of interest (region of interest, ROI) three-dimensional model and a dose distribution three-dimensional model, as well as an accelerator beam three-dimensional model. Finally, according to the established spatial relative position relationships among the three-dimensional models, the three-dimensional models can be combined to obtain the three-dimensional visualized virtual image.
For example, the server in fig. 2 or 3 may interface with the DICOM network of a hospital and provide a C-STORE network service (backed by a relational database for quick query), so as to receive the DICOM RT data, such as CT image data, RT Plan information, RT Structure Set information and RT Dose information, sent via the DICOM protocol; the DICOM RT data can then be analyzed and processed to extract target object data such as the target object's basic information, CT image data, plan information, structure set information and dose information; further, a plurality of three-dimensional models are established according to the target object data, finally yielding the three-dimensional visualized virtual image.
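As a hedged illustration of the data extraction step, the following pure-Python sketch groups received DICOM RT records by patient identity; the record fields (`patient_id`, `modality`) and the output dictionary layout are assumptions made for this example, not the platform's actual data structures or a DICOM toolkit API.

```python
from collections import defaultdict

# Modalities that together make up one target object's DICOM RT data set.
RT_MODALITIES = {"CT", "RTPLAN", "RTSTRUCT", "RTDOSE"}

def extract_target_object_data(records, patient_id):
    """Collect the RT-related records belonging to a single target object."""
    by_modality = defaultdict(list)
    for rec in records:
        if rec["patient_id"] == patient_id and rec["modality"] in RT_MODALITIES:
            by_modality[rec["modality"]].append(rec)
    return {
        "basic_info": {"patient_id": patient_id},
        "ct_images": by_modality["CT"],
        "plan": by_modality["RTPLAN"],
        "structure_set": by_modality["RTSTRUCT"],
        "dose": by_modality["RTDOSE"],
    }

records = [
    {"patient_id": "P001", "modality": "CT", "data": "slice-1"},
    {"patient_id": "P001", "modality": "RTPLAN", "data": "plan-a"},
    {"patient_id": "P002", "modality": "CT", "data": "slice-x"},
]
data = extract_target_object_data(records, "P001")
print(len(data["ct_images"]), len(data["plan"]))  # 1 1
```

A real deployment would populate `records` from the C-STORE service's database rather than an in-memory list.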
In one possible implementation manner, the building the accelerator beam three-dimensional model and the target object related three-dimensional model according to the target object data includes: analyzing the target object data to obtain radiotherapy related data; establishing corresponding three-dimensional model data according to the radiotherapy related data; and converting the three-dimensional model data into a specified format to obtain the accelerator beam three-dimensional model and the target object related three-dimensional model.
For example, the extracted target object data may be analyzed to obtain radiation therapy related data, and a Json file (i.e., a file stored in the Json data format) may be generated as the description data information; the radiation therapy related data (i.e., the Json file) may be imported into the medical image processing software 3D Slicer, and the target region, the region of interest, the dose distribution and the accelerator beam of the target object may be segmented and modeled from the CT image data through the software's Segment Editor and Model Maker modules, driven by the Python language, so as to obtain a plurality of corresponding three-dimensional models, such as a target region three-dimensional model, a region of interest three-dimensional model, a dose distribution three-dimensional model and an accelerator beam three-dimensional model; finally, the target region three-dimensional model, the region of interest three-dimensional model, the dose distribution three-dimensional model, the accelerator beam three-dimensional model and the like are stored as model data files in the OBJ format, and a Json file describing the model data files is generated for subsequent processing. Thus, modeling the target object data based on the 3D Slicer software yields the three-dimensional models and enables automated, batched processing of the data; meanwhile, through the three-dimensional visualized virtual image obtained by three-dimensional reconstruction of the CT image data, the target object can grasp the positioning situation intuitively and efficiently, make subjective judgments, participate in confirming positioning completion, and reduce errors.
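The "OBJ model file plus Json description file" output described above can be sketched as follows; the file names and descriptor fields are illustrative assumptions, and a real pipeline would receive its mesh from the 3D Slicer Model Maker step rather than the hard-coded triangle used here.

```python
import json

def write_obj(path, vertices, faces):
    """Write a triangle mesh as an OBJ text file (OBJ face indices are 1-based)."""
    lines = [f"v {x} {y} {z}" for x, y, z in vertices]
    lines += [f"f {a + 1} {b + 1} {c + 1}" for a, b, c in faces]
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

def write_descriptor(path, model_name, obj_path):
    """Emit a small Json file describing the exported model data file."""
    with open(path, "w") as fh:
        json.dump({"model": model_name, "file": obj_path}, fh)

# A single illustrative triangle standing in for a reconstructed model.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(0, 1, 2)]
write_obj("target_region.obj", vertices, faces)
write_descriptor("target_region.json", "target_region", "target_region.obj")
print(open("target_region.obj").read().splitlines()[-1])  # f 1 2 3
```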
In a possible implementation manner, in step 20, the performing virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result includes: acquiring a real-time picture of the real positioning scene; obtaining feature points of the real positioning scene according to the real-time picture; and matching the three-dimensional model related to the target object in the virtual image to the corresponding position in the real positioning scene according to the feature points to obtain the registration result.
In the embodiment of the disclosure, pictures of the real positioning scene can be acquired in real time through one or more image acquisition devices arranged in the real positioning scene. For example, in the case where the number of image acquisition devices is greater than 1, the real-time pictures obtained by the image acquisition devices may be fused, and virtual-real registration may then be performed between the fused picture and the three-dimensional model related to the target object in the three-dimensional visualized virtual image to obtain the registration result.
Wherein the feature points correspond to position markers that are added to the skin of the target object during the computed tomography (CT) scan. Illustratively, during the CT scan, marks may be added at specific locations on the skin, such as the middle and both sides of the chest and the middle and both sides of the abdomen, and the marks may be generated in the form of two-dimensional codes. The marks correspond to the spatial positions of the feature points of the real scene obtained from the real-time picture, while the position of the virtual image reconstructed from the CT scanning data is fixed relative to the positions of the marks. Furthermore, according to the relative relationship between the feature points, which the cameras transmit into the picture in real time, and the established three-dimensional visualized virtual image, the three-dimensional models related to the target object in the virtual image can be matched into the real-time picture, thereby realizing virtual-real registration.
For example, as shown in fig. 3, using 3 cameras, a PC and other devices, multi-angle real-time pictures of the real positioning scene may be acquired through the 3 cameras installed at different positions and transmitted to the PC, and the PC calculates the feature points of the real positioning scene in the picture; the PC then acquires the patient data information through the background and matches it against the Json file information extracted from the patient data; the corresponding three-dimensionally reconstructed model data files are retrieved from the server, and the three-dimensional models related to the target object, such as the target region and the ROI regions of interest to the patient, technicians and doctors, are matched to the corresponding positions of the real positioning scene according to the relative relationship between the feature points transmitted into the picture by the cameras in real time and the established three-dimensional visualized virtual image.
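The marker-based matching step can be sketched under the simplifying assumption that the patient's orientation already agrees with the CT orientation, so that only a translation between the virtual model's markers and the detected feature points needs to be solved; a full implementation would also estimate rotation (e.g., with the Kabsch algorithm). All coordinates here are illustrative.

```python
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def register(model_markers, scene_markers, model_vertices):
    """Shift the model so its markers' centroid coincides with the
    centroid of the feature points detected in the real scene."""
    cm = centroid(model_markers)
    cs = centroid(scene_markers)
    t = tuple(cs[i] - cm[i] for i in range(3))  # translation: model -> scene
    return [tuple(v[i] + t[i] for i in range(3)) for v in model_vertices]

# Markers placed on the skin in the CT/model frame, and the same markers as
# detected by the cameras in the scene frame, shifted by (6, 1, 0).
model_markers = [(0, 0, 0), (3, 0, 0), (0, 3, 0)]
scene_markers = [(6, 1, 0), (9, 1, 0), (6, 4, 0)]
registered = register(model_markers, scene_markers, [(1, 1, 1)])
print(registered)  # [(7.0, 2.0, 1.0)]
```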
In one possible implementation manner, in step 30, the combining the three-dimensional model of the accelerator beam in the virtual image and the registration result, and rendering to obtain a positioning result, may include: determining the position of the accelerator beam three-dimensional model (portal model) according to the registration result; for example, the position of the accelerator beam three-dimensional model can be determined based on the coincidence of the isocenter of the three-dimensional model related to the target object with that of the accelerator beam three-dimensional model; and then rendering the accelerator beam three-dimensional model and the registration result to obtain the positioning result. It should be noted that, in the embodiment of the present disclosure, the number of accelerator beam three-dimensional models may be one or more, that is, the positioning result may include accelerator beam three-dimensional models with different angles and different shapes.
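The isocenter-consistency rule for placing the beam model can be sketched as a simple translation of the beam model so that its isocenter lands on the registered target center; the point-list representation of the model and all coordinates are illustrative assumptions.

```python
def place_beam(beam_vertices, beam_isocenter, target_center):
    """Translate the beam model so its isocenter coincides with the target center."""
    t = tuple(target_center[i] - beam_isocenter[i] for i in range(3))
    return [tuple(v[i] + t[i] for i in range(3)) for v in beam_vertices]

# Illustrative beam model: the source point and the isocenter-side field point.
beam_vertices = [(0, 0, 0), (0, 0, 10)]
beam_isocenter = (0, 0, 10)
target_center = (12, 5, 3)  # tumor center taken from the registration result
placed = place_beam(beam_vertices, beam_isocenter, target_center)
print(placed)  # [(12, 5, -7), (12, 5, 3)]
```

Repeating this for several beam models with different gantry angles yields the multi-beam positioning result mentioned above.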
In one possible implementation, in step 40, the displaying the positioning result during the radiotherapy positioning process includes: determining at least one target position according to the position and the view angle of the target object in the real radiotherapy scene; and displaying the positioning result at the target position through a display device.
In the embodiment of the disclosure, the number of target positions can be set according to factors such as the position and viewing angle of the target object and the actual environment, so that one or more target positions can be obtained and the registration result can be displayed intuitively; by setting different regions to different colors or different color depths, the display device can distinguish the component elements of the registration result, so that the target object can perceive the registration result more intuitively and efficiently, which facilitates observation and confirmation of the positioning result by the target object and improves the positioning efficiency.
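The color-coded distinction of the component elements can be sketched as a simple lookup; the RGBA palette and the element names are assumed examples for illustration, not values prescribed by the scheme.

```python
# Assumed RGBA palette for distinguishing the rendered elements; the actual
# colors would be chosen in the display software.
PALETTE = {
    "target_region": (255, 0, 0, 255),  # opaque red for the tumor target
    "roi": (0, 255, 0, 128),            # semi-transparent green for ROIs
    "dose": (0, 0, 255, 96),            # translucent blue for dose volumes
    "beam": (255, 255, 0, 64),          # faint yellow for the accelerator beam
}

def color_of(element_kind):
    """Fall back to opaque gray for elements without an assigned color."""
    return PALETTE.get(element_kind, (128, 128, 128, 255))

print(color_of("roi"))    # (0, 255, 0, 128)
print(color_of("couch"))  # (128, 128, 128, 255)
```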
For example, as shown in fig. 3, the virtual-real registration result may be displayed by a display device (a projector, a display, etc.), and multiple display devices may be added at different positions and angles according to the position and viewing angle of the patient in the radiotherapy scene. For example, for a patient lying down, the positioning result may be projected, or a display device placed, directly above the patient to facilitate the patient's observation and confirmation of the positioning result. Thus, the target object can observe the registration result intuitively and conveniently, and can further make a subjective judgment on the positioning situation in combination with his or her own condition so that it can be corrected; meanwhile, doctors can correct the positioning result by observing the display device, thereby improving the positioning accuracy.
It should be noted that, although the above embodiment describes a positioning result visualization method based on a virtual intelligent medical platform as above by way of example, those skilled in the art will understand that the disclosure should not be limited thereto. In fact, the user can flexibly set each implementation mode according to personal preference and/or practical application scene, so long as the technical scheme of the disclosure is met.
Thus, by combining the mixed reality technology, the information such as tumor, rays and the like is visualized, and the three-dimensional holographic display of medical images, the three-dimensional display of radiotherapy plans and the visual display of positioning results are realized; the patient can observe the positioning result more intuitively and efficiently, clearly know the positioning condition, and can perform subjective judgment to participate in positioning and positioning completion confirmation, so that positioning errors are reduced. Meanwhile, the device can be used for assisting the communication between doctors and patients, improving the positioning efficiency, relieving the psychological pressure of patients, eliminating the fear of patients, keeping the healthy psychological state and good immune function of the patients, more positively matching treatment, reducing treatment errors in the aspect of patients and positively influencing the treatment of tumor radiotherapy patients. In addition, doctors can correct the positioning result through the three-dimensional images, and the positioning accuracy is improved.
Fig. 4 illustrates a block diagram of a positioning result visualization device based on a virtual intelligent medical platform according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus may include: the virtual image construction module 41, configured to obtain a three-dimensional visualized virtual image according to the target object data; the virtual-real registration module 42, configured to obtain a registration result by performing virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene; the rendering module 43, configured to combine the three-dimensional model of the accelerator beam in the virtual image and the registration result, and render to obtain a positioning result; and the display module 44, configured to display the positioning result during the radiotherapy positioning process.
In one possible implementation manner, the virtual image construction module 41 may include: the DICOM RT data acquisition unit is used for acquiring target object DICOM RT data through a DICOM network; a target object data extraction unit, configured to extract the target object data according to the DICOM RT data; the three-dimensional model building unit is used for building an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data; and the virtual image acquisition unit is used for obtaining the three-dimensional visualized virtual image according to the accelerator beam three-dimensional model and the target object related three-dimensional model.
In one possible implementation manner, the three-dimensional model building unit may include: the data analysis subunit is used for obtaining radiation therapy related data by analyzing and processing the target object data; the model data construction subunit is used for establishing corresponding three-dimensional model data according to the radiotherapy related data; and the format conversion subunit is used for converting the three-dimensional model data into a specified format to obtain the accelerator beam three-dimensional model and the target object related three-dimensional model.
In one possible implementation, the virtual-real registration module 42 may include: the real-time picture acquisition unit, used for acquiring a real-time picture of the real positioning scene; the feature point obtaining unit, used for obtaining feature points of the real positioning scene according to the real-time picture; and the virtual-real registration unit, used for matching the three-dimensional model related to the target object in the virtual image to the corresponding position in the real positioning scene according to the feature points to obtain a registration result.
In one possible implementation, the feature points correspond to position markers that are added to the skin of the target object during a computed tomography (CT) scan.
In one possible implementation, the display module 44 may include: the target position selection unit is used for determining at least one target position according to the position and the view angle of the target object in the real radiotherapy scene; and the display unit is used for displaying the positioning result at the target position through display equipment.
In one possible implementation, the target object data includes: basic information of a target object, CT image data, planning information, structure set information and dose information; the target object-related three-dimensional model includes: a target region three-dimensional model, a ROI region three-dimensional model, a dose distribution three-dimensional model, and an accelerator beam three-dimensional model.
It should be noted that, although the above embodiment describes a positioning result visualization device based on a virtual intelligent medical platform as an example, those skilled in the art can understand that the disclosure should not be limited thereto. In fact, the user can flexibly set each implementation mode according to personal preference and/or practical application scene, so long as the technical scheme of the disclosure is met.
Thus, by combining the mixed reality technology, the information such as tumor, rays and the like is visualized, and the three-dimensional holographic display of medical images, the three-dimensional display of radiotherapy plans and the visual display of positioning results are realized; the patient can observe the positioning result more intuitively and more efficiently, clearly know the positioning condition, and can perform subjective judgment to participate in positioning and positioning completion confirmation, so that positioning errors are reduced. Meanwhile, the device can be used for assisting the communication between doctors and patients, improving the positioning efficiency, relieving the psychological pressure of patients, eliminating the fear of patients, keeping the healthy psychological state and good immune function of the patients, more positively matching treatment, reducing treatment errors in the aspect of patients and positively influencing the treatment of tumor radiotherapy patients. In addition, doctors can correct the positioning result through the three-dimensional images, and the positioning accuracy is improved.
Fig. 5 illustrates a block diagram of an apparatus 1900 for virtual intelligent medical platform based positioning result visualization, according to an embodiment of the disclosure. For example, the apparatus 1900 may be provided as a server. Referring to fig. 5, the apparatus 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that are executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The apparatus 1900 may further include a power component 1926 configured to perform power management of the apparatus 1900, a wired or wireless network interface 1950 configured to connect the apparatus 1900 to a network, and an input/output (I/O) interface 1958. The apparatus 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 1932 including computer program instructions, which are executable by the processing component 1922 of the apparatus 1900 to perform the above-described methods.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device, or to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), with state information of the computer readable program instructions, the electronic circuitry being able to execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having the instructions stored therein includes an article of manufacture including instructions which implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.