WO2019014521A1 - Dynamic image recognition system for security and telemedicine - Google Patents

Dynamic image recognition system for security and telemedicine

Info

Publication number
WO2019014521A1
Authority
WO
WIPO (PCT)
Prior art keywords
person
dynamic
body portion
imaging system
dynamic imaging
Prior art date
Application number
PCT/US2018/041958
Other languages
English (en)
Inventor
Gholam A. Peyman
Original Assignee
Peyman Gholam A
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peyman Gholam A
Publication of WO2019014521A1
Priority to US16/666,230 (US11309081B2)
Priority to US17/723,455 (US20220240779A1)


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/117 Identification of persons
    • A61B5/1171 Identification of persons based on the shapes or appearances of their bodies or parts thereof
    • A61B5/1176 Recognition of faces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147 Details of sensors, e.g. sensor lenses
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/176 Dynamic expression
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques

Definitions

  • the invention generally relates to a dynamic image recognition system. More particularly, the invention relates to a dynamic image recognition system that may be utilized in the remote recognition of a person and in telemedicine, and which is capable of determining induced dynamic changes in a body portion of a person for identifying the person (including any dynamic change occurring after an initial static photo), or of evaluating the physical and physiological changes in the body portion to obtain additional data, the changes being recognized by subtracting the images to yield a third form of information, or third image.
  • conventional patient identification systems are known that are used to verify the identity of a patient.
  • these conventional patient identification systems are not able to take into account the dynamic characteristics of a body portion of a patient for recognition.
  • conventional patient identification systems are limited in their ability to accurately verify the identity of a patient.
  • conventional patient identification systems are unable to be utilized for other important applications, such as the analysis of a disease process.
  • telemedicine is used to triage an accident victim to an appropriate specialist via the Internet, using mobile systems, in situations where the spread of a contagious disease must be avoided, or via video conferencing.
  • Other existing systems include telecardiology, teledermatology, telepathology, teleophthalmology, teleradiology, etc., transmitting radiographic images (X-ray, CT, MRI, PET, SPECT/CT) and health information technology.
  • Presently available social communication systems, such as MSN, Yahoo, and Skype, are not HIPAA-approved.
  • what is needed, therefore, is a dynamic image recognition system that utilizes a dynamic imaging system for more accurately verifying the identity of a patient or other person in real time, in order to ensure that the interview or advice is being performed on the proper patient/person and that the patient's privacy cannot be violated by a hacker, etc.
  • the present invention is directed to a dynamic image recognition system and a telesystem using the same that substantially obviates one or more problems resulting from the limitations and deficiencies of the related art.
  • a dynamic imaging system that includes an imaging device configured to capture images of a body portion of a person over a predetermined duration of time so that a displacement of the body portion of the person is capable of being tracked during the predetermined duration of time; and a data processing device operatively coupled to the imaging device, the data processing device being specially programmed to determine the displacement of the body portion of the person over the predetermined duration of time using the captured images, and to compare the displacement of the body portion of the person over the predetermined duration of time to a reference displacement of the body portion of the person acquired prior to the displacement so that dynamic changes in the body portion of the person are capable of being assessed for the purpose of identifying the person or evaluating physical and physiological changes in the body portion.
  • the imaging device is in the form of a light field camera, the light field camera including a sensor array, a microlens array disposed in front of the sensor array, and an objective lens disposed in front of the microlens array.
  • the objective lens of the light field camera is in the form of a tunable lens.
  • the objective lens of the light field camera is in the form of a fluidic lens, the fluidic lens having an outer housing and a flexible membrane supported within the outer housing, the flexible membrane at least partially defining a chamber that receives a fluid therein.
  • the dynamic imaging system further comprises a fluid control system operatively coupled to the fluidic lens, the fluid control system configured to insert an amount of the fluid into the chamber of the fluidic lens, or remove an amount of the fluid from the chamber of the fluidic lens, in order to change the shape of the fluidic lens in accordance with the amount of fluid therein.
  • the fluid control system comprises a pump and one or more fluid distribution lines, at least one of the one or more fluid distribution lines fluidly coupling the pump to the fluidic lens so that the pump is capable of adjusting concavity and/or convexity of the fluidic lens.
  • the fluidic lens further comprises a magnetically actuated subsystem or servomotor configured to selectively deform the flexible membrane so as to increase or decrease the convexity of the flexible membrane of the fluidic lens.
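For intuition about the fluid-volume/optical-power relationship implied above, the following minimal sketch models the membrane bulge as a spherical cap forming a plano-convex element. The aperture radius, fluid refractive index, and bulge heights are assumed example values, not parameters from the disclosure.

```python
# Minimal sketch: how pumping fluid into a membrane fluidic lens changes
# its focal length, modeling the bulged membrane as a spherical cap.
# All numeric values are illustrative assumptions.
import numpy as np

APERTURE_RADIUS = 5e-3   # m, radius of the flexible membrane (assumed)
N_FLUID = 1.48           # refractive index of the lens fluid (assumed)

def cap_volume(h, a=APERTURE_RADIUS):
    """Fluid volume (m^3) of a spherical cap of height h over aperture radius a."""
    return np.pi * h * (3 * a**2 + h**2) / 6.0

def focal_length(h, a=APERTURE_RADIUS, n=N_FLUID):
    """Plano-convex thin-lens focal length for a membrane bulge of height h."""
    R = (a**2 + h**2) / (2 * h)   # radius of curvature of the spherical cap
    return R / (n - 1.0)

# Pumping fluid in raises the membrane and shortens the focal length:
for h in (0.2e-3, 0.5e-3, 1.0e-3):
    print(f"h = {h*1e3:.1f} mm  V = {cap_volume(h)*1e9:.1f} uL  "
          f"f = {focal_length(h)*1e3:.0f} mm")
```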
  • the microlens array and the sensor array are disposed in a concave configuration in a rear of the light field camera relative to the incoming light rays passing through the objective lens so as to enable the data processing device to reconstruct sharp, in-focus images over the full range of object distances.
  • the data processing device utilizes at least two points for the comparison of the displacement of the body portion of the person over the predetermined duration of time to the reference displacement of the body portion of the person in a two-dimensional or three-dimensional manner; a sketch of this comparison follows.
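This is a minimal sketch assuming landmark points have already been extracted per frame; the function names, two-landmark example data, and RMS tolerance are illustrative assumptions, not the patent's actual algorithm.

```python
# Minimal sketch: compare a tracked displacement trajectory of a body
# portion against a stored reference trajectory (assumed RMS criterion).
import numpy as np

def displacement_curve(points):
    """Mean per-frame displacement of tracked landmarks relative to frame 0.

    points: array of shape (n_frames, n_landmarks, 2) holding (x, y) positions.
    Returns an (n_frames,) curve of mean landmark displacement in pixels.
    """
    return np.linalg.norm(points - points[0], axis=2).mean(axis=1)

def matches_reference(live_points, reference_curve, tolerance=2.5):
    """True if the live curve stays within `tolerance` pixels (RMS) of the
    enrolled reference curve."""
    live_curve = displacement_curve(live_points)
    rms_error = np.sqrt(np.mean((live_curve - reference_curve) ** 2))
    return rms_error <= tolerance

# Example: two landmarks (e.g., the lip corners) tracked over 30 frames.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=(30, 2, 2)).cumsum(axis=0)          # enrollment
live = enrolled + rng.normal(scale=0.1, size=enrolled.shape)   # later session
print(matches_reference(live, displacement_curve(enrolled)))   # True
```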
  • the dynamic imaging system is in the form of an independent, standalone system configured to verify the identity of the person for an application selected from the group consisting of: (i) security system identification, (ii) identification by a customs department or state department, (iii) identification by a police department or military branch, (iv) identification at an airport, (v) identification at a banking institution, (vi) identification at a stadium hosting a sporting event, concert, or political event, (vii) identification for use in a smartphone application or a drone application, and (viii) identification of a body's lesion in a two-dimensional or three-dimensional manner.
  • the person being imaged or photographed is an active participant in the process, which implies that the person gives his consent by participating in the process; his photograph is not taken randomly, without his permission, by one or more cameras located in a location.
  • the dynamic imaging system is in the form of a dynamic facial recognition system, and wherein the body portion of the person for which the displacement is determined comprises a portion of the face of the person imaged in a two-dimensional or three-dimensional manner, and wherein the dynamic imaging system is specially programmed to analyze induced changes in the portion of the face of the person over the predetermined duration of time (e.g., induced changes in the portion of the face of the person that enhance wrinkles of the face by the person being instructed to frown or smile, thus displacing the wrinkles for the predetermined duration of time).
  • the dynamic imaging system is provided as part of a telemedicine system, the dynamic imaging system configured to verify the identity of a patient prior to any medical history taken or any recommendation and/or advice given.
  • the dynamic imaging system may also be provided with an additional scanning device for imaging a specific part of the body (e.g., the configuration of the retinal vessels as a static image).
  • the dynamic imaging system further comprises a voice recognition sensor configured to capture speech sound waves generated by the person so that the speech sound waves are capable of being superimposed on a displacement curve of the body portion of the person (e.g., face or mouth of the person) generated from the captured images acquired using the imaging device, thereby enabling both audial and visual attributes of the person to be taken into account for identification purposes.
  • the dynamic imaging system further comprises a voice recognition sensor configured to capture speech or sound waves generated by the person so that the speech sound waves are capable of being superimposed on a displacement curve of the body portion of the person generated from the captured images acquired using the imaging device, thereby enabling both audial and visual attributes of the person to be taken into account for identification purposes; the process may be repeated and recorded for absolute security using the above combinations with other gestures and sounds created by the same person.
  • the dynamic imaging system further comprises a voice recognition sensor configured to capture speech sound waves generated by the person so that the speech sound waves are capable of being superimposed on a displacement curve of the body portion of the person generated from the captured images acquired using the imaging device, and to compare the acquired data with similar data obtained from the same person previously, and to analyze changes that might have been induced by intoxication (e.g., resulting from alcohol or another type of substance abuse, such as heroin, etc.) or by changes in the mood of the person.
  • the dynamic imaging system further comprises a voice recognition sensor configured to capture speech sound waves generated by the person so that the speech sound waves are capable of being superimposed on a displacement curve of the body/mouth portion of the person generated from the captured images acquired using the imaging device, and compared with similar data obtained or modified as a result of induced emotional changes.
  • the dynamic imaging system further comprises a voice recognition sensor configured to capture speech sound waves generated by the person so that the speech sound waves are capable of being superimposed on a displacement curve of the body portion of the person (e.g., the mouth, etc.) generated from the captured images acquired using the imaging device (e.g., one or more hyperspectral or multispectral cameras), and compared with similar data obtained or modified as a result of induced emotional changes and analyzed with a subtraction algorithm to observe those changes or enhance the results to predict their progress.
  • the displacement curve of the body portion of the person comprises a displacement curve for the lips of the person while the person is reciting a series of vowels.
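A minimal sketch of the audio-visual superposition described above follows: the speech energy envelope is aligned frame by frame with the lip displacement curve and scored by correlation. The sampling rate, frame rate, and scoring scheme are assumptions for illustration only.

```python
# Minimal sketch: correlate a speech energy envelope with a lip
# displacement curve so audial and visual attributes are checked together.
import numpy as np

def envelope(audio, frame_len):
    """RMS energy of the audio, one value per video frame."""
    n = len(audio) // frame_len
    frames = audio[: n * frame_len].reshape(n, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

def audio_visual_score(audio, lip_curve, sample_rate=16000, fps=30):
    """Normalized correlation between speech energy and lip displacement."""
    env = envelope(audio, sample_rate // fps)
    m = min(len(env), len(lip_curve))
    env, lip = env[:m], np.asarray(lip_curve[:m], dtype=float)
    env = (env - env.mean()) / (env.std() + 1e-9)
    lip = (lip - lip.mean()) / (lip.std() + 1e-9)
    return float(np.mean(env * lip))

# A score near 1 supports "the same live person is speaking"; a low score
# flags a possible replay, dubbed recording, or impersonation.
```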
  • the body portion of the person being tracked by the imaging device comprises a tumor, lesion, or retina
  • the dynamic imaging system is configured to dynamically analyze the growth of the tumor or lesion over a period of time by tracking the displacement of the tumor or lesion in a two-dimensional or three-dimensional manner over the period of time.
  • the data processing device is specially programmed to track volumetric or surface changes in the tumor, lesion, or retina over the period of time so that the tracked volumetric or surface changes in the tumor, lesion, or retina, or on the mucosa or the skin of the patient, are capable of being compared with existing patient data or baseline data for diagnosis or differential diagnosis, so as to analyze and predict trends in disease progression or disease improvement, as sketched below.
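A simple way to quantify such trends is to fit a growth model to serial volume measurements; in the minimal sketch below, the dates, volumes, and exponential model are fabricated illustration values, not patient data from the disclosure.

```python
# Minimal sketch: fit an exponential growth trend V(t) = V0 * exp(k t)
# to serial lesion-volume measurements and report the direction of change.
import numpy as np

days    = np.array([0, 30, 60, 90, 120])             # days since baseline
volumes = np.array([1.00, 1.08, 1.21, 1.30, 1.44])   # lesion volume, cm^3

# Fitting log-volume linearly yields the growth rate k and log(V0).
k, log_v0 = np.polyfit(days, np.log(volumes), 1)
doubling_time = np.log(2) / k if k > 0 else float("inf")

print(f"growth rate k = {k:.4f} per day")
print(f"estimated doubling time = {doubling_time:.0f} days")
print("trend:", "progression" if k > 0 else "improvement/stable")
```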
  • the body portion of the person being tracked by the imaging device is being affected by a disease process, and wherein the dynamic imaging system is configured to track changes in the disease process in a two-dimensional or three-dimensional manner over a period of time.
  • the imaging device is in the form of one or more multispectral or hyperspectral cameras configured to capture multispectral or hyperspectral images of the body portion of the person and changes in the body portion over time so that surface features and subsurface features of the body portion of the person are enhanced (e.g., enhancement of wrinkles, etc.) and are capable of being analyzed.
  • the one or more multispectral or hyperspectral cameras are configured to capture multispectral or hyperspectral images of the body portion in the infrared spectrum so that a temperature of the body portion is capable of being determined during the dynamic changes in a two-dimensional or three-dimensional format.
  • the extent of the motion at a joint is examined and compared in time with existing data and the differences are analyzed.
  • a long-wavelength infrared (LWIR) camera or hyperspectral camera is used to convert an image taken at night into a black and white photo using the capabilities of automatic and human matching, thermal imaging, and polarized thermal radiation, where the polarimetric information enhances the geometric and textural details that might be present in pre-existing dynamic identity recognition files or in dynamic changes occurring in real-time.
  • the body portion of the person is in the form of a finger or a hand of the person
  • the one or more multispectral or hyperspectral cameras are configured to capture multispectral images of the finger or the hand of the person in an initial uncompressed state and a subsequent compressed state
  • the data processing device is specially programmed to verify an identity of the person using the multispectral images of the finger or the hand of the person in both the compressed and uncompressed states by taking into account ridges and/or folds on a surface of the finger or the hand of the person and subsurface blood flow through both compressed and uncompressed capillaries in the finger or the hand of the person.
  • the body portion of the person is in the form of a finger, hand, or face of the person, and one or more three- or four-dimensional multispectral or hyperspectral cameras are configured so as to surround the finger, hand, or face (e.g., two cameras top right and left on the upper side of the finger, hand, or face, and two cameras right and left from below the finger, hand, or face) to capture multispectral images of the finger, hand, or face of the person in an initial state and a subsequent changed state.
  • the data processing device is specially programmed to verify an identity of the person using the multispectral images of the finger, the hand, or the face of the person in both the initial and changed states by taking into account ridges and/or folds on a surface of the finger, the hand, or the face of the person and subsurface blood flow through both compressed and uncompressed capillaries in the finger, the hand, or the face of the person, where the images are stitched to create a 360-degree 3-D image of the person that can be rotated in any direction via software and algorithms for evaluation and recognition of the person; a sketch of the two-state verification follows.
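In the minimal sketch below, the ridge pattern must match the enrolled template in both states, and the subsurface (e.g., near-infrared) channel must show capillary blanching under pressure, which a printed or molded fake would not reproduce. The band indices and thresholds are assumed values.

```python
# Minimal sketch: verify a finger from multispectral stacks captured in
# uncompressed and compressed states (ridges + subsurface blood flow).
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two same-size images."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def verify_finger(uncompressed, compressed, template,
                  ridge_band=0, nir_band=1,
                  ridge_thresh=0.6, blanch_thresh=0.15):
    """uncompressed / compressed: multispectral stacks of shape (bands, H, W)."""
    ridges_ok = (ncc(uncompressed[ridge_band], template) > ridge_thresh and
                 ncc(compressed[ridge_band], template) > ridge_thresh)
    # Live capillaries empty under pressure, so the NIR signal drops.
    blanching = float(uncompressed[nir_band].mean() - compressed[nir_band].mean())
    return ridges_ok and blanching > blanch_thresh
```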
  • the data processing device is specially programmed to project a grid of points over an area of the body portion of the person so as to determine the displacement of the body portion of the person over the predetermined duration of time in a two-dimensional, three-dimensional, or four-dimensional manner (which includes time as the fourth dimension of the image), and the dynamic changes can be isolated or evaluated as a whole.
  • the data obtained before a dynamic change is subtracted by a processor to evaluate those changes, or those changes are compared to the initial data obtained from the patient or any other person, confirming the identity of the patient or person.
  • the data processing device is specially programmed to execute a subtraction algorithm for comparing a displacement of a subtracted image of the body portion of the person over the predetermined duration of time to a reference subtracted image of the body portion of the person acquired prior to the displacement in a two-dimensional or three-dimensional manner, as sketched below.
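In the minimal sketch below, registered before/after frames are differenced so that only the induced dynamic change survives as the "third image"; the noise floor used for thresholding is an assumed value.

```python
# Minimal sketch: subtract registered before/after frames to isolate the
# induced dynamic change (e.g., displaced wrinkles) as a third image.
import numpy as np

def subtracted_image(before, after, noise_floor=0.02):
    """Return the induced-change image; values below the noise floor are zeroed.

    before, after: registered grayscale frames as float arrays in [0, 1].
    """
    diff = after.astype(float) - before.astype(float)
    diff[np.abs(diff) < noise_floor] = 0.0
    return diff

def change_score(before, after):
    """Scalar summary of how much the body portion actually changed."""
    return float(np.abs(subtracted_image(before, after)).mean())
```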
  • the dynamic imaging system is in the form of a dynamic facial recognition system, and the body portion of the person for which the displacement is determined comprises a portion of the face of the person.
  • the dynamic imaging system further comprises a voice recognition sensor configured to capture speech waveforms generated by the person so that the speech waveforms are capable of being superimposed on a displacement curve of the portion of the face of the person generated from the captured images acquired using the dynamic facial recognition system.
  • the dynamic imaging system further comprises a fingerprint or hand sensor configured to capture multispectral images of a finger or hand of the person, the data processing device being specially programmed to determine surface features and subsurface features of the finger or hand of the person using the multispectral images.
  • an identity of the person is verified using identity comparison results generated from the dynamic facial recognition system, the voice recognition sensor, and the fingerprint or hand sensor so as to verify the identity of the person with certainty.
  • the body portion of the person being tracked by the imaging device comprises a retina, a limb, or other part of the body involving a physiological function, and wherein the dynamic imaging system is configured to dynamically analyze changes in the retina, the limb, or the other part of the body over a period of time by tracking the changes in the retina, the limb, or the other part of the body in a two-dimensional or three-dimensional manner over the period of time.
  • the data processing device of the dynamic imaging system is operatively coupled to an artificial intelligence system and/or augmented intelligence system by means of an internet-based network connection so as to allow a user to access information contained on the artificial intelligence system and/or the augmented intelligence system.
  • the data processing device of the dynamic imaging system is operatively coupled to a virtual reality system, an augmented reality system, or other video-based system so that a user is able to view the captured images via a live feed in real-time.
  • the virtual reality system, the augmented reality system, or the other video-based system is configured so as to enable the user to zoom in and out on the captured images of the body portion of the person.
  • FIG. 1 is a schematic illustration of an exemplary embodiment of a telemedicine system, in accordance with the invention, wherein the telemedicine system is provided with a dynamic imaging system that includes an image recognition sensor;
  • FIG. 2a is an illustration of a human face, depicting a grid that is projected over the facial area being analyzed using the dynamic imaging system described herein, and further depicting the positions of two exemplary points being used to track dynamic changes in the lips of the person;
  • FIG. 2b is another illustration of a human face, depicting a grid that is projected over the facial area being analyzed using the dynamic imaging system described herein, and further depicting the positions of two exemplary points being used to track dynamic changes in the lips of the person while the person is speaking the letter "O";
  • FIG. 3a is an illustration of a finger being pressed against a transparent glass surface so that the tip of the finger is capable of being imaged, according to one embodiment of the invention;
  • FIG. 3b depicts a finger before touching a transparent glass surface used for the imaging of the finger;
  • FIG. 3c depicts the finger touching the transparent glass surface, the finger undergoing imaging that takes into account both surface and subsurface properties of the finger in a two-dimensional and/or three-dimensional manner;
  • FIG. 4a is an illustration of a finger being pressed against a transparent glass surface so that the tip of the finger is capable of being imaged, according to another embodiment of the invention using multiple cameras for 360 degree image capture;
  • FIG. 4b depicts a finger before touching a transparent glass surface used for the three-dimensional imaging of the finger
  • FIG. 4c depicts the finger touching the transparent glass surface, the finger undergoing imaging that takes into account both surface and subsurface properties of the finger in a three-dimensional manner;
  • FIG. 5a is an illustration of a human face, depicting folds on the facial area being analyzed using the dynamic imaging system described herein, and further depicting distances between facial features being used to track dynamic changes in facial expressions of the person;
  • FIG. 5b is another illustration of the human face of FIG. 5a, wherein the person is smiling, and dynamic changes in the facial features of the person are being analyzed as well as the trends in those features;
  • FIG. 5c is another illustration of the human face of FIG. 5a, wherein the person is frowning, and dynamic changes in the facial features of the person are being analyzed as well as the trends in those changes;
  • FIG. 5d is an illustration of a first subtracted image of the human face of FIG. 5b, wherein the subtracted image further allows dynamic changes in the mouth of the person and the trends in those changes to be analyzed;
  • FIG. 5e is an illustration of a second subtracted image of the human face of FIG. 5b, wherein the subtracted image depicts enhanced folds on the facial area of the person;
  • FIG. 6a is an illustration tracking the growth of a tumor over time, wherein dynamic changes in the growth of the tumor are being analyzed;
  • FIG. 6b is a subtracted image of the tumor of FIG. 6a;
  • FIG. 7a is an illustration of a human face, depicting folds on the facial area being analyzed using the dynamic imaging system described herein while the person is speaking one or more words and the sound frequency of the person's voice is being simultaneously analyzed;
  • FIG. 7b is another illustration of a human face, depicting folds on the facial area being analyzed using the dynamic imaging system described herein while the person is speaking one or more words and the sound frequency of the person's voice is being correlated with the dynamic changes occurring in the person's face.
  • a dynamic image recognition system may include an imaging device configured to capture images of a body portion of a person over a predetermined duration of time so that a displacement of the body portion of the person is capable of being tracked during the predetermined duration of time; and a data processing device operatively coupled to the imaging device, the data processing device being specially programmed to determine the displacement of the body portion of the person over the predetermined duration of time using the captured images, and to compare the displacement of the body portion of the person over the predetermined duration of time to a reference displacement of the body portion of the person acquired prior to the displacement so that dynamic changes in the body portion of the person are capable of being assessed and subtracted for the purpose of identifying the person or evaluating physiological changes in the body portion and the trends of those changes.
  • the dynamic imaging system may be provided as an independent system (e.g., with components 59a, 62, 73, and 75 of FIG. 1). Alternatively, in one or more other embodiments, the dynamic imaging system may be incorporated in the illustrative telemedicine system described hereinafter.
  • the dynamic imaging system may include an imaging device configured to capture images of a body portion of a person in his or her (1) normal state without displacement of a body part (e.g., a face), then (2) during the induced displacement over a predetermined duration of time (e.g., while the person is smiling or frowning or lifting his or her brow or "showing his or her teeth", etc.), so that a displacement of the body portion of the person is capable of being tracked during the predetermined duration of time, and 2-dimensional, 3-dimensional, and/or 4-dimensional images, which are time dependent, are able to be obtained; and a data processing device operatively coupled to the imaging device (including automatically adjusting the angulation, magnification, etc.), the data processing device being specially programmed to determine the displacement of the body portion of the person over the predetermined duration of time using the captured images, and to compare the displacement of the body portion of the person over the predetermined duration of time to a reference displacement of the body portion of the person acquired prior to the displacement, so that dynamic changes in the body portion of the person are capable of being assessed.
  • in this embodiment, while the displacement is determined, some invisible folds and wrinkles are enhanced, or the person makes the teeth or the tongue visible, etc., thus producing more characteristic data of the person for identification; this data can be obtained if the images prior to the displacement and afterward are subtracted to obtain a third image which is still specific to the person, such as the position of the teeth and their specific characteristics, or the enhanced position and changes of the wrinkles, etc., thereby enhancing the weight of the data and the identity recognition.
  • the displacement or change can be an improvement or worsening of a condition, such as of a part of the body (e.g., the face), a disease process, or the structure or appearance of an agricultural field, caused by internal or external factors, such as weather, season, pests, or a disease process, etc., each of which is considered a dynamic change.
  • the ability to image or obtain data, follow its dynamic changes, and analyze them with a subtraction algorithm (by pixelation or by mathematical data collection) is considered dynamic recognition; it creates an almost exact comparison of a structure and its subsequent induced changes (data) as a dynamic identity recognition, and can better predict the subsequent changes or their direction in a dynamic format, in contrast to an existing static image.
  • the imaging device may be a hyperspectral camera in which the image can be isolated as produced by different wavelengths, including infrared (or, for example, in the form of infrared sensors as a whole), or certain infrared wavelengths may be selected and enhanced to better represent the environment, or the body, face, skin, or mucosa, showing changes in blood circulation under the skin (or, for example, an infection, a tumor, etc.) at a given time, or showing collapsed capillaries in the wrinkles or folds, thereby enhancing, alone or together with the visible spectrum, the weight of the data obtained for dynamic identity recognition of a person or of a disease process, etc.
  • other suitable cameras may also be used for the imaging device.
  • the dynamic imaging system may be in the form of a dynamic facial recognition system, where the body portion of the person for which the displacement is determined comprises a portion of the face of the person imaged in a two-dimensional or three-dimensional manner over time after the person is instructed to perform a certain action in order to enhance changes in the portion of the face (e.g., enhancing wrinkles of the face by asking the person to frown, or enhancing facial features by asking the person to smile, thus showing the teeth and enhancing the appearance of the wrinkles around the mouth, nose, eyes, etc., and displacing the wrinkles for a given time while showing the teeth).
  • the dynamic imaging system is in the form of a dynamic facial recognition system, and where the body portion of the person for which the displacement is determined comprises a portion of the face of the person imaged in a two- dimensional or three-dimensional manner, and the system is specially programmed to analyze different, induced changes performed in a sequential manner.
  • a security agency may require the induced changes to be repeated in a different manner (e.g., if the patient initially smiles, he may then be asked to frown), or may continue them indefinitely in a different and random manner (e.g., ask the person to show his or her teeth, perform another action, perform yet another action, etc.), and the system records the data for analysis, making it almost impossible for the sequence to be repeated by another person; a sketch of such a challenge sequence follows.
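In the minimal sketch below, the gesture names and the verifier interface are illustrative assumptions, not the patent's protocol.

```python
# Minimal sketch: prompt a random sequence of induced expressions and
# verify each live dynamic response against the enrolled signatures.
import random

GESTURES = ["smile", "frown", "raise your brow", "show your teeth", "say 'O'"]

def challenge_sequence(verify_gesture, rounds=3, seed=None):
    """verify_gesture(name) -> bool: True when the live dynamic response
    matches the person's enrolled signature for that gesture."""
    rng = random.Random(seed)
    prompts = rng.sample(GESTURES, rounds)    # unpredictable order
    for prompt in prompts:
        print(f"Please {prompt} now...")
        if not verify_gesture(prompt):
            return False                      # any failed round rejects
    return True                               # all rounds matched
```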
  • the imaging device may be equipped with a tracking system to be able to track a relatively stable area of a person's image, such as the forehead, while documenting the dynamic changes of the other areas.
  • referring to FIG. 1, an illustrative embodiment of a telemedicine system is depicted generally at 50'.
  • the telemedicine system 50' is represented in schematic form.
  • the system 50' includes a plurality of local control systems disposed at local sites 51a', 51b', 51c', each system including a telemedicine imaging apparatus 55.
  • each telemedicine imaging apparatus 55 may include a photoacoustic system.
  • each telemedicine imaging apparatus 55 of the system 50' is in communication with a local control module 62 and control-processing means 59a (see FIG. 1).
  • the control-processing means 59a may be embodied as a local personal computer or local computing device that is specially programmed to carry out all of the functionality that is described herein in conjunction with the local control systems of the telemedicine system 50'.
  • each local control system also includes body tracking means 71 (e.g., eye tracking means 71) for measuring position(s) and movement(s) of a body portion of the patient (e.g., an eye of the patient).
  • the body tracking means 71 can be an integral component or feature of the telemedicine imaging apparatus 55 or a separate system or device.
  • each local control system also includes an image recognition sensor 73 for capturing images of a subject or patient that may be used to identify and/or verify the identity of the subject or patient.
  • the positive identification and verification of the identity of the subject or patient receiving treatment prevents mistakes wherein the wrong subject or patient is treated.
  • the image recognition capabilities of the image recognition sensor 73 may also be used to identify and verify that a particular surgical procedure is being performed on the proper body portion of a subject or patient (e.g., to verify that a laser coagulation procedure is being performed on the proper one of the patient's eyes or that a surgical procedure is being performed on the proper one of the patient's limbs).
  • the image recognition sensor 73 or imaging device may be provided as part of a dynamic image recognition system that is operatively coupled to the telemedicine imaging apparatus 55 located at the same or remote location.
  • the image recognition sensor 73 or imaging device may comprise a light field camera that is able to simultaneously record a two-dimensional image and metrically calibrated three-dimensional depth information of a scene in a single shot.
  • the image recognition sensor 73 or imaging device of each local control system may be operatively connected to the local computing device, which forms the control-processing means 59a of the local control system.
  • the local computing device may be specially programmed with image/pattern recognition software loaded thereon, and executed thereby for performing all of the functionality necessary to identify and verify a particular subject or patient, or to identify and verify a body portion of the particular subject or patient that is to be recognized.
  • the local computing device may be specially programmed to capture and store reference dynamic information regarding a body portion of the subject or patient so that the reference dynamic information may be compared to dynamic information pertaining to the same body portion captured at a later time (i.e., just prior to the performance of the surgical procedure).
  • dynamic information pertaining to the same body portion of the subject or patient is captured by the image sensor 73 (i.e., the light field camera) and the local computing device compares the subsequent dynamic information pertaining to the body portion to the reference dynamic information and determines if the subsequent dynamic information of the body portion matches or substantially matches the reference dynamic information.
  • this information may relate to the position or size of any part of the body, extremities, or organ (e.g., as seen on an X-ray film, MRI, CT-scan, or gait changes by walking or particular habit of head position, hand, facial changes during speech, or observation of changes due to emotional changes, or medication, or disease or trauma or neurological incidence, etc.).
  • when the local computing device determines that the subsequent dynamic information pertaining to the body portion of the subject or patient matches or substantially matches the reference dynamic information, the local computing device is specially programmed to generate a matched identity confirmation notification (or the trends of changes, etc.).
  • the matched identity confirmation notification may also be delivered to the technician or security officers at the local site via the local computing device. Then, after the other safety checks of the system 50' have been performed, the surgical procedure or planned procedure at the hospital is capable of being performed on the patient.
  • when the local computing device determines that the subsequent dynamic image recognition information regarding the body portion of the subject or patient does not match or substantially match the reference dynamic information, the local computing device is specially programmed to generate a non-matching identity notification that is sent to the remote computing device at the remote site in order to inform the attending physician that the patient or body portion of the patient has not been properly identified and verified.
  • the non-matching identity notification may also be delivered to the technician at the local site via the local computing device.
  • when the non-matching identity notification is sent to the attending physician, the local computing device also disables the surgical equipment at the local site in order to prevent the procedure from being performed on the incorrect patient or the incorrect body portion of the patient (e.g., in a laser coagulation procedure, laser firing will be automatically locked out by the local computing device); as another example, the person may be rejected for admission to a country (e.g., at customs) or to another place, or the cause of the non-matching may be evaluated, etc. A sketch of this decision flow follows.
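This is a minimal sketch under assumed names; the class and methods are illustrative, not the patent's API, and a real safety interlock would involve far more than this.

```python
# Minimal sketch: a matched identity releases the equipment interlock;
# a non-match notifies both sites and keeps the equipment locked out.
class IdentityInterlock:
    def __init__(self, notify_remote, notify_local):
        self.notify_remote = notify_remote   # callable(message) -> remote site
        self.notify_local = notify_local     # callable(message) -> local site
        self.equipment_enabled = False

    def report(self, matched: bool, patient_id: str):
        if matched:
            message = f"{patient_id}: identity confirmed"
            self.equipment_enabled = True    # other safety checks still apply
        else:
            message = f"{patient_id}: identity NOT verified"
            self.equipment_enabled = False   # e.g., laser firing locked out
        self.notify_remote(message)
        self.notify_local(message)

interlock = IdentityInterlock(print, print)
interlock.report(matched=False, patient_id="patient-001")
```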
  • each local control system may further include a voice recognition sensor 75 for capturing the speech sound waves generated by the subject or patient so that the speech of the subject or patient may additionally be used to identify and/or verify the identity of the subject or patient.
  • the voice recognition subsystem described herein is capable of precisely reproducing the frequency and amplitude of each spoken word, etc.
  • the speech waveforms captured by the present system may be transmitted over a long distance as a sound wave, and their characteristics may be analyzed or amplified over a given time that is adjustable by the system.
  • the voice recognition sensor 75 may be used in conjunction with the image recognition sensor 73 or imaging device described above to further verify that a surgical procedure is being performed on the correct subject, patient, or person.
  • the sound waves generated from the data acquired by the voice recognition sensor 75 may be superimposed on the displacement curve generated from the data acquired by the imaging device (e.g., the light field camera) so that both audial and visual attributes of the person may be taken into account for identification purposes.
  • the dynamic imaging system is used alone or simultaneously with voice overlay, thereby creating 2D or 3D images using multispectral light, which includes IR and mid IR, captured by cameras to measure time dependent changes for creation of a dynamic event made of voice and images or changes thereof.
  • a subtraction algorithm of the system is used to produce a clear subtracted wave/image complex from a person examined a first and second time, and to project the subtracted wave complex on the subtracted value of the person's image so as to evaluate a match or change, and present the changed values or their difference as compared to the sound waves received the first time by the voice recognition sensor 75.
  • the voice recognition sensor 75 may comprise a microphone that captures the speech of the subject or patient over the entire speech frequency range of a human being (e.g., for a frequency range from 50 Hz to 5,000 Hz to encompass the typical frequency range for both males and females).
  • the syntax and sound frequencies generated by the subject or patient are capable of being used by the local control system for verification and identification of the subject prior to a surgical procedure being performed on him or her, eliminating the unfortunate consequences of mistaking one patient for another.
  • the syntax and sound frequencies generated by the patient are also capable of being used by the local control system for verification and identification of the patient prior to a procedure being performed in a hospital, or for other uses, such as use by customs officials or other security agencies.
  • the voice recognition sensor 75 may be used as a second means of patient/ person identity confirmation in order to confirm the identity of the patient that was previously verified by the image recognition sensor 73 or imaging device.
  • the image recognition sensor 73 or imaging device may comprise a first stage of patient identity confirmation
  • the voice recognition sensor 75 may comprise a second stage of patient identity confirmation.
  • the voice recognition sensor 75 of each illustrative local control system may be operatively connected to the local computing device, which forms the control-processing means 59a of the local control system.
  • the local computing device may be specially programmed with voice recognition software loaded thereon, and executed thereby for performing all of the functionality necessary to identify and verify a particular subject or patient that is to be recognized.
  • the local computing device may be specially programmed to capture and store a first reference speech waveform of the subject or patient so that the first reference speech waveform may be compared to a second speech waveform of the same patient or subject captured at a later time (e.g., just prior to making a decision about a patient or a person).
  • the patient or subject may be asked to say a particular word, a plurality of words, or a series of vowels (i.e., AEIOU) that are captured by the voice recognition sensor 75 so that it can be used as the first reference speech waveform.
  • the second speech waveform of the subject or patient is captured by the voice sensor 75 (i.e., the microphone records the same word, plurality of words, or series of vowels repeated by the subject or patient) and the local computing device compares the second speech waveform of the patient or subject to the first reference speech waveform and determines if the second speech waveform of the subject or patient matches or substantially matches the first reference speech waveform (i.e., by comparing the frequency content of the first and second speech sound waves).
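A minimal sketch of this frequency-content comparison follows, using magnitude spectra restricted to the 50 Hz to 5,000 Hz speech band mentioned above; the cosine-similarity threshold is an assumed value.

```python
# Minimal sketch: compare the band-limited magnitude spectra of the
# reference utterance and a later utterance of the same word(s).
import numpy as np

def band_spectrum(audio, sample_rate=16000, lo=50.0, hi=5000.0):
    """Unit-norm magnitude spectrum restricted to the human speech band."""
    spec = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    band = spec[(freqs >= lo) & (freqs <= hi)]
    return band / (np.linalg.norm(band) + 1e-12)

def speech_matches(reference, candidate, threshold=0.85):
    """Cosine similarity of the two spectra, compared over equal lengths."""
    n = min(len(reference), len(candidate))
    sim = float(np.dot(band_spectrum(reference[:n]),
                       band_spectrum(candidate[:n])))
    return sim >= threshold
```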
  • when the local computing device determines that the second speech waveform of the subject or patient matches or substantially matches the first reference speech waveform, the local computing device is specially programmed to generate a matched speech confirmation notification that is sent to the remote computing device at the remote site in order to inform the attending physician that the proper patient has been identified and verified.
  • the matched speech confirmation notification, or its discrepancies, may also be delivered to the technician at the local site via the local computing device. Then, after the other safety checks of the system 50' have been performed, a surgical procedure is capable of being performed on the patient, or the identity of a person can be confirmed in various security-related circumstances.
  • when the local computing device determines that the second speech waveform of the subject or patient does not match or substantially match the first reference speech waveform, the local computing device is specially programmed to generate a non-matching speech notification that is sent to the remote computing device at the remote site in order to inform the appropriate party (e.g., a physician, an authorized person, a security department, etc.) that the patient/person has not been properly identified and verified.
  • the non-matching speech/image notification may also be delivered to the authorized person, technician, etc. at the local site via the local computing device.
  • when the non-matching speech notification is sent to the authorized person, the local computing device also disables the surgical equipment at the local site in order to prevent the procedure, or a medical procedure or instruction, from being performed or completed on the incorrect patient/person (e.g., in a laser coagulation procedure, laser firing will be automatically locked out by the local computing device), or the entrance of the person into a secure location will be prevented.
  • the voice changes representing various emotional states of the patient may be recorded for the diagnosis of excessive stimuli, such as emotion, pain, or satisfaction, or to determine whether a person is under the influence of a substance, etc.
  • the local computing device is further programmed to ask the patient/person a question so that the patient/person is able to respond to the question posed to him or her using natural language. That way, the system is not only able to analyze the physical surface attributes of the patient, but also to analyze the sound of the patient's voice (i.e., by voice recognition recording), and to communicate simultaneously with accessible existing data in a computer database to verify the identity of the patient.
  • the dynamic image/voice recognition involves a tacit collaboration, or consent, of the patient or the person being identified, in contrast to existing facial recognition systems that are not dynamic, whose data is very limited and does not imply a person's consent.
  • the telemedicine system 50' of the illustrative embodiment also includes a central control system 58 at a remote site having a command computer 59b that is operatively connected to a remote control module 64.
  • the central control system 58 at the remote site, which includes the command computer 59b, is operatively connected to the plurality of local control systems disposed at local sites 51a', 51b', 51c' via a computer network that uses the Internet 60.
  • the time and the information obtained from the patient/person are recorded and stored for future recall.
  • the information will be recorded so as to be recognized by the system the next time the patient comes for reexamination.
  • the system can access other computers, searching for similar cases and photos of a surface lesion or marks for a patient (e.g., a photograph, X-ray, CT-scan, MRI, PET-scan, etc. of a lesion).
  • the tele-image recognition system may also be used to access existing data in the published literature, such as via an artificial intelligence (AI) or augmented intelligence system (e.g., IBM Watson), to assist the doctor with new information, images, therapies, medications used, predicting the outcome, etc., or to assist a security agency with the person's recognition, or with lesion/symptom recognition, for security reasons, diagnosis, and further therapy recommendation for the patient.
  • the computer system functions as an informational unit augmenting the knowledge of the doctor and assists in presenting him or her with similar recorded cases to assist in a better telemedicine diagnosis and management.
  • the system assists the authorized security agent with the recognition of the person or similarity of the existing history related to a subject.
  • the system may augment recognition of the patient/person by using additional information from a fingerprint, etc.
  • information regarding the tip of the patient's finger may be recorded (e.g., the blood circulation or thermal image in the finger differentiating a person from a robot, as well as images of the ridges and minutiae, etc. of the fingerprint).
  • the touching of a removable surface permits obtaining DNA if a crime is involved, or spectroscopy and the facial "fingerprint-wrinkles" of the person to be used for identifying the person/patient in conjunction with the dynamic facial recognition of the person, in a two-dimensional and/or three-dimensional manner, using multiple cameras 142 (e.g., two to four cameras) to observe the object (e.g., a finger 138 on transparent surface 140) over 360 degrees in 3-D (see FIG. 4a).
  • the system advantageously provides augmented dynamic identification of the person/patient.
  • the system described herein is used in artificial intelligence or augmented intelligence to see and predict the trend of an action for a vehicle, manufacturing, etc.
  • the system is used to recognize a pilot who is flying a commercial plane or other uses of an airplane, or operating a train, driverless car, ship, drone, etc.
  • the system may also be used to recognize personnel in the security department or military, or to recognize a doctor or patient in the operating room.
  • the system works as an augmented intelligence assistant so as to assist a person in making a proper decision (e.g., proper treatment of a cancer patient).
  • the system can accept or reject a customer, such as in remote banking and/or in an ATM system.
  • the information obtained by the system is encrypted and transmitted so as to make it virtually impossible to be hacked by a third party.
  • the system can differentiate between a human person and a robot through its multispectral or hyperspectral camera analysis, recording the body's temperature versus the relatively cold body of a robot or a photograph, and the reflective properties of the body surface, etc., as well as the variation of the temperature induced unknowingly by dynamic displacement of the skin folds or wrinkles, which depends on the induced dynamic changes and is analyzed instantaneously to differentiate a living body from a non-living object or robot. A sketch of such a liveness test follows.
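In the sketch, the temperature band and variation threshold are assumed values: a thermal face crop must be near body temperature and must vary frame to frame as the skin folds move, whereas a robot, mask, or photograph stays cold and static.

```python
# Minimal sketch: differentiate a living subject from a robot or photo
# using mean IR-band temperature plus its frame-to-frame variation.
import numpy as np

def is_living_subject(thermal_frames, temp_lo=30.0, temp_hi=38.0,
                      min_variation=0.15):
    """thermal_frames: (n_frames, H, W) array of temperatures in deg C."""
    mean_temp = float(thermal_frames.mean())
    # Temporal variation induced by dynamic displacement of skin folds:
    per_frame = thermal_frames.reshape(len(thermal_frames), -1).mean(axis=1)
    variation = float(per_frame.std())
    return temp_lo <= mean_temp <= temp_hi and variation >= min_variation
```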
  • the image recognition sensor 73 or imaging device at the remote laser delivery site may be in the form of a digital light field photography (DLFP) camera with microlenses that capture the information about the direction of the incoming light rays and a photosensor array that is disposed behind the microlenses.
  • the light field camera may be an integral part of the aforedescribed tele-image recognition system or may alternatively be provided as part of an independent dynamic imaging system.
  • the light field digital camera or digital light field photography (DIFLFP) camera comprises one or more fixed optical element(s) as the objective lens providing a fixed field of view for the camera.
  • a series of microlenses is located at the focal point of the objective lens, in a flat plane perpendicular to the axial rays of the objective lens. These microlenses separate the incoming rays of light entering the camera into individual small bundles. The individual small bundles of light fall on a series of light-sensitive sensors, measuring in the hundreds of megapixels, which are located behind the plane of the microlenses, thereby converting the light energy into electrical signals.
  • the electronically generated signals convey information regarding the direction of each light ray, view, and the intensity of each light ray to a processor or a computer.
  • Each microlens has a view and perspective that partially overlap those of the next one, which can be retraced by an algorithm.
  • Appropriate software and algorithms reconstruct computer-generated 2-D/3-D images of objects that are not only in focus, but also located in front of or behind the object being photographed, from 0.1 mm from the lens surface to infinity, by retracing the rays; the software and algorithm modify or magnify the image as desired, while electronically eliminating image aberrations, reflections, etc. A sketch of the underlying refocusing idea follows.
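The classic shift-and-add method below synthetically refocuses a 4-D light field by shifting each sub-aperture view in proportion to its position and averaging; it illustrates the ray-retracing idea only and is not the specific algorithm of the disclosure.

```python
# Minimal sketch: synthetic refocusing of a light field by shift-and-add.
import numpy as np

def refocus(light_field, alpha):
    """light_field: (U, V, H, W) array of sub-aperture views.
    alpha: refocus parameter (0 keeps the original focal plane)."""
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))   # shift grows with the
            dv = int(round(alpha * (v - V // 2)))   # distance from center
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)   # averaging brings the chosen depth into focus
```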
  • the light-sensitive sensors behind the lenslets of the camera record the incoming light and forward it as electrical signals to the camera's processor, acting as an on/off switch for the camera's processor, which measures the intensity of the light through its neuronal network and its algorithm to record changes in light intensity, while recording any motion or dynamic displacement of an object, or part of an object, in front of the camera within a nanosecond to a microsecond.
  • the processor of the camera, with its neuronal network algorithm, processes the images as the retina and brain in a human being function: by finding the pattern in the data, the dynamic changes of the image, and its trend over a very short period of time (e.g., a nanosecond).
  • the information is stored in the memory system of the camera's processor, as known in the art, in a memory resistor (memristor) relating electric charge and magnetic flux linkage; it can be retrieved immediately or later, further analyzed by the known mathematical algorithms of the camera, and used for many different applications in addition to 2-D/3-D dynamic image recognition of near or remote subjects, as incorporated in the remote telemedicine system described above.
  • the light field digital camera described herein has other independent applications, such as in artificial intelligence, smartphones, self-driving vehicles (e.g., self-driving cars), drones, crowdsourcing and surveillance, use by security agencies or in home security systems, use by customs officials in airports, or in other applications where large numbers of people gather, such as sporting events, movie theaters, and sports stadiums.
  • the light field camera may have either a tunable lens or a fluidic lens that will be described hereinafter.
  • the tunable lens may be in the form of a shape-changing polymer lens (e.g., an Optotune® lens), a liquid crystal lens, or an electrically tunable lens (e.g., one using electrowetting, such as a Varioptic® lens).
  • the fluidic lens described hereinafter may be used in the light field camera.
  • the digital in-focus light field photography (DIFLFP) camera provides a variable field of view and variable focal points from the objective tunable lens, within one second to a millisecond, from an object located just in front of the objective lens to infinity, as the light rays pass through a microlens array in the back of the camera and a layer of sensors made of light-sensitive quantum dots, which, along with the microlens layer, create a concave structure.
  • the lens generates more light and signal information from the variable focal points of the flexible fluidic lens, which are capable of being used by a software algorithm of a processor so as to produce 2-, 3-, or 4-D images in real time or video.
  • the generated images reduce the loss of light that occurs when refocusing the rays in standard light field cameras; instead, the direction and intensity of the light rays are used to obtain a sharp image at any distance from the lens surface to infinity, produced in one cycle of electronically changing the focal point of a tunable or hybrid fluidic objective lens of the camera, or of simultaneously using a microfluidic pump to drive the lens from maximum convexity to its least and back.
  • the fluidic lens is dynamic because the plane of the image inside the camera moves forward or backward with each electric pulse applied to the piezoelectric or microfluidic pump motor, which transmits a wave of fluid flow into, or aspirates the fluid from, the lens cavity so that the membrane returns to its original position, thereby creating a more or less convex lens, or a minus lens when the back side has a glass plate with a concave shape.
  • the lens of the light field camera is only a flexible transparent membrane that covers the opening of the camera's cavity, into which fluid or air is injected or from which it is removed so as to create a convex or concave surface; a simple piezoelectric attachment can push the wall of the camera locally inward or outward, thereby forcing the transparent membrane, which acts like a lens, to become convex or concave and changing the focal point from a few millimeters (mm) to infinity and back, while all data points are recorded and analyzed by the software.
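To make the membrane-as-lens behavior concrete, a minimal sketch follows that approximates the optical power of such a membrane under the assumption that the bulge forms a spherical cap and behaves as a thin plano-convex (or plano-concave) lens; the function name and fluid index are illustrative.

```python
def membrane_lens_power(aperture_mm, sag_mm, n_fluid=1.33):
    """Approximate power (diopters) of a fluid-filled membrane lens,
    assuming the bulged membrane forms a spherical cap.

    aperture_mm: diameter of the membrane opening.
    sag_mm:      bulge height at the center (negative for the
                 concave, fluid-withdrawn state).
    n_fluid:     refractive index of the fill fluid (water ~1.33).
    """
    if abs(sag_mm) < 1e-9:
        return 0.0  # flat membrane: no optical power
    a = aperture_mm / 2.0
    # Radius of curvature of a spherical cap: R = (a^2 + s^2) / (2 s)
    R_mm = (a * a + sag_mm * sag_mm) / (2.0 * sag_mm)
    # Thin plano-convex lens: P = (n - 1) / R, with R in meters
    return (n_fluid - 1.0) / (R_mm / 1000.0)

# Illustrative numbers: a 6 mm aperture bulged by +0.45 mm gives
# roughly +32 D, while a -0.05 mm sag gives about -3.7 D -- on the
# order of the -3.00 to +30.00 D range quoted below for the lens.
print(membrane_lens_power(6.0, 0.45))
print(membrane_lens_power(6.0, -0.05))
```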
  • the light rays entering the camera pass through the microlenses located in the back of the camera directly to the sensors made of nanoparticles, such as quantum dots (QDs) made of graphene, etc.
  • the camera obtains a subset of signals from the right or left side of the microlens and sensor array separately to reconstruct the 3-D image from the information.
  • the light rays focused either anterior or posterior to the focal plane of the microlens/sensor plane are converted to electrical signals, which are transmitted to the camera's processor with the software algorithm loaded thereon, so that the images may be displayed as static 2-D or 3-D multispectral or hyperspectral images, or so that a tomographic image or a video of a movable object may be created.
  • the right or left portions of the sensors are capable of displaying from a focal point located slightly anterior or posterior to the microlens, thereby providing more depth to the image without losing the light intensity of the camera, as happens with the standard light field camera having a static objective lens or a static membrane, which is entirely dependent on producing a virtual image obtained from a fixed focal point.
  • a prismatic lens may be disposed between the microlens array and the sensors so that individual wavelengths may be separated to produce color photography or multispectral images including the infrared or near infrared images.
  • the process of focusing and defocusing collects more light rays that may be used to create 2D or 3D or 4D images.
  • the fluidic lens can change its surface by injecting and withdrawing the fluid from the lens and returning to its original shape in a time range of one second to less than a millisecond, thereby allowing the light rays to be recorded that pass through a single row or multiple rows of microlenses before reaching the sensor layer of quantum dots or monolayer of graphene or any semiconductor nanoparticles that absorb the light energy and convert it to an electrical signal.
  • the flexible transparent membrane can change its surface by injecting and withdrawing the fluid/air from the camera's cavity and returning to its original shape in a time range of one second to less than a millisecond, thereby allowing the light rays to be recorded that pass through a single row or multiple rows of microlenses before reaching the sensor layer of quantum dots, or a monolayer of graphene, or any semiconductor nanoparticles that absorb the light energy and convert it to an electrical signal.
  • the field of view of the lens is expanded and returns to its original position upon the lens's relaxation.
  • the light rays that have entered the system pass through a series of microlenses, which project the rays onto a layer of photosensors; the stimulated photosensors create an electrical current traveling to a processor or computer with a software algorithm loaded thereon to analyze and create a digital image of the outside world.
  • the microlens array of the fluidic lens may include a pluggable adaptor.
  • the microlenses and the layer of sensors extend outward so as to create a concave structure inside the camera, thereby permitting the incoming light rays of the peripheral field of view to be projected onto the peripherally located microlenses and sensors of the camera so as to be absorbed and transferred to the processor with the algorithm loaded thereon, which mathematically analyzes, manipulates, and records the light data so as to provide a combination of signals showing the direction from which the rays emanated.
  • the microlens array is in the form of a graded-index (GRIN) lens array so as to provide excellent resolution.
  • the microlens array is separated from another, smaller nanosized lens array attached to a filter, followed by the sensors, to differentiate the color wavelengths.
  • the deformable objective lens, by changing its refractive power, its field of view, and its focus, not only transmits significantly more information to the computer in a one-millisecond cycle than a single static lens or a simple lensless membrane with compressive sensing (without microlenses) is capable of doing, but also retains, across its unlimited focal points, sufficient signal data to be easily reproduced or refocused, instantaneously or later, by the camera's software algorithms so as to create sharp images in 2, 3, or 4 dimensions.
  • the exposure time can be prolonged or shortened, as needed, by repeating the recording cycle from less than one hertz to more than 30 hertz, up to thousands of hertz or more, enough for cinematography; the light rays pass through unlimited focal points of the lens, back and forth from the sensors to the back of the lens, covering distances from a few mm to infinity, achieving fast, sharp images by retracing and mathematical reconstruction, as compared to a photo taken with a camera having a solid, fixed objective lens.
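A hedged sketch of such a repeated recording cycle follows; set_lens_sag and capture_frame are hypothetical hardware hooks standing in for the piezoelectric/micro-pump driver and the sensor readout, since no concrete API is specified here.

```python
import time

def sweep_and_record(set_lens_sag, capture_frame, cycle_hz=30.0,
                     steps_per_cycle=16, cycles=3,
                     sag_min_mm=-0.05, sag_max_mm=0.45):
    """Drive focal sweeps of a tunable/fluidic lens while recording a
    frame at each focal position.

    set_lens_sag, capture_frame: caller-supplied callables (assumed
    hardware hooks); cycle_hz sets how fast the lens runs from least
    to maximum convexity and back.
    """
    frames = []
    dwell = 1.0 / (cycle_hz * steps_per_cycle)
    for _ in range(cycles):
        # forward sweep (toward maximum convexity), then backward
        path = list(range(steps_per_cycle)) + \
               list(range(steps_per_cycle, -1, -1))
        for i in path:
            sag = sag_min_mm + (sag_max_mm - sag_min_mm) * i / steps_per_cycle
            set_lens_sag(sag)                 # move the membrane
            frames.append((sag, capture_frame()))
            time.sleep(dwell)                 # hold for this exposure slice
    return frames
```

Raising cycle_hz shortens the effective exposure per focal position; repeating cycles accumulates light for low-light work, mirroring the prolong/shorten trade-off described above.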
  • the signals also can be analyzed by the algorithm of the computer located outside the camera for any object that is photographed at any given distance.
  • the camera's processor or a computer can retrace the rays in any direction, thereby simultaneously eliminating refractive aberrations or motion blur while the light is focused over any distance before or beyond the focal point of the lens using the computer software.
  • the fluidic light field camera provides an immense amount of data during the short period of time in which the lens membrane is displaced by pumping fluid into the system and withdrawing it; the forward and backward movement creates three-dimensional images with depth of focus, which are easily recreated without sacrificing the resolution of the image or needing "focus bracketing" to extend the refocusable range by capturing 3 or 5 consecutive images at different depths, as is done in standard light field cameras, and with the complete parameterization of light in space as a virtual hologram.
  • the power of the lens varies from −3.00 to +30.00 diopters depending on the amount of fluid either injected into or withdrawn from the fluidic lens by a micro-pump, with an aperture of 2 to 10 millimeters (mm) or more.
  • the objective lens is a liquid or tunable lens, such as an electrically and mechanically tunable lens controlling the focal length of the lens.
  • the tunable lens is a liquid crystal, and molecules of the liquid crystal are capable of being rearranged using an electric voltage signal.
  • the digital light field photography (DIFLFP) camera utilizes a hybrid lens, as described in Applicant's U.S. Pat. No. 9,671,607, which is incorporated by reference herein in its entirety.
  • in a hybrid lens, the increase or decrease of the fluid in the fluidic lens chamber occurs electronically with either a servo motor or a piezoelectric system for a rapid response.
  • the DIFLFP camera system obtains image and depth information at the same time.
  • the increase or decrease of the fluid in the fluidic lens is done at a high frequency, changing the focal plane of the fluidic lens during the time in which millions or billions of light rays are sensed and recorded for analysis.
  • the rays of light are collected from a wide concave surface of the sensor arrays located behind hundreds of thousands of microlenses that curve up in the back of the camera during the change in the focal point of the fluidic lens, which also creates a wider field of view, producing millions to billions of electronic pulses from which sharp, wide-field images or videos are reconstructed by the specially programmed computer in a 2-, 3-, or 4-dimensional manner from objects at any desired distance in the field of view, without losing the sharpness of the image.
  • the DIFLFP camera captures light from a wider field, increasing or decreasing the field of view, unlike fixed objective lenses or compressive cameras with their aperture assemblies.
  • the objective lens is a composite of a fluidic lens and a solid lens, a diffractive lens, or a liquid crystal coating with electronic control of its refractive power.
  • the microlenses are replaced with transparent photosensors where the sensors directly communicate with the processor and software algorithm to build desired images.
  • the solid lens is located behind the flexible membrane of the fluidic lens or inside the fluidic lens providing a wider field of view and higher magnification.
  • the additional lens can be a convex or a concave lens to build a Galilean or astronomical telescope.
  • the lens is replaced with a flexible membrane that is capable of moving forward or backward and that has on its surface a two-dimensional aperture assembly, providing a wider field of view than standard lensless light field cameras when the lens becomes more convex and pushes the membrane's surface forward.
  • the objective lens of the light field camera is only a transparent flexible membrane supported by the outer housing of the camera's cavity, or housing defining the camera's chamber which receives a fluid therein (e.g., air or another gas) through a cannula.
  • when fluid is injected, the flexible transparent membrane bulges out, acting as a convex lens; as fluid is withdrawn, the membrane becomes a flat transparent surface, then assumes a concave shape and acts as a minus lens as the light passes through it to reach the lenslets and the sensors in the back of the fluidic field camera, which are connected to a processor.
  • the objective lens of the light field camera may use a compressible polymer, such as silicone, etc., that changes its surface curvature based on the physical pressure applied to the lens.
  • a simple flexible transparent membrane acts like a lens, in that its surface convexity or concavity can be controlled.
  • the microlenses are 3-D printed to a lens structure of less than 1 micrometer in diameter, or are nanolenses of less than 10 nanometers (nm).
  • the microlenses are 3-D printed from silicone, or any other transparent polymer.
  • the sensors are 3-D printed and placed in the camera.
  • the camera wall is 3-D printed.
  • the two-dimensional microlens plane ends slightly forward, forming a concave plane to capture more light from the peripheral surface areas of the liquid objective lens as it moves forward and backward.
  • the plane of the sensor array follows the curvature of the forwardly disposed microlens plane for building a concave structure (refer to FIGS. 3 and 4).
  • the light sensors obtain information on the direction and light intensity from a wide field of view.
  • the sensors provide electronic pulse information to a processor or a computer, equipped with a software algorithm to produce desired sharp monochromatic or color 2-4 D images.
  • the computer is powerful enough to obtain millions or billions of bits of information, having a software algorithm to provide images of any object located in the field of view, before or behind a photographed object, ranging from a very short distance from the objective lens surface to infinity.
  • the computer and its software algorithm are capable of producing 2-, 3-, or 4-dimensional sharp images, with the desired magnification and in color form, for any object located in front of the camera.
  • the camera can provide an instant video in a 2-3 D image projected on an LCD monitor located in the back of the camera.
  • the photos or videos captured using the camera are sent electronically via the internet to another computer using a graphics processing unit (GPU), a programmable logic chip or field-programmable gate arrays (FPGAs), a Very Long Instruction Word (VLIW) processor, or a digital signal processor (DSP), etc.
  • time-related images can be presented in the fourth dimension with real-time high-speed processing.
  • the DIFLFP camera is used for visualization of live surgery, which can be projected in 3-4 D using the fluidic lens light field camera in the operating microscope and simultaneously projected back onto the ocular lenses of the operating microscope; or used in robotic surgery of the brain, heart, prostate, knee, or any other organ; in electronic endoscope systems; in 3D marking in laser processing systems; in barcode scanning; in automated inspection with a distance sensor; in neuroscience research; in documenting the nerves; in retinal photography, where the eye cannot be exposed to the light for a long time or when a long exposure time is needed in low-light photography; or for variable spot size in light-emitting diode (LED) lighting.
  • the DIFLFP camera has a distance sensor controlling the initial start of the image focused on a certain object in the DIFLFP field of view, can be used in macro- or microphotography, and has a liquid crystal display (LCD) touch screen.
  • the wavefront phase and the distance from the object are calculated by the software, which measures the degree of focusing required for two rays to converge.
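One conventional way to realize this "degree of focusing" measurement is a focus-measure search over the frames of a focal sweep; the sketch below uses the Laplacian response as the sharpness criterion, which is an assumption of the example rather than the stated method.

```python
import numpy as np
from scipy.ndimage import laplace

def best_focus_depth(frames):
    """Estimate, per pixel, which focal setting rendered it sharpest.

    frames: list of (lens_sag_mm, 2-D image) pairs from one sweep,
            e.g. the output of a sweep_and_record() call.
    Returns a map of the winning lens setting for each pixel, which
    stands in for the distance/phase estimate described above.
    """
    sags = np.array([s for s, _ in frames])
    # Sharpness measure: squared Laplacian response (high in focus)
    sharpness = np.stack([laplace(img.astype(float)) ** 2
                          for _, img in frames])
    return sags[np.argmax(sharpness, axis=0)]
```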
  • the DIFLFP camera is used for the creation of augmented reality and virtual reality.
  • the DIFLFP camera is used with additional lenses in tomographic wavefront sensors, measuring amplitude and phase of the electromagnetic field.
  • the DIFLFP camera can generate stereo-images for both eyes of the user to see objects stereoscopically.
  • the DIFLFP camera is equipped with an auto sensor to focus on a moving object, such as in sport activities or in dynamic facial recognition.
  • the dynamic imaging system may use the computer to verify or identify various changes that happen during the change in physiological function of a person's facial expression (e.g., smiling or frowning): a computer-generated digital 2D or 3D image or video frame records the dynamic changes of a structure, such as the face, mouth, eyes, etc., and the computer analyzes and compares the biometrics, as a dynamic physiological fingerprint, with existing data of the same image.
  • the computer algorithm analyzes the changes in the relative position of a patient's face, matching points and directions, and compressing the data obtained during the process using dynamic recognition algorithms.
  • One exemplary technique employed is the statistical comparison of the first obtained values with the second values in order to examine the variances, using a number of means, including multi-spectral light that reveals the various physiological changes of the face.
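A minimal sketch of such a first-versus-second statistical comparison follows, assuming repeated baseline measurements and a z-score criterion; the threshold value is illustrative.

```python
import numpy as np

def variance_test(baseline, followup, threshold=3.0):
    """Compare a follow-up feature vector against repeated baseline
    measurements, flagging features that moved by more than
    `threshold` standard deviations.

    baseline: array (n_sessions, n_features) of enrollment values.
    followup: array (n_features,) measured at verification time.
    Returns the per-feature z-scores and a per-feature pass mask.
    """
    mu = baseline.mean(axis=0)
    sigma = baseline.std(axis=0) + 1e-9   # guard against zero variance
    z = np.abs(followup - mu) / sigma
    return z, z < threshold
```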
  • Mathematical patterns of the digital images and statistical algorithms are capable of demonstrating that the images obtained initially and subsequently belong to the same person, etc.
  • one can use a sophisticated system sensing visible and infrared light; one or more sensors can be placed in a CMOS chip capturing various spectra of light.
  • a dynamic analysis of the growth of a tumor may be performed using the dynamic imaging system described herein (e.g., by analyzing the increase in the surface area or the volume of the tumor). That is, using the system described herein, the volumetric changes of a tumor or lesion are capable of being measured over a time period by the software subtraction algorithms of the system, as explained below, and then transmitted to the treating physician.
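As one hedged illustration of that volumetric subtraction, the sketch below operates on two co-registered boolean segmentation masks; the mask inputs and voxel size are assumptions of the example.

```python
import numpy as np

def tumor_change(mask_t0, mask_t1, voxel_volume_mm3=1.0):
    """Volumetric change of a lesion between two co-registered scans.

    mask_t0, mask_t1: boolean 3-D segmentation masks of the lesion
                      at baseline and follow-up.
    Returns both volumes, the subtracted (new-growth) region, and
    the percent change for trend reporting to the physician.
    """
    v0 = mask_t0.sum() * voxel_volume_mm3
    v1 = mask_t1.sum() * voxel_volume_mm3
    growth = mask_t1 & ~mask_t0          # voxels present only at follow-up
    pct = 100.0 * (v1 - v0) / max(v0, 1e-9)
    return v0, v1, growth, pct
```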
  • the dynamic changes in a portion of a patient's body may be compared with existing data for diagnosis or differential diagnosis so as to track and analyze trends in disease progression or disease improvement against baseline data for management of diseases.
  • the dynamic image recognition system described herein may be configured to track changes in a disease process (e.g., diabetic retinopathy or another retinal disease; disease of the brain, spinal cord vertebrae, prostate, uterus, ovaries, intestine, stomach, extremities, lung, heart, or skin; eczema; breast cancer; a tumor in the body; etc.) over a period of time so that the disease process is capable of being monitored by a physician, where the image may be obtained using a standard imaging system (e.g., photographs, X-ray images, CT-scans, retinal images obtained by the use of a fundus camera, OCT, etc.). Also, follow-up images may be acquired using X-ray, CT-scan, positron, MRI, ultrasound, or photoacoustic imaging, etc.
  • FIG. 5a illustrates the face of a person 150 with folds 152 on the facial area that are being analyzed using the dynamic imaging system described herein.
  • FIG. 5a further depicts distance(s) 154 between facial features that are being used to track dynamic changes in facial expressions of the person 150 (e.g., between the corners of the mouth).
  • FIG. 5b illustrates the face of the person 150 while the person 150 is smiling, and dynamic changes in the facial features of the person 150 are being analyzed as well as the trends in those features.
  • the teeth 155 of the smiling person 150 in FIG. 5b provide new data for analysis.
  • FIG. 5b also illustrates that the tracked distance(s) 157 (e.g., between the corners of the mouth) have increased from the non-smiling pose of FIG. 5a to the smiling pose of FIG. 5b, and that angle "x" has increased to angle "y".
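The distance and angle trends of FIGS. 5a-5b can be computed from tracked landmark points; the sketch below is one plausible formulation, with landmark names chosen purely for the example.

```python
import numpy as np

def mouth_metrics(left_corner, right_corner, upper_lip):
    """Distance between mouth corners and the corner-lip angle for one
    frame of landmarks (each a 2-D point in pixels).

    Tracking these values across neutral/smiling/frowning frames gives
    the distance trend and the x -> y angle change of FIG. 5b.
    """
    left, right, lip = map(np.asarray, (left_corner, right_corner, upper_lip))
    width = np.linalg.norm(right - left)          # corner-to-corner distance
    a, b = lip - left, right - left
    cos_angle = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return width, angle_deg
```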
  • FIG. 5c illustrates the face of the person 150 while the person 150 is frowning, and dynamic changes in the facial features of the person 150 are being analyzed as well as the trends in those features. The frowning of the person 150 in FIG. 5c results in enhanced folds 156.
  • FIG. 5d illustrates a first subtracted image of the human face of the person 150, wherein the subtracted image further allows dynamic changes in the mouth 158 of the person 150 and the trends in those changes to be analyzed.
  • FIG. 5e illustrates a second subtracted image of the human face of the person 150, wherein the subtracted image depicts enhanced folds 160 on the facial area of the person 150.
  • FIGS. 6a and 6b depict images of a tumor 162 being analyzed with the dynamic imaging system described herein.
  • FIG. 6a illustrates the direction 164 and growth of a tumor 162 over time.
  • In FIG. 6a, it can be seen that the image of the tumor 162 contains superpixels 166.
  • FIG. 6b illustrates a subtracted image of the tumor 162.
  • the sound frequency of the person's voice is simultaneously analyzed, and the sound frequency is correlated with the dynamic changes occurring in the person's face (e.g., by overlaying the sound frequency curve on the facial displacement curve).
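One simple way to perform that overlay quantitatively is normalized cross-correlation of the two curves; the sketch below assumes both signals have already been resampled to a common rate.

```python
import numpy as np

def voice_face_correlation(voice_env, face_disp):
    """Normalized cross-correlation between a voice-amplitude envelope
    and a facial-displacement curve sampled at the same rate.

    A strong peak near zero lag supports the claim that the recorded
    voice and the observed facial motion come from the same live person.
    Returns the best-matching lag (in samples) and its correlation.
    """
    v = (voice_env - voice_env.mean()) / (voice_env.std() + 1e-9)
    f = (face_disp - face_disp.mean()) / (face_disp.std() + 1e-9)
    xc = np.correlate(v, f, mode="full") / len(v)
    lags = np.arange(-len(v) + 1, len(v))
    return lags[np.argmax(xc)], xc.max()
```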
  • the multispectral camera may be used to obtain photos either in the visible spectrum or in the infrared to low-infrared light spectrum, working as a thermographic camera that sees deep inside the skin to recognize the status of the circulation under the skin.
  • the infrared pattern recognition capabilities of the system record physiological functional changes occurring under the skin (such as an increase or decrease in the circulation due to sympathetic activation/deactivation) together with dynamic changes, which are not achievable with a camera having only visible light capabilities.
  • the visible spectrum provides information from the surface structure during the dynamic facial changes caused by deliberate activation of the facial muscles, which produces skin grooves around the mouth and the eyes, demonstrating the physical aspects of changes in a person being photographed in a two-dimensional and/or three-dimensional manner.
  • the computer software of the system analyzes both of the aforedescribed facial changes and presents them as independent values that can be superimposed mathematically by the computer's software, creating subtraction data that indicates the changes that have occurred and serves as the initial face interrogation data.
  • This algorithm may be used subsequently for recognition of a face, a body, or a tumor located on the surface of the skin or inside the body, imaged to recognize the extent of the changed values in a two- or three-dimensional format.
  • the dynamic imaging system described herein may also be used along with standard imaging systems, such as X-ray, CT-scan, MRI, positron, or OCT imaging, to record changes occurring in the images over time.
  • the subtraction algorithm executed by the computer presents only the subtracted image and its mathematical weight or value, and compares it with the previously obtained subtracted image of the same structure or face to verify the identity of the person, to compare the initial values and the extent of the changes between the first and second captured images, and to follow the trend of the changes that have occurred after displacement of the surface or of a structure of interest, such as a tumor's dimensions over time and its growth trend.
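A stripped-down sketch of this subtraction-and-weight comparison follows; reducing the difference map to a single summed magnitude and using a relative tolerance are simplifications made for the example, not stated criteria of the system.

```python
import numpy as np

def subtracted_signature(img_neutral, img_dynamic):
    """Subtract the neutral pose from the dynamic pose and reduce the
    result to a difference map plus a scalar 'mathematical weight'."""
    diff = img_dynamic.astype(float) - img_neutral.astype(float)
    weight = np.abs(diff).sum()           # total displaced energy
    return diff, weight

def matches_enrollment(weight_now, weight_enrolled, tolerance=0.15):
    """Verify that today's subtraction weight is within a relative
    tolerance of the weight stored at enrollment."""
    return abs(weight_now - weight_enrolled) <= tolerance * weight_enrolled
```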
  • a conventional existing mathematical algorithm is not used to compare two static images and conclude the possibility or probability of them being the same; rather, the computer of the present system is specially programmed to compare two sets of dynamic images, composed of one static and one dynamic, which add significantly more information through the dynamic changes, in a two-dimensional and/or three-dimensional manner, that have occurred as a result of displacements of various points (e.g., in the face of the person), and through the trend of the changes obtained by computer subtraction of the dynamic changes. To this are added two significant values that augment the weight of the computer-generated image obtained by superimposition of the obtained data, or of the pre- and post-dynamic images, for subtraction analysis: first, the voice or sound waves recorded simultaneously or superimposed over the dynamic facial values; and second, confirmation of the data with dynamic fingerprinting, which has two additional components of fingerprint analysis and multispectral photography before and after pressing the finger over the transparent glass, collapsing the capillaries of the hand or the finger, thereby providing practically a complementary algorithm for nearly infallible identity recognition.
  • a global positioning system (GPS) is used in transferring data remotely to recognize the time and the location from which the image or voice is transmitted, for example, to a doctor's office or a security organization, smartphone, personal computer, etc., and analyzed in real-time by the software of the unit against the preexisting data to verify the identity of the person involved.
  • the technology described herein demonstrates a way to subtract the information of a dynamic change mathematically from dynamic recognition data of not only the face, but also the extremities, a moving person, or the variation of the facial folds measured by a camera.
  • the electronically obtained images are combined with CMOS image sensors (e.g., analyzing a subject's fingerprint can give information on the blood flow of the fingertip before or after applying pressure with the finger, which collapses the finger's skin capillaries, and the changes may be analyzed in real-time).
  • FIG. 3b depicts the fingertip or ball of the finger 136 with its circulations, ridges, and minutiae, which are able to be imaged using the camera 134 for highly reliable identification of the person.
  • the infrared spectrum of the camera 134 is able to record the warm circulation of blood through the fingertip or ball of the finger 136.
  • FIG. 3c shows the ridges of the fingertip of the finger 136; centrally, however, the capillaries of the finger 136 are collapsed at the area where the fingertip or finger ball is touching the surface of the transparent glass 132, which indicates that a live person is being recorded.
  • the system of FIGS. 3a-3c preserves the folds in the finger 136 or, if the whole hand is placed on the glass 132, the folds in the whole hand.
  • all of this information is recorded before and after placement of the finger 136 or hand on the glass 132, and the changes are subtracted to obtain the verification of the person's identity, with the physical and physiological changes that have occurred being analyzed to recognize and verify the person's identity.
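As an illustration of that before/after subtraction, the sketch below compares mean infrared intensity in the contact region across the two frames, treating a marked intensity drop as evidence of capillary collapse; the threshold is an assumed, tunable value.

```python
import numpy as np

def capillary_liveness(ir_before, ir_after, contact_mask, min_drop=0.1):
    """Liveness check from two infrared frames of a fingertip, taken
    before and after it is pressed on the glass.

    In a live finger the pressed (contact) region blanches as its
    capillaries collapse, so its IR intensity should drop noticeably;
    a photograph or prosthetic shows no such change.

    contact_mask: boolean mask of the pressed region in both frames.
    Returns (is_live, fractional_drop).
    """
    before = ir_before[contact_mask].mean()
    after = ir_after[contact_mask].mean()
    drop = (before - after) / (before + 1e-9)
    return drop >= min_drop, drop
```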
  • a dynamic fingerprinting and/or dynamic hand recognition system will be described with reference to FIGS. 4a-4c.
  • a finger 138 containing numerous ridges and folds is placed on a surface of transparent glass 140 so that the finger 138 is able to be imaged (i.e., photographed or videoed) with an infrared spectrum and/or visible spectrum by using a plurality of cameras 142 surrounding the finger 138 for 360 degree imaging of the finger 138.
  • the cameras 142 may include two cameras 142 above the surface of transparent glass 140 and one camera 142 below the surface of transparent glass 140 (e.g., a field camera or hyperspectral camera below the glass 140).
  • in FIG. 4b, a finger 144 is illustrated prior to touching the surface of transparent glass 140 and being imaged by a multispectral camera or hyperspectral camera 146.
  • FIG. 4b depicts the fingertip or ball of the finger 144 with its circulations, ridges, and minutiae, which are able to be imaged using the camera 146 for highly reliable identification of the person.
  • the infrared spectrum of the camera 146 is able to record the warm circulation of blood through the fingertip or ball of the finger 144. Dynamic changes in the circulation and temperature are recorded by the multispectral or hyperspectral imaging system of FIG. 4b.
  • the dynamic image recognition system described herein may also be used for other applications, such as security system applications, HoloLens applications, other telemedicine applications (e.g., tele-imaging or tele-diagnostic systems), and other patient applications (e.g., two independent systems having the same components for dynamic image recognition are provided so that the patient and doctor recognize each other in a dynamic form, verified against the pre-existing dynamic image recognition data obtained during the first examination, for a two-way communication system).
  • the system can be used in a personal computer to provide a security system that recognizes the person sending an e-mail, and in one embodiment, the computer is equipped with a Global Positioning System (GPS) so that the location of the e-mail sender is revealed, along with images, to the receiver live, or is recorded with the e-mail or during the telecommunication in the telemedicine system.
  • the dynamic imaging system may also be useful at stadiums of competitive sporting events, such as football, soccer, basketball, and hockey stadiums, and at other venues involving large gatherings of people for political or non-political causes, which often require some permission to guarantee the privacy and safety of the people present at the event.
  • the dynamic imaging system may be used in smartphones for remote recognition, and in home security systems, etc.
  • the dynamic image recognition system described herein may replace previous verification/identification systems used in personal life, such as passwords, IDs, PINs, smart cards, etc.
  • the dynamic facial recognition system also has unlimited applications in personal security, identification, passports, driver's licenses, home security systems, automated identity verification at airports, border patrol, law enforcement, video surveillance, investigation, operating systems, online banking, railway systems, dam control, medical records, all medical imaging systems, and video systems used during surgery or surgical photography to prevent mistakes in the operating room (e.g., mistaking one patient for another, or one extremity for the other).
  • the dynamic image recognition system described herein may also be used for comparative image analysis and recognizing the trends in patients during follow-up analyses and outcome prediction.
  • the dynamic image recognition system described herein may also be used with other imaging modalities, such as X-ray, CT-scan, MRI, positron, photoacoustic technology and imaging, ultrasound imaging, video of a surgery, or any other event, etc. Further, the dynamic imaging system may be used for image and data comparison of close or remotely located objects, mass surveillance to document time related changes in the image, and/or recognizing a potential event and its trend.
  • the aforedescribed dynamic image recognition system enables a physician to perform a remote telemedicine consultation with a patient who is located remotely from the physician and/or to obtain an image of a lesion located on the body using a field camera, or to record a 2-D or 3-D image, taken by X-ray, CT-scan, PET-scan, or MRI, with or without infusion, or by an ultrasound, photoacoustic, or thermoacoustic system, etc., of a lesion located inside or outside the body of a patient.
  • the system advantageously obviates the need for the physician to be physically present in the same exact location as the patient.
  • the system enables a physician to perform an examination including conversing with a person or a patient in a different part of the United States or the world without the need for time consuming and costly traveling.
  • the examination may be done by the physician personally, by a physician assistant, or by another authorized person. Therefore, it is critical that the telemedicine system accurately verifies the identity of the patient receiving advice, including prescription(s), etc., so that the advice is given to, or an image taken from, the correct patient.
  • the remote imaging system accurately verifies that the examination is done, or photos are taken, from the correct body portion of the intended patient so that the images and the results can be compared and the changes subtracted to conclude an improvement or worsening of a condition when compared to the previous exam and data; alternatively, part of the image might be static and part dynamic, or a static area may be presented that becomes visible only after a dynamic change has occurred (e.g.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Vascular Medicine (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The present invention relates to a dynamic imaging system. The dynamic imaging system comprises an imaging device configured to capture images of a body portion of a person such that a displacement of the person's body portion can be tracked; and a data processing device coupled to the imaging device and programmed to determine the displacement of the person's body portion using the captured images, and to compare the displacement of the person's body portion to a reference displacement of the person's body portion acquired prior to the displacement, such that dynamic changes in the person's body portion can be assessed to identify the person or to evaluate physical and physiological changes in the body portion. The dynamic imaging system may be a standalone system or provided as part of a telemedicine system.
PCT/US2018/041958 2010-10-13 2018-07-13 Système de reconnaissance d'image dynamique pour la sécurité et la télémédecine WO2019014521A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/666,230 US11309081B2 (en) 2010-10-13 2019-10-28 Telemedicine system with dynamic imaging
US17/723,455 US20220240779A1 (en) 2010-10-13 2022-04-18 Telemedicine System With Dynamic Imaging

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US201762532098P 2017-07-13 2017-07-13
US62/532,098 2017-07-13
US201762549941P 2017-08-24 2017-08-24
US62/549,941 2017-08-24
US201762563582P 2017-09-26 2017-09-26
US62/563,582 2017-09-26
US201862671525P 2018-05-15 2018-05-15
US62/671,525 2018-05-15

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US12/925,518 Continuation-In-Part US8452372B2 (en) 2010-10-13 2010-10-22 System for laser coagulation of the retina from a remote location
US16/666,230 Continuation-In-Part US11309081B2 (en) 2010-10-13 2019-10-28 Telemedicine system with dynamic imaging

Publications (1)

Publication Number Publication Date
WO2019014521A1 true WO2019014521A1 (fr) 2019-01-17

Family

ID=65001485

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/041958 WO2019014521A1 (fr) 2010-10-13 2018-07-13 Système de reconnaissance d'image dynamique pour la sécurité et la télémédecine

Country Status (1)

Country Link
WO (1) WO2019014521A1 (fr)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4757541A (en) * 1985-11-05 1988-07-12 Research Triangle Institute Audio visual speech recognition
US20030072479A1 (en) * 2001-09-17 2003-04-17 Virtualscopics System and method for quantitative assessment of cancers and their change over time
US20100265498A1 (en) * 2004-06-30 2010-10-21 Chemimage Corporation Method and apparatus for microlens array/fiber optic imaging
US20080205703A1 (en) * 2005-05-27 2008-08-28 International Business Machines Corporation Methods and Apparatus for Automatically Tracking Moving Entities Entering and Exiting a Specified Region
US20080294013A1 (en) * 2007-05-22 2008-11-27 Gobeyn Kevin M Inferring wellness from physiological conditions data
US20120114195A1 (en) * 2010-11-04 2012-05-10 Hitachi, Ltd. Biometrics authentication device and method
US20140254939A1 (en) * 2011-11-24 2014-09-11 Ntt Docomo, Inc. Apparatus and method for outputting information on facial expression
US20130222684A1 (en) * 2012-02-27 2013-08-29 Implicitcare, Llc 360° imaging system
US20160101358A1 (en) * 2014-10-10 2016-04-14 Livebarn Inc. System and method for optical player tracking in sports venues

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220319674A1 (en) * 2019-07-31 2022-10-06 Crisalix S.A. Consultation Assistant For Aesthetic Medical Procedures
CN112086193A (zh) * 2020-09-14 2020-12-15 巢自强 一种基于物联网的人脸识别健康预测系统及方法
DE202023101613U1 (de) 2023-03-30 2023-06-07 Saheem Ahmad Auf künstlicher Intelligenz basierendes System mit dynamischer Verschlüsselung für die sichere Verwaltung von Gesundheitsdaten in der Telemedizin

Similar Documents

Publication Publication Date Title
US11309081B2 (en) Telemedicine system with dynamic imaging
US10456209B2 (en) Remote laser treatment system with dynamic imaging
US12080393B2 (en) Medical assistant
US20220240779A1 (en) Telemedicine System With Dynamic Imaging
KR102047237B1 (ko) 영상 데이터를 분석하는 인공 지능을 이용한 질병 진단 방법 및 진단 시스템
KR102577634B1 (ko) 증강 현실 및 가상 현실 안경류를 사용한 이미징 수정, 디스플레이 및 시각화
Krišto et al. An overview of thermal face recognition methods
US8918162B2 (en) System and method for using three dimensional infrared imaging to provide psychological profiles of individuals
Hammoud Passive eye monitoring: Algorithms, applications and experiments
da Costa et al. Dynamic features for iris recognition
KR20170047195A (ko) 안구 신호들의 인식 및 지속적인 생체 인증을 위한 시스템과 방법들
CN102037488B (zh) 个人认证方法及用于其的个人认证装置
Ulrich et al. Analysis of RGB-D camera technologies for supporting different facial usage scenarios
WO2019014521A1 (fr) Système de reconnaissance d'image dynamique pour la sécurité et la télémédecine
Crisan et al. A low cost vein detection system using near infrared radiation
KR20210061211A (ko) 영상촬영부를 이용한 임상 대상자의 안면 및 모션인식시스템
Lowe Ocular Motion Classification for Mobile Device Presentation Attack Detection
US20240363209A1 (en) Medical assistant
Koukiou et al. Fusion of Dissimilar Features from Thermal Imaging for Improving Drunk Person Identification
EP4287136A1 (fr) Système de localisation de veine pour interventions médicales et reconnaissance biométrique à l'aide de dispositifs mobiles
KR102035172B1 (ko) 사용자의 신원 파악이 가능한 혈중 산소포화도 모니터링 방법 및 시스템
ROHITH et al. AN APPROACH FOR INTELLIGENT MACHINE WITH HUMAN FACE COMPUTER VISION
Logeshwari et al. Deep Learning Era for Computer Vision-Based Eye Gaze Tracking: An Intensive Model
Choi ◾ Emerging Trends and New Opportunities in Biometrics: An Overview
Chiesa Revisiting face processing with light field images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18832234

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18832234

Country of ref document: EP

Kind code of ref document: A1