WO2019073291A1 - System and device for preliminary diagnosis of ocular disease - Google Patents

System and device for preliminary diagnosis of ocular disease

Info

Publication number: WO2019073291A1
Application number: PCT/IB2018/000806
Authority: WO (WIPO/PCT)
Prior art keywords: image, self, eye, color, images
Other languages: French (fr)
Inventors: Takeshi Eduardo Asahi KODAMA; Maria Eliana Manquez Hatta
Original assignee: Eyecare Spa
Priority date: (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Eyecare Spa
Publication of WO2019073291A1 (en)

Classifications

    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/0025 Operational features thereof characterised by electronic signal processing, e.g. eye models
    • A61B 3/036 Subjective types, i.e. testing apparatus requiring the active assistance of the patient, for testing astigmatism
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/14 Arrangements specially adapted for eye photography
    • A61B 3/16 Objective types for measuring intraocular pressure, e.g. tonometers
    • A61B 3/18 Arrangement of plural eye-testing or -examining apparatus
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/0082 Measuring using light, adapted for particular medical purposes
    • G06T 7/0012 Image analysis; Biomedical image inspection
    • G06T 2207/10024 Image acquisition modality: Color image
    • G06T 2207/30041 Subject of image: Eye; Retina; Ophthalmic

Definitions

  • the present invention relates to the field of ophthalmology and particularly provides a system and a device configured for the preliminary diagnosis of ocular diseases, based on imaging of the eyes and on diagnosing ocular disorders from the processing of these images and from the reflection values of the retina and/or the pupil that they provide.
  • the red pupillary reflex is well understood by ophthalmologists and pediatric specialists and has been used as a diagnostic tool worldwide since the 1960s. Normally, light reaches the retina and a portion of it is reflected back through the pupil by the choroid, or posterior uvea, which is a layer of small vessels and pigmented cells located near the retina.
  • the reflected light, seen from an instrument coaxial to the optical plane of the eye, normally presents a reddish color due to the color of the blood and the pigments of the cells; this color can vary from a shiny reddish or yellowish hue in people with light pigmentation to a more grayish or darker red in people with dark pigmentation.
  • Bruckner R. Exakte Strabismus diagnostic bei 1/2-3 jahrigen Kindern mit einem einfachen Verfahren, dem "Durchleuchtungstest." Ophthalmologica 1962; 144: 184-98.
  • US2013235346 A1 by Huang describes a smart device application that obtains a set of pictures at two specified working distances and four orientations to run photo-refraction, Bruckner and Hirschberg tests, but it requires masks that function like outlines on the screen, on which the patient's face is fixed to obtain the working distances, and there is no subsequent processing of the images to obtain a higher quality image.
  • the present technology relates to systems and methods for preliminary diagnosis of ocular diseases.
  • One example method comprises obtaining at least one final corrected image of respective pupils of eyes of an individual from an application.
  • the application is configured to process a plurality of digital images of the eyes of the individual to generate the at least one final corrected image.
  • the method also includes color processing of the at least one final corrected image to obtain a color-transformed final corrected image.
  • the color processing can transform a color content in the at least one final corrected image from an RGB color space to a luminance-based color space comprising one luma and two chrominance components.
  • the method also includes representing the color-transformed final corrected image using an HSV color scale, determining a white color content of reflection from each eye of the individual based on the HSV color scale representing the color-transformed final corrected image, and electronically diagnosing at least one ocular disease of the individual based in part on the determined white color content.
  • the luminance-based color space is a YUV color space or a YCbCr color space.
  • determining the white color content includes calculating an HSV value for at least one region of the color-transformed final corrected image and determining an average Saturation (S) value for the at least one region based on the HSV value.
  • the method further comprises: a) upon determining that the white color content of the reflection from each eye includes a red color, identifying that the eyes of the individual are normal; b) upon determining that the white color content of the reflection from at least one eye of the eyes of the individual includes a tint of yellow, identifying that the at least one eye comprises a deformation; and c) upon determining that the white color content of the reflection from at least one eye of the eyes of the individual includes a tint of white, identifying that the at least one eye includes a tumor.
  • the method further includes storing a plurality of classified images.
  • the plurality of classified images can include at least a first image classified as a normal eye, a second image classified as a deformed eye, and a third image classified as an eye with tumor.
  • the method includes generating a machine-learning model based on the plurality of classified images. Generating the machine learning model can include implementing at least one machine learning technique.
  • electronically diagnosing at least one ocular disease of the individual includes comparing the color-transformed final corrected image to at least one image in the plurality of classified images.
  • An example method for preliminary diagnosis of ocular diseases comprises providing an audible cue to attract a subject's attention toward a camera.
  • the method also includes capturing a sequence of images of eyes of the subject with the camera.
  • the camera includes a flash.
  • the method also includes processing the sequence of images to localize respective pupils of the eyes of the subject to generate a digitally refocused image.
  • the digitally refocused image can be transmitted to a processor via a network.
  • the method also includes receiving from the processor preliminary diagnosis of ocular diseases.
  • the preliminary diagnosis can be based on a white color content of reflection of each eye of the subject in the digitally refocused image.
  • the audible cue includes barking of a dog.
  • the camera is included in a smart phone.
  • the method can further include providing an external adapter to adjust a distance between the flash and the camera based on a type of the smart phone.
  • the sequence of images can include red-eye effect.
  • receiving the preliminary diagnosis can include receiving at least one index value from the processor. The at least one index value can indicate the presence and/or absence of the ocular disease.
  • An example system for preliminary diagnosis of ocular diseases comprises a camera, a flash, a memory for storing a computational application, a processor, and a central server.
  • the processor can be coupled to the camera, the flash and the memory, wherein upon execution of the computational application by the processor, the processor a) provides an audible cue to attract a subject's attention toward the camera, b) processes a plurality of images captured with the camera in order to obtain a final corrected image, c) transmits the final corrected image to the central server, and d) receives electronic diagnosis of ocular disease from the central server.
  • the central server can be communicably coupled to the processor to a) obtain the final corrected image from the processor, b) color process the final corrected image to transform the color content in the final corrected image from an RGB color space to a luminance-based color space comprising one luma and two chrominance components, thereby generating a color-transformed final corrected image, c) reach a preliminary conclusion of abnormalities in at least one eye of the subject based on the color-transformed final corrected image, d) represent the final corrected image using an HSV color scale, e) determine white color content of reflection from each eye of the subject based on the HSV color scale, f) electronically diagnose at least one ocular disease of the subject based on the white color content, and g) transmit at least one index to the processor.
  • the at least one index can be based on the electronic diagnosis of the at least one ocular disease.
  • the central server can be further configured to generate a machine-learning model to classify at least one of: the color-transformed final corrected image, or the final corrected image as at least one of a normal eye, a deformed eye, or an eye with tumor.
  • the central server is configured to generate the machine-learning model based on a database of classified images.
  • the database includes a corresponding classification for each of a plurality of sample color-transformed final corrected images and each of a plurality of sample final corrected images. The corresponding classification can be provided by an expert.
  • the present invention relates to a system for the preliminary diagnosis of ocular diseases, said system comprising: a device for capturing images or a camera;
  • a memory for storing data
  • a computational application stored in the memory that executes the process of capturing a plurality of images of the eyes of an individual and obtains a final corrected image through the processing of said plurality of images, performing a post-processing of the final corrected image by calculating the percentage of the colors that compose the pupillary reflex of each eye and comparing it with the values obtained for previous clinical cases;
  • the memory further includes images for the comparison of the final corrected image with clinical cases previously diagnosed with ocular diseases.
  • the system can be implemented using a computational device, a smart phone, or any device with a connection to a camera, either an internal camera or a webcam, and a lighting device of the built-in flash type.
  • the present invention also includes an "ex vivo" method for the preliminary diagnosis of ocular diseases, comprising the steps of: focusing the image of the individual's eyes using a camera and a screen of a computing device; eliminating ambient lighting (in case a light is on in the room, the light is turned off, and if there is natural light, closing the windows or curtains to decrease it);
  • the computational application includes the following steps: i. making a first selection of images from the plurality of images; ii. obtaining an approximation of the area of the individual's face in each image of said first selection; iii. aligning said first selection of images, based on edge detection, by spatial translation of each image of said first selection; iv. determining the area of the two eyes in each image of said first selection; v. obtaining a determined location of the center of the two eyes from each image of said first selection; vi. making a second selection of images, from said first selection, to select a single image of the individual's eyes with greater sharpness; vii.
  • the computational application makes said first selection from the plurality of images obtained by the camera, discriminating based on the luminance of the pixels and selecting between 1 and 60 images for said first selection, preferably the best 10 images.
  • the computational application obtains the approximation of the individual's face by detecting it in a first image captured from said first selection, and then cropping the same area in all the later images as in the first image for further processing.
  • the computational application finds the edges in the first image captured from said first selection, and searches these edges in the later images, to calculate the translation of these images with respect to said first image. Then, it calculates the location of the centers of the pupil of each eye for each image, removing outliers and averaging the position of said centers obtained to get the best determined location of the centers.
  • the computational application makes a second selection with respect to the sharpness of each image of said first selection, obtaining a value which is representative of the sharpness of each image and selecting the one image with greater sharpness, which is corrected in order to obtain the final corrected image with greater focus, using the area of each eye and the determined location of the centers.
  • the computational application performs a post-processing of the final corrected image by calculating the percentage of the colors that compose the pupillary reflex of each eye, selecting the red, white, orange and yellow colors.
  • If the red color covers a range greater than 50% of the pixels that compose the area of the pupil in any of the eyes of the final processed image, the image is considered most likely to show a normal eye. If the red color covers a range greater than 50% of the pixels that compose the area of the pupil, while the yellow and/or orange percentages cover a range higher than 10% of the pixels that compose the area of the pupil in one of the eyes of the final processed image, the eye probably presents a type of refractive defect.
  • If the red color covers a range lower than 50% of the pixels that compose the area of the iris and the pupil, while white corresponds to a percentage higher than 40% in any of the eyes of the final processed image, the diagnosis corresponds to a suspicion of organic and/or structural disease.
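  • As a minimal illustrative sketch only (and not the patent's Appendix A code, which is written in Objective-C), the threshold rules above for a single eye could be expressed as follows in Python; the function name, argument names and returned labels are assumptions introduced for the example:

      def preliminary_label(red_pct, yellow_pct, orange_pct, white_pct):
          """Apply the threshold rules above to one eye's pupil-region color percentages (0-100)."""
          if red_pct > 50 and (yellow_pct > 10 or orange_pct > 10):
              return "probable refractive defect"
          if red_pct > 50:
              return "most likely normal"
          if red_pct < 50 and white_pct > 40:
              return "suspicion of organic and/or structural disease"
          return "inconclusive"

      # Example: a predominantly red reflex with a strong yellow component.
      print(preliminary_label(red_pct=62, yellow_pct=14, orange_pct=3, white_pct=5))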
  • Clinical cases that have been previously diagnosed and are used as a reference for comparison with the images that the system of the invention produces consist of a set of three or more images previously obtained by the computing device, which represent normal cases, clinical cases of refractive defects and other ocular diseases.
  • FIG. 1 is a front and rear view of a smart phone, according to the invention.
  • FIG. 2 illustrates the process of acquiring a sequence of images using an application
  • FIG. 3 is an example of using the device while the individual's eyes are focused, in this case, infant eyes.
  • FIG. 4 is a screenshot of the application, running on a device according to the invention.
  • FIG. 5 is a schematic illustration of one implementation of a system for preliminary diagnosis of ocular diseases.
  • FIG. 6 is an example of a diagnosis type, obtained from the device, according to the invention, of the normal pupillary reflex.
  • FIG. 7 is an example of a diagnosis type, of a pupillary reflex with refractive ocular problems.
  • FIG. 8 is an example of a diagnosis type, of a pupillary reflex with serious ocular problems.
  • FIGS. 9A and 9B compare an image obtained by an electronic device according to the invention (FIG. 9A) with the final image processed by the application on the same computing device (FIG. 9B).
  • FIG. 10 shows a flow diagram illustrating a method for preliminary diagnosis of ocular diseases.
  • FIGS. 11- 13 illustrate an example workflow showing transformation of images to YCbCr color space.
  • the present invention is a practical and reliable solution for rapid diagnosis of ocular problems, which allows a preliminary examination only with the use of smart phones or tablet type devices, currently used by millions of people worldwide.
  • the application can be run by parents, paramedics, pediatricians and ophthalmologists without the need for a more complex instrument or experience in the use of these, and effectively allows conducting a test to detect ocular problems.
  • This application prototype was tested in 100 infants, among whom 3 children with problems were detected; these children were referred to specialists, who confirmed the ocular problems.
  • the system allows a preliminary medical test to be conducted regarding the pupillary reflex (pupillary red color test or Bruckner test) and the corneal reflex (Hirschberg test).
  • the present invention relates essentially to a system and method employing a computational application that can be executed on mobile devices and related devices, which allows obtaining a preliminary examination of ocular conditions, using the pupillary and corneal reflexes obtained from a photograph of the eyes.
  • digital cameras and mobile devices such as current smartphones or tablets are programmed with a timing setup between the camera and the flash in such a way as to avoid the reflection of the red pupil in the pictures that are obtained with them.
  • these digital cameras are programmed to avoid red-eye effect in images.
  • this reflex and/or the red-eye effect has important information about ocular diseases, and can be used for their detection as a preliminary screening.
  • the purpose of the present invention is to provide an application that is easy for the general population to use, without requiring complex ophthalmic instruments, and which recreates the effect of old cameras that capture the reddish reflex of the eyes (e.g., the red-eye effect), while also processing the obtained image so that this reflection is sharper and more focused.
  • the systems and methods disclosed herein also provide techniques to electronically diagnose ocular diseases from the processed images.
  • this computational application has been particularly useful for avoiding the problems associated with performing ocular examinations in infants, since it is not necessary to put them to sleep, keep them focused, or subject them to long ocular examinations, nor is it necessary to dilate their pupils with pharmacological drops, with the disadvantages that these usually produce.
  • Infants are the group with the greatest need for continuous ocular controls, because at this age they can develop many of the ocular problems that could affect their lives as adults and that often fail to be detected early.
  • the computational application (e.g., mobile application, web application, native application, hybrid applications, and/or the like) of the present invention can be installed in any electronic device.
  • A non-limiting example of a smartphone according to the invention is an iPhone 4S®, marketed by Apple Inc. and shown in FIG. 1.
  • This smartphone has a camera for capturing images (lens 1), a light-generating device or flash 2, a screen 3 that displays images and serves to focus on the individual, a memory that stores an application (e.g., mobile application, web application, native application, hybrid application, and/or the like) and images, and a processor that runs the application to obtain the final images.
  • the application can automatically detect ambient light. If the detected ambient light exceeds a certain threshold (e.g., a pre-defined threshold, a threshold that can be dynamically updated on the application, a user defined threshold, and/or the like), the application can warn the user that ambient light has exceeded the threshold (e.g., at 202).
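  • A minimal sketch of such an ambient-light check, assuming the mean luma (Y') of a camera preview frame is used and that the numeric threshold is an arbitrary placeholder (the patent does not specify one); Python and OpenCV are used here purely for illustration:

      import cv2
      import numpy as np

      AMBIENT_LUMA_THRESHOLD = 60.0  # hypothetical threshold on mean Y', 0-255 scale

      def ambient_light_too_high(preview_bgr, threshold=AMBIENT_LUMA_THRESHOLD):
          """Return True if a preview frame suggests too much ambient light."""
          yuv = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2YUV)
          return float(np.mean(yuv[:, :, 0])) > threshold

      # Example with a synthetic mid-gray preview frame (mean luma is about 120).
      frame = np.full((480, 640, 3), 120, dtype=np.uint8)
      print(ambient_light_too_high(frame))  # True, so the user would be warned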
  • an audible alert can be provided.
  • the audible alert can be timed appropriately with a flash (e.g., at 206) and a camera (e.g., at 208) so as to gain the attention of the individual to stare at the camera for acquiring images (e.g., at 210).
  • the audible alert may be in the form of a barking dog; this example is particularly useful because in many cases it is instinctive for a young person to be attracted to the sound of a barking dog and, accordingly, to turn their gaze and attention to the direction the barking is coming from (e.g., the speaker of a smart phone being used for acquiring images).
  • the barking dog (e.g., at 204) may be timed so that the flash (e.g., at 206) and camera (e.g., at 208) begin the process of image acquisition (e.g., at 210) in tandem with the barking dog audible alert, or shortly thereafter at an appropriate time.
  • the barking dog alert may continue during image acquisition in some cases, or just at the beginning of this process to attract the attention of the individual. It should be appreciated that in some implementations steps 202-210 can occur simultaneously while in other implementations steps 202-210 can occur individually and/or in tandem with one or more steps in any order. It should be appreciated that other forms of audible alerts to attract the attention and gaze of the individual to the camera may be employed, which may include human voice cues, other animal noises, musical tones or portions of well-known songs (e.g., nursery rhymes), etc.
  • the sequence of images 212 that are acquired can be transmitted to a central server (e.g., a web server, a remote server, a remote processor, etc.).
  • FIG. 3 illustrates how the camera of the device in question is activated; the output of the camera is shown on screen 3.
  • the focus point is marked with respect to the individual's eyes 4 by touching the screen 5, in order subsequently to lower the amount of ambient lighting.
  • the pupil dilates naturally in low light, so at this moment the capture of a plurality of images is activated using the application.
  • FIG. 4 which is a graphical representation of a screenshot of the application, shows the button for the initiation of taking a plurality of images 6, a setting button 7 and a button to display the images obtained 8.
  • FIG. 5 is a schematic illustration of one implementation of the system 500 for preliminary diagnosis of ocular diseases.
  • the system 500 can include an image capturing device (e.g., a mobile device with a camera).
  • An application (e.g., a mobile application) runs on the image capturing device; FIG. 5 shows the application and the image capturing device collectively, for example, as application 502.
  • the application 502 can transmit one or more images that are captured to a central server 504 via a network 506.
  • the application 502 may implement the acquisition process 200 illustrated in FIG. 2 to acquire the images.
  • the images acquired by the application 502 can include the red-eye effect.
  • the application 502 can process the images to center the images and sharpen them. The sharpened images can be transmitted from the application 502 to the central server 504 via the network 506.
  • the central server 504 can be a remote server, a web server, a processor and/or the like.
  • the central server 504 processes the images to transform the images to a luminance-based color space, and determines the white color content in the images.
  • the central server 504 can include a machine-learning module and/or an artificial intelligence module to electronically diagnose ocular diseases based on the processed images.
  • the preliminary diagnosis can be transmitted back to the application 502 via the network 506.
  • network 506 can be based on any suitable technology and can operate according to any suitable protocol.
  • the network 506 can be a local area network, a wide area network such as an intelligent network, or the Internet.
  • the network 506 may be wired. In other implementations, the network 506 may be wireless.
  • the application turns on the flash 2, but the pictures only begin to be processed when the application estimates that what is being captured is already under the influence of light from the flash.
  • the application estimates the amount of light contained in each image by transforming it to the Y'UV color space, which represents a luminance component Y' and two chrominance components U and V.
  • the application calculates the average of the Y' component, which represents the luminance of the pixels. By comparing the luminance before the flash starts with the luminance during the subsequent frames, the application determines from which frame to start capturing, since from that frame onward the flash 2 is known to be affecting the captured image.
  • Frames containing no light from the flash 2 are discarded.
  • the application does this by discarding an arbitrary number of frames captured after the flash 2 starts to work, and then captures ten images to be used in the process.
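  • One way to sketch this frame-selection step is shown below; the luminance-jump ratio and the number of skipped transition frames are assumptions, the frame count of ten comes from the description above, and the code is an illustration rather than the patent's own implementation:

      import cv2
      import numpy as np

      def mean_luma(frame_bgr):
          """Average Y' (luminance) of a frame after conversion to the Y'UV color space."""
          return float(np.mean(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV)[:, :, 0]))

      def select_flash_frames(frames, skip=2, keep=10, jump_ratio=1.5):
          """Discard pre-flash frames by detecting the jump in average luminance,
          skip a few transition frames, then keep about ten flash-lit frames."""
          baseline = mean_luma(frames[0])
          start = next((i for i, f in enumerate(frames)
                        if mean_luma(f) > jump_ratio * baseline), None)
          return [] if start is None else frames[start + skip:start + skip + keep]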
  • the capture of the first image of the process is different from the others, since in this frame the approximate area of the individual's face is detected using an appropriate "Haar cascade", a process that captures the best section of the individual's face; this section is cropped, obtaining the image to be used, which minimizes the amount of information to be processed by the application.
  • in the subsequent frames, the same detected area is cropped, obtaining images of the same size as the first.
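  • A sketch of this face-detection and cropping step using the frontal-face Haar cascade bundled with OpenCV (an assumption: the patent only says that an appropriate Haar cascade is used):

      import cv2

      face_cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      def crop_face_region(frames):
          """Detect the face once, in the first frame, then crop every frame to that area."""
          gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
          faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
          if len(faces) == 0:
              return frames  # no face found; leave the frames untouched
          x, y, w, h = max(faces, key=lambda r: r[2] * r[3])  # keep the largest detection
          return [f[y:y + h, x:x + w] for f in frames]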
  • the first frames after the flash 2 takes effect are where the greatest effect on the retinal reflex occurs, because at that time the pupil is still dilated from the low pre-flash light. For this reason, the number of frames used does not exceed ten.
  • a camera stabilization process is performed, which helps to reduce camera shake or movement of the person in the sequence.
  • first, the position of the prominent edges of the image ("good features to track") is detected. These same points are then searched for in the next image frame by calculating the "optical flow".
  • the translation undergone by each following image with respect to its predecessor is then calculated. For this, the average of the motion vectors of all the prominent edges is computed, and the image is shifted by that amount. This allows the eyes to remain in the same position in all the captured pictures, so it is possible, as will be explained later, to perform the detection of the important features using not one, but several pictures.
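  • The stabilization step described above could look roughly like the following sketch, which tracks prominent corners with optical flow and shifts each frame by the negated, accumulated average motion vector; the parameter values are illustrative assumptions:

      import cv2
      import numpy as np

      def align_by_optical_flow(frames):
          """Shift each frame so the tracked features (and hence the eyes) stay put."""
          aligned = [frames[0]]
          offset = np.zeros(2, dtype=np.float32)  # cumulative shift back to frame 0
          prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
          for frame in frames[1:]:
              gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
              pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                            qualityLevel=0.01, minDistance=10)
              if pts is not None:
                  nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
                  good = status.ravel() == 1
                  if good.any():
                      # average motion vector of the tracked edges, applied in reverse
                      offset -= np.mean((nxt - pts).reshape(-1, 2)[good], axis=0)
              h, w = frame.shape[:2]
              shift = np.float32([[1, 0, offset[0]], [0, 1, offset[1]]])
              aligned.append(cv2.warpAffine(frame, shift, (w, h)))
              prev_gray = gray
          return aligned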
  • a defocusing of each of the images is performed using a Gaussian filter.
  • the fast Fourier transform (FFT) is then calculated, and the average of the highest 90% of the values is computed, obtaining a value that estimates how sharp the image is.
  • the chosen image is also passed through another process called "unsharp masking" to focus it digitally, which consists of blurring the image using a Gaussian blur and subtracting the result from the original image on a weighted basis to increase the focus.
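  • A sketch of the sharpness scoring and digital refocusing described above; the Gaussian kernel sizes and the unsharp-masking weights are assumptions, since the patent does not publish its exact parameters:

      import cv2
      import numpy as np

      def sharpness_score(image_bgr):
          """Blur the image, take the magnitude of its 2-D FFT and average the top 90%
          of the values; a higher score means a sharper image."""
          gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
          blurred = cv2.GaussianBlur(gray, (5, 5), 0)
          magnitude = np.sort(np.abs(np.fft.fft2(blurred)).ravel())
          return float(np.mean(magnitude[int(0.1 * magnitude.size):]))

      def unsharp_mask(image_bgr, sigma=3.0, amount=1.5):
          """Digitally refocus by subtracting a Gaussian-blurred copy on a weighted basis."""
          blurred = cv2.GaussianBlur(image_bgr, (0, 0), sigma)
          return cv2.addWeighted(image_bgr, 1.0 + amount, blurred, -amount, 0)

      def pick_and_refocus(frames):
          """Second selection: keep the sharpest frame and apply unsharp masking to it."""
          return unsharp_mask(max(frames, key=sharpness_score))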
  • for each eye, the corresponding portion of the best frame obtained in the previous step is cropped, and another image, corresponding to the pupil and iris of the eye, is cropped around the best center, also obtained in the previous step.
  • a good reflection on the retina can be obtained, producing a color which allows diagnostic analysis.
  • This color is usually related to the internal condition of the eye. In a normal patient, it will be a reddish tonality; abnormal cases may show a white color, which may indicate the existence of some abnormal body inside the eye, or a yellow color, which may indicate an eye deformation. A post-processing step therefore takes place in which the color that appeared in the pupillary reflex is detected. To do this, the amount of red, white and yellow in the image of the pupil is calculated: the image of the pupil of each eye is transformed to the HSV color space and passed through a mask that turns white all the colors within a specific range.
  • the percentage of white pixels is then calculated, giving the percentage of that color in the image. If the predominant color is red, it is likely that the eye is normal. FIG. 6 is an example of this case, where the reflection of the red pupil 10, 11, 12 and 13 seen in both eyes is normal.
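  • A sketch of this color-percentage computation on a cropped pupil image; the HSV ranges below are illustrative assumptions (the patent does not publish its exact bounds), expressed on OpenCV's scale of H in 0-179 and S, V in 0-255:

      import cv2
      import numpy as np

      HSV_RANGES = {
          "red":    [((0, 70, 50), (10, 255, 255)), ((170, 70, 50), (179, 255, 255))],
          "yellow": [((20, 70, 50), (35, 255, 255))],
          "white":  [((0, 0, 180), (179, 60, 255))],
      }

      def color_percentages(pupil_bgr):
          """Percentage of pupil pixels falling inside each color range, via HSV masks."""
          hsv = cv2.cvtColor(pupil_bgr, cv2.COLOR_BGR2HSV)
          total = hsv.shape[0] * hsv.shape[1]
          result = {}
          for name, bounds in HSV_RANGES.items():
              mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
              for lo, hi in bounds:
                  mask |= cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
              result[name] = 100.0 * cv2.countNonZero(mask) / total
          return result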
  • FIG. 7 is an example of this case, where the presence of a yellow reflection in the right eye 14 of the patient may be a sign of refractive errors or strabismus. It is recommended for this patient to request a visit to the ophthalmologist.
  • FIG. 8 is an example of this case, where the reflection of the red reflex seen in the right eye 15 is normal.
  • the white reflection in the left eye 16 may be a sign of a dangerous condition within the patient's eye. It is recommended for the patient to visit an ophthalmologist as soon as possible, urgently.
  • FIGS. 9A and 9B show a comparison between a normally captured image with an electronic device, according to the invention (FIG. 9A) and the final image processed by the computer application (FIG. 9B).
  • FIG. 10 shows a flow diagram illustrating a method 1000 for electronically diagnosing ocular diseases.
  • a central server (e.g., web server 504 in FIG. 5) receives digitally refocused sharp images of eyes of a subject.
  • a sequence of images of the eyes can be captured using a camera with a flash.
  • an audible cue (e.g., a barking dog) can be provided to attract the subject's attention toward the camera.
  • the acquisition process to capture the sequence of images is implemented in such a manner as to not lose the red-eye effect.
  • the application may either be installed on a device with the camera or may be communicably coupled to the camera.
  • the application can process the sequence of images in order to localize respective pupils of the eyes of the subject.
  • the application transforms each digital image to a Y'UV color space to determine an average pixel luminance, and any digital image that does not have sufficient luminance is discarded from the sequence; preferably, approximately ten digital images are maintained in the sequence.
  • a Haar cascade is applied to the first remaining image to identify the subject's face, and this first image is accordingly cropped to provide a cropped image of the subject's face.
  • the remaining images in the sequence are identically cropped to leave the same pixels as the first image.
  • An optical flow is then calculated for the cropped images to determine translational shifts from image to image based on averaged motion vectors, and respective images are shifted relative to each other based on the motion vectors so that the subject's eyes are in a same location in each image.
  • the locations of the subject's eyes are identified in each image again using a Haar cascade, and the center of the pupil of each eye is identified using image gradients.
  • Each image is then defocused using a Gaussian filter, and a Fast Fourier Transform (FFT) of the defocused image is calculated to obtain a value representing image sharpness.
  • the sharpest image is digitally refocused, and then cropped again to provide respective sub-images of the pupil and iris of each eye.
  • the sharpest digitally refocused image is color processed by the central server to transform the color content in the image from an RGB color space to a luminance- based color space comprising one luma and two chrominance components.
  • the sharpest digitally refocused image can be transformed from an RGB color space to a YUV color space, or a YCbCr color space. This transformation decouples the effect of the brightness of the environment on the images thereby minimizing the effect of environmental conditions in the images.
  • This transformed image can be analyzed to make preliminary conclusions of certain abnormalities in the eye.
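  • As a small illustration of this transformation (OpenCV names the target space YCrCb; channel order aside, it carries the same luma-plus-chroma information as YCbCr), assuming the server-side processing also uses an OpenCV-style library:

      import cv2

      def to_luminance_space(image_bgr, space="YCrCb"):
          """Convert a BGR image to a luma-plus-chroma representation so the chrominance
          channels can be analyzed largely independently of scene brightness."""
          if space == "YCrCb":
              return cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)  # Y, Cr, Cb
          return cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YUV)         # Y', U, V

      # Usage sketch: keep only the chrominance planes of a corrected image for analysis.
      # y, cr, cb = cv2.split(to_luminance_space(final_corrected_image))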
  • FIGS. 11-13 illustrate an example transformation of images to a YCbCr color space.
  • FIG. 11 represents a retinoblastoma pattern.
  • FIGS. 12 and 13 represent visio-refraction patterns or refractive errors indicating an abnormality in the eye (e.g., astigmatism, etc.).
  • an expert may analyze an initial set of transformed images in the YUV and/or YCbCr color space and may classify the images as representing a normal eye, a deformed eye, or an eye with a tumor. This initial set of classified images forms a knowledge base for a machine-learning module included in the central server. The initial set of classified images can be saved in a database and/or memory that is coupled to the central server.
  • the sharpest refocused image in the RGB color space and/or the image transformed to a luminance-based color space can be represented using an HSV color scale.
  • the white color content of reflection from each eye can be determined based on the HSV color scale.
  • HSV value for the pupil portion of the eye in the RGB color space and/or luminance-based color space can be calculated.
  • An average Saturation (S) value for at least the pupil portion of the eye can then be determined.
  • the average Saturation (S) value represents how much content of pure color (e.g., 100% color) and how much content of grey (e.g., 0% color) is present in that portion of the image.
  • a special case is white, where the Saturation (S) is close to 0%.
  • the Value (or luminance) should be higher in order to obtain white; this depends on the lighting conditions. Bright white cannot always be achieved, but in real conditions lighter shades of gray may be obtained.
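  • A sketch of the average-Saturation check described above; the percentage scale and the suggested whitish-reflection cut-offs in the comment are assumptions for illustration:

      import cv2
      import numpy as np

      def mean_saturation_and_value(pupil_bgr):
          """Mean Saturation (S) and Value (V) of the pupil region, on a 0-100% scale."""
          hsv = cv2.cvtColor(pupil_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
          mean_s = 100.0 * float(np.mean(hsv[:, :, 1])) / 255.0
          mean_v = 100.0 * float(np.mean(hsv[:, :, 2])) / 255.0
          return mean_s, mean_v

      # A reflection is treated as whitish (or light gray) when mean S is low and mean V
      # is high, e.g. mean_s < 20 and mean_v > 60 (hypothetical cut-offs).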
  • an expert may analyze the white-color content in an initial set of images and may classify this initial set of images as representing a normal eye, a deformed eye, or an eye with a tumor. These classified images can also be part of the knowledge base for the machine-learning module included in the central server.
  • the central server can include a machine-learning and/or artificial intelligence module to classify the images based on the white color content.
  • a machine learning model is generated based on the knowledge base by applying one or more machine-learning techniques.
  • an initial conclusion of ocular diseases can be determined by comparing the images in the luminance-based color space to the images in the knowledge base (luminance-based color space) that are classified by an expert.
  • ocular diseases can be electronically diagnosed by comparing HSV values of the images to HSV values of images in the knowledge base that are classified by experts. By performing this comparison, the white color content of reflection from each eye can be determined.
  • If the white color content of the reflection from each eye includes a red tint, the machine-learning module classifies the eyes of the subject as normal. If the white color content includes a tint of yellow, the machine-learning module classifies the eyes of the subject as comprising a deformation. If the white color content includes a tint of white, the machine-learning module classifies the eyes of the subject as including a tumor.
  • the machine-learning module may implement one or more classification algorithms (e.g., algorithms based on distance, clustering, SVM, etc.) to determine an appropriate classification.
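  • One possible sketch of such a classifier, using a simple distance-based (nearest-neighbor) model over hand-built color features; the feature choice, the tiny stand-in knowledge base and all numeric values are assumptions made only to keep the example runnable, not data from the patent:

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      LABELS = {0: "normal eye", 1: "deformed eye", 2: "eye with tumor"}

      # Stand-in knowledge base: feature vectors (red %, yellow %, white %, mean S %)
      # of expert-classified images; the numbers are invented for the sketch.
      X = np.array([[70.0,  5.0,  3.0, 60.0],   # classified as normal
                    [55.0, 18.0,  5.0, 55.0],   # classified as deformation (yellowish reflex)
                    [20.0,  5.0, 50.0, 15.0]])  # classified as tumor suspicion (whitish reflex)
      y = np.array([0, 1, 2])

      model = KNeighborsClassifier(n_neighbors=1).fit(X, y)  # distance-based classifier

      def diagnose(red_pct, yellow_pct, white_pct, mean_s_pct):
          """Classify one eye against the knowledge base and return a label plus an index:
          1 for a likely normal eye, 0 when referral to a specialist is suggested."""
          label = int(model.predict([[red_pct, yellow_pct, white_pct, mean_s_pct]])[0])
          return LABELS[label], 1.0 if label == 0 else 0.0

      print(diagnose(22.0, 4.0, 47.0, 18.0))  # -> ('eye with tumor', 0.0)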
  • an index value is generated based on the classification to indicate the presence and/or absence of ocular diseases in the eyes. For example, an index value of 1 can indicate normal eyes and an index value closer to 0 can indicate that the subject has at least one abnormal eye and will need to see a specialist. This index value is transmitted from the central server back to the mobile device and/or the application. Thus, ocular diseases can be diagnosed in a reliable, automated, and user-friendly manner.
  • an external adapter (e.g., Prisma) may be employed in connection with the flash and camera of a smart phone, to allow different versions of smart phones (e.g., iPhone 4S, iPhone 5-series, iPhone 6, iPhone 7, etc.) to be used to implement the various concepts disclosed herein.
  • an external adaptor may be used to adjust for different distances between the flash and the camera on different smart phones, so as to have similar results on the different smart phones in implementing the concepts disclosed herein.
  • the adapter may comprise a macro and zoom lens.
  • An example implementation of methods for preliminary diagnosis of ocular disease is included in Appendix A.
  • the underlying method implemented as code represented in Appendix A is robust and can be implemented in multiple programming languages.
  • inventive embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed.
  • inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein.
  • embodiments can be implemented in any of numerous ways. For example, embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers.
  • a computer may be embodied in any of a number of forms, such as a rack-mounted computer, a desktop computer, a laptop computer, or a tablet computer. Additionally, a computer may be embedded in a device not generally regarded as a computer but with suitable processing capabilities, including a Personal Digital Assistant (PDA), a smart phone or any other suitable portable or fixed electronic device.
  • a computer may have one or more input and output devices. These devices can be used, among other things, to present a user interface. Examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. Examples of input devices that can be used for a user interface include keyboards, and pointing devices, such as mice, touch pads, and digitizing tablets. As another example, a computer may receive input information through speech recognition or in other audible format.
  • Such computers may be interconnected by one or more networks in any suitable form, including a local area network or a wide area network, such as an enterprise network, an intelligent network (IN) or the Internet.
  • networks may be based on any suitable technology and may operate according to any suitable protocol and may include wireless networks, wired networks or fiber optic networks.
  • the various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine.
  • inventive concepts may be embodied as one or more methods, of which an example has been provided.
  • the acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
  • a reference to "A and/or B", when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
  • the phrase "at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements.
  • This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified.
  • At least one of A and B can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
  • UIDevice *currentDevice = [UIDevice currentDevice];
  • NSString *versionBuild = [NSString stringWithFormat:@"v%@", version];
  • versionBuild = [NSString stringWithFormat:@"%@(%@)", versionBuild, build];
  • NSMutableDictionary *returnValue = [NSMutableDictionary new];
  • NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:url] cachePolicy:0 timeoutInterval:8];
  • sendSynchronousRequest:request returningResponse:&resp error:
  • NSDictionary *responseData = [NSJSONSerialization JSONObjectWithData:response options:NSJSONReadingMutableContainers error:&err];
  • NSMutableDictionary *parameters = [AppManager obtainAppData]; [parameters addEntriesFromDictionary:userParameters];
  • globalFirstTimeUse = [(NSNumber *)[userDefaults objectForKey:@"globalFirstTimeUse"] boolValue];
  • globalFirstTimeUseExamples = [(NSNumber *)[userDefaults objectForKey:@"globalFirstTimeUseExamples"] boolValue];
  • optionUseAutoCrop = [(NSNumber *)[userDefaults objectForKey:@"optionUseAutoCrop"] boolValue];
  • _nameString = [NSString stringWithFormat:@"Name: %@", [_docDic objectForKey:@"name"]];
  • _commentString = [NSString stringWithFormat:@"%@", [_docDic objectForKey:@"comment"]];
  • _countryString = [NSString stringWithFormat:@"Country: %@", [_docDic objectForKey:@"country"]];
  • NSArray *phones = [_docDic objectForKey:@"phone"];
  • NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
  • dequeueReusableCellWithIdentifier:CellIdentifier
  • UILabel *label = (UILabel *)[cell viewWithTag:10];
  • canEditRowAtIndexPath:(NSIndexPath *)indexPath
  • canMoveRowAtIndexPath:(NSIndexPath *)indexPath
  • _countryName = [_countryData objectAtIndex:indexPath.row];
  • dirPaths = NSSearchPathForDirectoriesInDomains
  • NSFileManager *filemgr = [NSFileManager defaultManager];
  • const char *sql_stmt = "create table md_eye (id integer primary key AUTOINCREMENT, patient_name text, image_path text, create_time double)";

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Pathology (AREA)
  • Signal Processing (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

A sequence of images of a subject's face is acquired in rapid succession by a camera. An audible cue (e.g., a barking dog) is provided at the outset of sequenced imaging to attract the subject's attention to the camera. Each image is processed to localize the pupils of the subject's eyes to obtain a corrected image of the pupil. Each corrected image is initially represented in an RGB color space. The corrected images are then converted from the RGB color space to a luminance-based color space comprising one luma and two chrominance components. The corrected images can also be represented using an HSV color scale. The white color content of the reflection from each eye is calculated based on the luminance-based color space and/or the HSV color scale. An electronic preliminary diagnosis of ocular disease of the subject is then determined based on the white color content.

Description

SYSTEM AND DEVICE FOR PRELIMINARY DIAGNOSIS OF OCULAR DISEASE
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to and the benefit of U.S. Provisional Application No. 62/570,979 entitled "SYSTEM AND DEVICE FOR PRELIMINARY DIAGNOSIS OF OCULAR DISEASES," filed October 11, 2017, the disclosure of which is incorporated herein by reference in its entirety.
[0002] This application is also related to U.S. Patent Application No. 14/597,213, now U.S. Publication No. 2015-0257639, entitled "SYSTEM AND DEVICE FOR PRELIMINARY DIAGNOSIS OF OCULAR DISEASES," filed January 14, 2015, the disclosure of which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
[0003] The present invention relates to the field of ophthalmology and particularly provides a system and a device configured for the preliminary diagnosis of ocular diseases, based on imaging of the eyes and on diagnosing ocular disorders from the processing of these images and from the reflection values of the retina and/or the pupil that they provide.
BACKGROUND
[0004] An estimate based on the National Health Survey (ENS 2007) indicates that at least 1.5% to 2.6% of the Chilean population has some visual impairment; of this percentage, it is estimated that at least one quarter have chronic defects classified as blindness. The world situation is not very different: at least 12 million children under the age of 10, the age group targeted for preventive control, suffer from visual impairment due to refractive error (myopia, strabismus or astigmatism). In addition, there are more severe cases such as ocular cancer, which affects 1 in 12,000 live births and is usually seen in children up to 5 years old. In most cases, all of these conditions and others can be corrected without major complications through preventive diagnosis and effective treatment in infants from birth to about 5 years old, preventing these disorders from getting worse with time and the treatment from becoming too expensive, ineffective or simply too late to implement.
[0005] Most of these problems could be detected at an early age, but doing so requires continuous medical supervision and examinations which are carried out with high-cost instruments that also require the presence of specialists to use them.
[0006] For the group of infants (0-5 years), which is the primary control and diagnosis group, there are two key problems in performing these tests: it is difficult to make infants focus their gaze intently on any device that performs the test, and the ophthalmologist or pediatrician has only a fraction of a second to capture the image before the pupil shrinks in response to the bright flash. As a result, pediatricians are often unable to detect ocular problems early and therefore cannot effectively take preventive measures before the problem gets worse.
[0007] The red pupillary reflex is well understood by ophthalmologists and pediatric specialists and has been used as a diagnostic tool worldwide since the 1960s. Normally, light reaches the retina and a portion of it is reflected back through the pupil by the choroid, or posterior uvea, which is a layer of small vessels and pigmented cells located near the retina. The reflected light, seen from an instrument coaxial to the optical plane of the eye, normally presents a reddish color due to the color of the blood and the pigments of the cells; this color can vary from a shiny reddish or yellowish hue in people with light pigmentation to a more grayish or darker red in people with dark pigmentation. In 1962, Bruckner (Bruckner R. Exakte Strabismus diagnostic bei 1/2-3 jahrigen Kindern mit einem einfachen Verfahren, dem "Durchleuchtungstest." Ophthalmologica 1962; 144: 184-98) described abnormalities in the pupillary reflex, as well as in its quality, intensity, symmetry or the presence of abnormal figures; therefore, the pupillary red color test is also known as the Bruckner test. Another similar test is the Hirschberg test, which uses the corneal reflex to detect misalignment of the eyes and makes it possible to diagnose some degree of strabismus (Wheeler, M. "Objective Strabismometry in Young Children." Trans Am Ophthalmol Soc 1942; 40: 547-564). In summary, these tests are used to detect misalignment of the eyes (strabismus), different sizes of the eyes (anisometropia), abnormal growths in the eye (tumors), opacity (cataract) and any abnormalities in light refraction (myopia, hyperopia, astigmatism).
[0008] The evaluation of the pupillary and corneal reflexes is a medical procedure that can be performed with an ophthalmoscope, an instrument invented by Francis A. Welch and William Noah Allyn in 1915 and used since the last century. Today, their company, Welch Allyn, has products that follow this line, such as PanOptic™. There are also photographic-screening-type portable devices for the evaluation of the pupillary red color, such as Plusoptix (Patent application No. W09966829) or Spot™ Photoscreener (Patent Application No. EP2676441 A2), but their cost ranges between USD 100 and 500, they weigh about 1 kg, and they also require experience in interpreting the observed images.
[0009] With regard to the state of the art concerning computer applications for the prompt diagnosis of ocular diseases, application No. FR2876570 A1 by Mawas presents a process to obtain a picture with a camera and send it, in negative, to a specialist ophthalmologist via email in order to detect strabismus (Hirschberg test); however, this lacks processing of the image, an immediate preliminary diagnosis and application to mobile devices. Patent application No. US2013235346 A1 by Huang describes a smart device application that obtains a set of pictures at two specified working distances and four orientations to run photo-refraction, Bruckner and Hirschberg tests, but it requires masks that function like outlines on the screen, on which the patient's face is fixed to obtain the working distances, and there is no subsequent processing of the images to obtain a higher quality image.
SUMMARY OF THE INVENTION
[0010] The present technology relates to systems and methods for preliminary diagnosis of ocular diseases. One example method comprises obtaining at least one final corrected image of respective pupils of eyes of an individual from an application. The application is configured to process a plurality of digital images of the eyes of the individual to generate the at least one final corrected image. The method also includes color processing of the at least one final corrected image to obtain a color-transformed final corrected image. The color processing can transform a color content in the at least one final corrected image from an RGB color space to a luminance-based color space comprising one luma and two chrominance components. The method also includes representing the color-transformed final corrected image using an HSV color scale, determining a white color content of reflection from each eye of the individual based on the HSV color scale representing the color-transformed final corrected image, and electronically diagnosing at least one ocular disease of the individual based in part on the determined white color content.
[0011] In one aspect, the luminance-based color space is a YUV color space or a YCbCr color space. In one aspect, determining the white color content includes calculating an HSV value for at least one region of the color-transformed final corrected image and determining an average Saturation (S) value for the at least one region based on the HSV value. The method further comprises: a) upon determining that the white color content of the reflection from each eye includes a red color, identifying that the eyes of the individual are normal; b) upon determining that the white color content of the reflection from at least one eye of the eyes of the individual includes a tint of yellow, identifying that the at least one eye comprises a deformation; and c) upon determining that the white color content of the reflection from at least one eye of the eyes of the individual includes a tint of white, identifying that the at least one eye includes a tumor.
[0012] In one aspect, the method further includes storing a plurality of classified images. The plurality of classified images can include at least a first image classified as a normal eye, a second image classified as a deformed eye, and a third image classified as an eye with a tumor. The method includes generating a machine-learning model based on the plurality of classified images. Generating the machine-learning model can include implementing at least one machine-learning technique. In one aspect, electronically diagnosing at least one ocular disease of the individual includes comparing the color-transformed final corrected image to at least one image in the plurality of classified images.
[0013] An example method for preliminary diagnosis of ocular diseases comprises providing an audible cue to attract a subject's attention toward a camera. The method also includes capturing a sequence of images of the eyes of the subject with the camera. The camera includes a flash. The method also includes processing the sequence of images to localize respective pupils of the eyes of the subject and to generate a digitally refocused image. The digitally refocused image can be transmitted to a processor via a network. The method also includes receiving from the processor a preliminary diagnosis of ocular diseases. The preliminary diagnosis can be based on a white color content of the reflection of each eye of the subject in the digitally refocused image.
[0014] In one aspect, the audible cue includes the barking of a dog. In one aspect, the camera is included in a smart phone. The method can further include providing an external adapter to adjust a distance between the flash and the camera based on the type of the smart phone. In one aspect, the sequence of images can include the red-eye effect. In one aspect, receiving the preliminary diagnosis can include receiving at least one index value from the processor. The at least one index value can indicate the presence and/or absence of the ocular disease.
[0015] An example system for preliminary diagnosis of ocular diseases comprises a camera, a flash, a memory for storing a computational application, a processor, and a central server. The processor can be coupled to the camera, the flash and the memory, wherein upon execution of the computational application by the processor, the processor a) provides an audible cue to attract a subject's attention toward the camera, b) processes a plurality of images captured with the camera in order to obtain a final corrected image, c) transmits the final corrected image to the central server, and d) receives an electronic diagnosis of ocular disease from the central server. The central server can be communicably coupled to the processor to a) obtain the final corrected image from the processor, b) color process the final corrected image to transform the color content in the final corrected image from an RGB color space to a luminance-based color space comprising one luma and two chrominance components, thereby generating a color-transformed final corrected image, c) draw a preliminary conclusion regarding abnormalities in at least one eye of the subject based on the color-transformed final corrected image, d) represent the final corrected image using an HSV color scale, e) determine the white color content of the reflection from each eye of the subject based on the HSV color scale, f) electronically diagnose at least one ocular disease of the subject based on the white color content, and g) transmit at least one index to the processor. The at least one index can be based on the electronic diagnosis of the at least one ocular disease.
[0016] In one aspect, the central server can be further configured to generate a machine-learning model to classify at least one of: the color-transformed final corrected image, or the final corrected image, as at least one of a normal eye, a deformed eye, or an eye with a tumor. In one aspect, the central server is configured to generate the machine-learning model based on a database of classified images. In one aspect, the database includes a corresponding classification for each of a plurality of sample color-transformed final corrected images and each of a plurality of sample final corrected images. The corresponding classification can be provided by an expert.
[0017] The present invention relates to a system for the preliminary diagnosis of ocular diseases, said system comprising: a device for capturing images, or camera;
a device for generating light, or flash;
a screen for displaying the image;
a memory for storing data;
a computational application stored in the memory that executes the process of capturing a plurality of images of the eyes of an individual, obtains a final corrected image through the processing of said plurality of images, and performs a post-processing of the final corrected image by calculating the percentage of the colors that compose the pupillary reflex of each eye and comparing it with the values obtained for previous clinical cases;
and a processor functionally attached to the camera, the flash, the screen and the memory, such that it runs the application.
[0018] In the system of the invention, the memory further includes images for comparing the final corrected image with clinical cases previously diagnosed with ocular diseases.
[0019] The system can be implemented using a computational device, a smart phone, or any device connected to a camera, either an internal camera or a webcam, and to a lighting device of the built-in flash type.
[0020] The present invention also includes an "ex vivo" method for the preliminary diagnosis of ocular diseases, comprising the steps of: focusing the image of the individual's eyes, using a camera and a screen of a computing device; eliminating ambient lighting: if a light is on in the room, the light is turned off, and if there is natural light, the windows or curtains are closed to decrease it;
capturing a plurality of images of the individual's eyes with said camera, using the flash; processing the plurality of images, using a computational application, in order to obtain a final corrected image of the individual's eyes;
and displaying said final corrected image on the screen and visually comparing it with clinical cases previously diagnosed with ocular diseases.
[0021] For the processing of the plurality of images, in order to obtain a final corrected image, the computational application includes the following steps: i. making a first selection of images from the plurality of images; ii. obtaining an approximation of the area of the individual's face in each image of said first selection; iii. aligning said first selection of images, by detecting edges and computing the spatial translation of each image of said first selection; iv. determining the area of the two eyes in each image of said first selection; v. obtaining a determined location of the center of the two eyes from each image of said first selection; vi. making a second selection of images, from said first selection, to select the single image of the individual's eyes with the greatest sharpness; vii. processing said single image to obtain a final corrected image with greater focus; viii. cropping the eyes of the individual from the final corrected image, based on the determined location of the centers, and calculating the area of the two eyes; and ix. post-processing the final corrected image, to detect the percentage of the colors that compose the pupillary reflex.
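By way of non-limiting illustration, the pipeline of steps i-ix can be sketched as follows in Python with OpenCV and NumPy. This sketch is not the application's own code: the function names, cascade choices and parameters are assumptions, steps iii-v are only indicated by a comment, and a Laplacian-variance measure stands in for the FFT-based sharpness measure described later in the detailed description.

import cv2
import numpy as np

def mean_luma(img_bgr):
    # step i helper: average Y of a frame after conversion to YUV
    return cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YUV)[:, :, 0].mean()

def process_sequence(frames, face_cascade):
    # i. keep the brightest frames (those lit by the flash), at most ten
    frames = sorted(frames, key=mean_luma, reverse=True)[:10]
    # ii. approximate the face area in the first frame and crop every frame to it
    gray0 = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray0, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])
    crops = [f[y:y + h, x:x + w] for f in frames]
    # iii.-v. alignment and eye/pupil localization would be performed here
    # vi. pick the sharpest crop (Laplacian variance used as a stand-in measure)
    scores = [cv2.Laplacian(cv2.cvtColor(c, cv2.COLOR_BGR2GRAY), cv2.CV_64F).var() for c in crops]
    best = crops[int(np.argmax(scores))]
    # vii. digital refocusing by unsharp masking
    blurred = cv2.GaussianBlur(best, (0, 0), 3)
    refocused = cv2.addWeighted(best, 1.5, blurred, -0.5, 0)
    # viii.-ix. cropping of each eye and color post-processing would follow
    return refocused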
[0022] The computational application makes said first selection from the plurality of images obtained by the camera, discriminating on the luminance of the pixels and selecting for said first selection between 1 and 60 images, preferably the best 10 images.
[0023] The computational application obtains the approximation of the individual's face by detecting it in the first image captured from said first selection, and then cropping all later images to the same area as the first image for further processing.
[0024] On the other hand, to align said first selection of the plurality of images, the computational application finds the edges in the first image captured from said first selection and searches for these edges in the later images, in order to calculate the translation of these images with respect to said first image. It then calculates the location of the center of the pupil of each eye in each image, removing outliers and averaging the positions of said centers to obtain the best determined location of the centers.
[0025] The computational application makes a second selection with respect to the sharpness of each image of said first selection, obtaining a value representative of the sharpness of each image and selecting the single image with the greatest sharpness, which is then corrected, using the area of each eye and the determined location of the centers, in order to obtain the final corrected image with greater focus.
[0026] Finally, the computational application performs a post-processing of the final corrected image, calculating the percentage of the colors that compose the pupillary reflex of each eye, considering the red, white, orange and yellow colors.
[0027] In this process, the following three cases are defined (an illustrative sketch of these rules is given after the list):
• If the red color covers more than 50% of the pixels that compose the area of the pupil in any of the eyes of the final processed image, the image is most likely of a normal eye.
• If the red color covers more than 50% of the pixels that compose the area of the pupil, while the yellow and/or orange percentages exceed 10% of the pixels that compose the area of the pupil in one of the eyes of the final processed image, that eye probably presents a type of refractive defect.
• If the red color covers less than 50% of the pixels that compose the area of the iris and the pupil, while the white color corresponds to a percentage higher than 40% in any of the eyes of the final processed image, the diagnosis corresponds to a suspicion of organic and/or structural disease.
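By way of non-limiting illustration, the three cases above can be expressed as the following sketch in Python, assuming the per-eye color percentages over the pupil (and iris, for the third case) have already been computed; the function name and the example values are illustrative only.

def classify_reflex(red_pct, yellow_pct, orange_pct, white_pct):
    # coarse label for one eye based on its pupillary-reflex color percentages
    if red_pct > 50 and (yellow_pct + orange_pct) > 10:
        return "probable refractive defect"
    if red_pct > 50:
        return "most likely normal"
    if red_pct < 50 and white_pct > 40:
        return "suspicion of organic and/or structural disease"
    return "inconclusive - refer to a specialist"

print(classify_reflex(red_pct=72, yellow_pct=3, orange_pct=1, white_pct=2))   # most likely normal
print(classify_reflex(red_pct=30, yellow_pct=2, orange_pct=1, white_pct=55))  # suspicion of organic and/or structural disease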
[0028] The clinical cases that are previously diagnosed and used as a reference for comparison with the images produced by the system of the invention consist of a set of three or more images previously obtained by the computing device, which represent normal cases, clinical cases of refractive defects, and other ocular diseases.
[0029] Among the refractive defects that can be diagnosed with the system of the invention are hyperopia, astigmatism and myopia; it is also possible to perform a fast screening of other ocular diseases, such as organic diseases and ocular functional diseases, including tumors, malformations, strabismus, cataracts, etc.
[0030] It should be appreciated that all combinations of the foregoing concepts and additional concepts discussed in greater detail below (provided such concepts are not mutually inconsistent) are contemplated as being part of the inventive subject matter disclosed herein. In particular, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the inventive subject matter disclosed herein. It should also be appreciated that terminology explicitly employed herein that also may appear in any disclosure incorporated by reference should be accorded a meaning most consistent with the particular concepts disclosed herein.
[0031] Other systems, processes, and features will become apparent to those skilled in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, processes, and features be included within this description, be within the scope of the present invention, and be protected by the accompanying claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0032] The skilled artisan will understand that the drawings primarily are for illustrative purposes and are not intended to limit the scope of the inventive subject matter described herein. The drawings are not necessarily to scale; in some instances, various aspects of the inventive subject matter disclosed herein may be shown exaggerated or enlarged in the drawings to facilitate an understanding of different features. In the drawings, like reference characters generally refer to like features (e.g., functionally similar and/or structurally similar elements).
[0033] FIG. 1 is a front and rear view of a smart phone, according to the invention.
[0034] FIG. 2 illustrates the process of acquiring a sequence of images using an application.
[0035] FIG. 3 is an example of using the device while the individual's eyes are focused, in this case, infant eyes.
[0036] FIG. 4 is a screenshot of the application, running on a device according to the invention.
[0037] FIG. 5 is a schematic illustration of one implementation of a system for preliminary diagnosis of ocular diseases.
[0038] FIG. 6 is an example of a diagnosis type, obtained from the device, according to the invention, of the normal pupillary reflex.
[0039] FIG. 7 is an example of a diagnosis type, of a pupillary reflex with refractive ocular problems.
[0040] FIG. 8 is an example of a diagnosis type, of a pupillary reflex with serious ocular problems.
[0041] FIGS. 9A and 9B are a comparison of an image obtained by an electronic device according to the invention (FIG. 9A) and the final image processed by the application in the same computing device (FIG. 9B).
[0042] FIG. 10 shows a flow diagram illustrating a method for preliminary diagnosis of ocular diseases.
[0043] FIGS. 11-13 illustrate an example workflow showing transformation of images to the YCbCr color space.
DETAILED DESCRIPTION
[0044] The present invention is a practical and reliable solution for the rapid diagnosis of ocular problems, which allows a preliminary examination using only smart phones or tablet-type devices, currently used by millions of people worldwide. The application can be run by parents, paramedics, pediatricians and ophthalmologists without the need for a more complex instrument or experience in its use, and effectively allows conducting a test to detect ocular problems. A prototype of this application was tested in 100 infants, among whom 3 children with problems were detected; these children were referred to specialists, who confirmed the ocular problems. The system allows conducting a preliminary medical test regarding the pupillary reflex (pupillary red reflex or Bruckner test) and the corneal reflex (Hirschberg test).
[0045] Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developer's specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.
[0046] The present invention relates essentially to a system and method employing a computational application that can be executed on mobile devices and related devices, which allows obtaining a preliminary examination of ocular conditions, using the pupillary and corneal reflexes obtained from a photograph of the eyes.
[0047] Unlike the old cameras of the non-digital age, digital cameras and mobile devices, such as current smartphones or tablets, are programmed with a timing setup between the camera and the flash, in such a way as to avoid the reflection of the red pupil in the pictures obtained with them. Put differently, these digital cameras are programmed to avoid the red-eye effect in images. However, it is well known that this reflex and/or the red-eye effect carries important information about ocular diseases and can be used for their detection as a preliminary screening.
[0048] Since mobile devices are currently used by millions of people around the world, the purpose of the present invention is to provide an easy-to-use application for the general population, which does not require the use of complex ophthalmic instruments and which recreates the effect of old cameras that could capture the reddish reflex of the eyes (e.g., the red-eye effect), but also includes processing of the obtained image so that this reflection is sharper and more focused. The systems and methods disclosed herein also provide techniques to electronically diagnose ocular diseases from the processed images.
[0049] Furthermore, this computational application has been particularly useful for avoiding the problems associated with performing ocular examinations in infants, since it is not necessary to put them to sleep, keep them focused, or subject them to long ocular examinations, nor is it necessary to dilate their pupils with pharmacological drops, with the consequent disadvantages that these usually produce. Infants are the group in greatest need of continuous ocular controls, because at this age they can develop many of the ocular problems that could affect their lives as adults and that often fail to be detected early.
[0050] The computational application (e.g., mobile application, web application, native application, hybrid application, and/or the like) of the present invention can be installed on any electronic device. A non-limiting example of a smartphone, according to the invention, is an iPhone 4S®, marketed by Apple Inc. and shown in FIG. 1. This smartphone has a camera for capturing images (lens 1), a device generating light or flash 2, a screen 3 that allows displaying images and serves to focus on the individual, a memory that stores an application (e.g., mobile application, web application, native application, hybrid application, and/or the like) and images, and a processor that runs the application to obtain the final images.
[0051] FIG. 2 illustrates the process 200 of acquiring a sequence of images 212 using an application (e.g., mobile application, web application, native application, hybrid application, and/or the like) described herein. In one implementation, the application can automatically detect ambient light. If the detected ambient light exceeds a certain threshold (e.g., a pre-defined threshold, a threshold that can be dynamically updated on the application, a user-defined threshold, and/or the like), the application can warn the user that the ambient light has exceeded the threshold (e.g., at 202).
[0052] At 204, in order to attract the attention of the individual whose images are being captured (who in the examples above may be a child or youth) toward the camera/image acquisition device, an audible alert can be provided. In one aspect, the audible alert can be timed appropriately with the flash (e.g., at 206) and the camera (e.g., at 208) so that the individual stares at the camera while the images are acquired (e.g., at 210).
[0053] In one instance, the audible alert may be in the form of a barking dog; this example is particularly useful because in many cases it is instinctive for a young person to be attracted to the sound of a barking dog and accordingly turn their gaze and attention toward the direction the barking is coming from (e.g., the speaker of a smart phone being used for acquiring images). The barking dog (e.g., at 204) may be timed so that the flash (e.g., at 206) and camera (e.g., at 208) begin the process of image acquisition (e.g., at 210) in tandem with the barking-dog audible alert, or shortly thereafter at an appropriate time. The barking-dog alert may continue during image acquisition in some cases, or only at the beginning of the process to attract the attention of the individual. It should be appreciated that in some implementations steps 202-210 can occur simultaneously, while in other implementations steps 202-210 can occur individually and/or in tandem with one or more steps in any order. It should be appreciated that other forms of audible alerts to attract the attention and gaze of the individual to the camera may be employed, which may include human voice cues, other animal noises, musical tones or portions of well-known songs (e.g., nursery rhymes), etc.
[0054] In one aspect, the sequence of images 212 that are acquired can be transmitted to a central server (e.g., a web server, a remote server, a remote processor, etc.).
[0055] FIG. 3 illustrates how the camera of the device in question is activated; the output of the camera is shown on the screen 3. The focus point is marked with respect to the individual's eyes 4 by touching the screen 5, in order then to lower the amount of ambient lighting. The pupil dilates naturally in low light, so, at this moment, the capture of a plurality of images is activated using the application.
[0056] FIG. 4, which is a graphical representation of a screenshot of the application, shows the button for initiating the capture of a plurality of images 6, a settings button 7 and a button to display the obtained images 8.
[0057] FIG. 5 is a schematic illustration of one implementation of the system 500 for preliminary diagnosis of ocular diseases. In one implementation, the system 500 can include an image capturing device (e.g., a mobile device with a camera). An application (e.g., a mobile application) can be installed on the image capturing device and/or the image capturing device can be in digital communication with the application. The example implementation in FIG. 5 shows the application and the image capturing device collectively, for example, as application 502.
[0058] The application 502 can transmit one or more images that are captured to a central server 504 via a network 506. In one aspect, the application 502 may implement the acquisition process 200 illustrated in FIG. 2 to acquire the images. In one aspect, the images acquired by the application 502 can include the red-eye effect. In one aspect, the application 502 can process the images to center the images and sharpen them. The sharpened images can be transmitted from the application 502 to the central server 504 via the network 506.
[0059] The central server 504 can be a remote server, a web server, a processor and/or the like. The central server 504 processes the images to transform the images to a luminance-based color space, and determines the white color content in the images. In one implementation, the central server 504 can include a machine-learning module and/or an artificial intelligence module to electronically diagnose ocular diseases based on the processed images. In one aspect, the preliminary diagnosis can be transmitted back to the application 502 via the network 506.
[0060] It should be appreciated that the network 506 can be based on any suitable technology and can operate according to any suitable protocol. The network 506 can be a local area network, a wide area network, such as an intelligent network, or the Internet. In some implementations, the network 506 may be wired. In other implementations, the network 506 may be wireless.
[0061] To begin capturing pictures, the application turns on the flash light 2, but the pictures begin to be processed only when the application estimates that what is being captured is already under the influence of the light from the flash. The application estimates the amount of light contained in each image by transforming it to the Y'UV color space, which comprises a luminance component Y' and two chrominance components U and V. The application calculates the average of the component Y', which represents the luminance of the pixels. Then, by comparing the luminance before and during the capture of frames, the application determines from which frame to start capturing, since from this frame onward it is known that the flash 2 is affecting the captured image.
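By way of non-limiting illustration, the flash-onset check of paragraph [0061] may be sketched as follows in Python with OpenCV; the baseline length and the 1.3 gain factor are illustrative assumptions, not values taken from the application.

import cv2

def average_luminance(frame_bgr):
    # mean of the Y (luminance) channel after conversion to YUV
    return float(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV)[:, :, 0].mean())

def first_flash_frame(frames, baseline_count=3, gain=1.3):
    # compare the luminance before and during capture to find where the flash takes effect
    baseline = sum(average_luminance(f) for f in frames[:baseline_count]) / baseline_count
    for i in range(baseline_count, len(frames)):
        if average_luminance(frames[i]) > gain * baseline:
            return i  # from this frame onward the flash is affecting the image
    return None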
[0062] Frames containing no flash light 2 are discarded. The application does this by removing an arbitrary number of frames captured before the flash 2 takes effect, so as then to be able to capture the ten images to be used in the process.
[0063] The capture of the first image of the process is different from the others, since in this frame the approximate area of the individual's face is detected using an appropriate "haar cascade", a process that captures the best section of the individual's face; this section is cropped, obtaining the image to be used, which minimizes the amount of information to be processed by the application. To obtain the rest of the images, the same detected area is cropped, obtaining images of the same size as the first. Notably, the first frames after the flash 2 takes effect are those where the greatest effect on the retinal reflex occurs, because at that time the pupil is still dilated due to the little pre-flash light. For this reason, the number of frames used does not exceed ten.
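By way of non-limiting illustration, the face detection and cropping step of paragraph [0063] may be sketched as follows with OpenCV's stock frontal-face Haar cascade; the cascade file and detection parameters used by the application may differ.

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face_region(frames):
    gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return frames  # no face found; leave the frames untouched
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])  # best (largest) detection
    # the rectangle found in the first frame is reused for all later frames,
    # so every cropped image has the same size as the first
    return [f[y:y + h, x:x + w] for f in frames]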
[0064] Once the ten images have been captured, a camera stabilization process is performed, which helps to reduce camera shake or movement of the person across the sequence. To achieve this, first the positions of the prominent edges of the image ("good features to track") are detected. These same points are then searched for in the next image frame by calculating the "optical flow". After locating the points of the first image in the following one, the translation suffered by the following image with respect to its predecessor is calculated. For this, the average of the motion vectors of all the prominent edges is calculated, and the image is translated by that amount. This keeps the eyes in the same position in all the captured pictures, so it is possible, as will be explained later, to perform the detection of the important features using not one but several pictures.
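By way of non-limiting illustration, the stabilization step of paragraph [0064] may be sketched as follows: prominent corners are detected in the first frame ("good features to track"), tracked with sparse optical flow, and each later frame is shifted back by the accumulated average motion vector. The parameter values are illustrative assumptions.

import cv2
import numpy as np

def stabilize(frames):
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=10)
    out = [frames[0]]
    shift = np.zeros(2)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
        good_new = nxt[status.flatten() == 1]
        good_old = prev_pts[status.flatten() == 1]
        if len(good_new) == 0:          # tracking lost; keep the frame as is
            out.append(frame)
            continue
        shift += (good_new - good_old).reshape(-1, 2).mean(axis=0)  # average motion vector
        h, w = gray.shape
        M = np.float32([[1, 0, -shift[0]], [0, 1, -shift[1]]])      # undo the accumulated translation
        out.append(cv2.warpAffine(frame, M, (w, h)))
        prev_gray, prev_pts = gray, good_new.reshape(-1, 1, 2)
    return out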
[0065] With the images already aligned, the area enclosed by each eye in each of the images is calculated, using a "haar cascade" for both the right eye and the left eye. Afterwards, using the image gradients, the center of the pupil of each eye is obtained in each image. The images in which not all the features could be detected are then discarded.
[0066] At this point, a set of center positions and a set of areas where the eyes are located are available. With these, it is possible to calculate a position that better represents the eye center. To do this, first all outliers are eliminated from the set of centers and then the positions of the remaining centers are averaged. A similar process is applied to each of the squares enclosing the eyes, obtaining the best square enclosing each eye. At the end, the process selects the least blurry picture as the final image.
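By way of non-limiting illustration, the outlier removal and averaging of the pupil-center candidates described in paragraph [0066] may be sketched as follows; the 2-standard-deviation rule is an illustrative assumption, and the same idea applies to the squares enclosing the eyes.

import numpy as np

def robust_center(centers, z=2.0):
    pts = np.asarray(centers, dtype=float)   # one (x, y) candidate per image
    med = np.median(pts, axis=0)
    dist = np.linalg.norm(pts - med, axis=1)
    spread = dist.std() or 1.0               # guard against a zero spread
    keep = pts[dist <= z * spread]           # drop outlying candidates
    return keep.mean(axis=0) if len(keep) else med

print(robust_center([(102, 80), (101, 79), (103, 81), (140, 120)]))  # the (140, 120) outlier is rejected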
[0067] To perform the above, each of the images is first defocused using a Gaussian filter. Then the fast Fourier transform (FFT) is calculated, and the average of the 90% highest values is computed, obtaining a value that estimates how sharp the image is. The chosen image is also passed through another process called "unsharp masking" to focus it digitally, which consists of blurring the image using a Gaussian blur and subtracting the result from the original image on a weighted basis to obtain a greater focus. Then, for each eye, the corresponding portion of the image is cropped from the best frame obtained in the previous step, and another image, corresponding to the pupil and iris of the eye, is cropped around the best center also obtained in the previous step.
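By way of non-limiting illustration, the sharpness measure and the unsharp masking of paragraph [0067] may be sketched as follows; the Gaussian sigmas and the weighting amount are illustrative assumptions.

import cv2
import numpy as np

def sharpness_score(gray):
    # blur, take the FFT magnitude, and average the highest 90% of values
    blurred = cv2.GaussianBlur(gray, (0, 0), 2)
    mag = np.abs(np.fft.fft2(blurred.astype(np.float32)))
    top = np.sort(mag.ravel())[int(0.1 * mag.size):]
    return float(top.mean())

def unsharp_mask(img, sigma=3, amount=1.5):
    # subtract a weighted Gaussian blur from the original to increase focus
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    return cv2.addWeighted(img, 1 + amount, blurred, -amount, 0)

def pick_and_refocus(images):
    grays = [cv2.cvtColor(i, cv2.COLOR_BGR2GRAY) for i in images]
    best = int(np.argmax([sharpness_score(g) for g in grays]))
    return unsharp_mask(images[best])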
[0068] By the process just described, a good reflection of the retina can be obtained, producing a color that allows diagnostic analysis. This color is usually related to the internal condition of the eye. In a normal patient it will have a reddish tonality; in abnormal cases a white color may be detected, which may indicate the existence of some abnormal body inside the eye, or a yellow color, indicating some eye deformation. A post-processing therefore takes place in which it is necessary to detect which color appeared in the captured pupillary reflex. To do this, the amount of red, white and yellow color in the image of the pupil is calculated: the image of the pupil of each eye is transformed to the HSV color space and passed through a mask that turns white all colors within a specific range. The percentage of white pixels is then calculated, yielding the percentage of that color in the image (an illustrative sketch of this computation is given after the list of cases below).
[0069] • If the predominant color is red, it is likely that the eye is normal. FIG. 6 is an example of this case, where the reflection of the red pupil 10, 11, 12 and 13 seen in both eyes is normal.
[0070] • If the predominant color is red, with a percentage of orange or yellow color, there is likely to be a common sight problem. FIG. 7 is an example of this case, where the presence of a yellow reflection in the right eye 14 of the patient may be a sign of refractive errors or strabismus. It is recommended that this patient request a visit to the ophthalmologist.
[0071] • If the predominant color is white, there is probably a tumor disease in the eye. FIG. 8 is an example of this case, where the red reflex seen in the right eye 15 is normal. The white reflection in the left eye 16 may be a sign of a dangerous condition within the patient's eye. It is recommended that the patient visit an ophthalmologist urgently.
[0072] • If none of the above situations occurs, the analysis does not reach a conclusive result, so the patient should consult an expert and undergo more complex tests with the proper equipment in order to obtain a more accurate diagnosis.
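By way of non-limiting illustration, the color-percentage computation of paragraph [0068] may be sketched as follows: the pupil crop is converted to HSV, each color of interest is masked with cv2.inRange, and the fraction of pixels inside each range is reported. The HSV bounds below are rough illustrative choices, not values taken from the patent.

import cv2
import numpy as np

RANGES = {
    "red":    [((0, 80, 60), (10, 255, 255)), ((170, 80, 60), (180, 255, 255))],
    "orange": [((10, 80, 60), (20, 255, 255))],
    "yellow": [((20, 80, 60), (35, 255, 255))],
    "white":  [((0, 0, 180), (180, 60, 255))],
}

def color_percentages(pupil_bgr):
    hsv = cv2.cvtColor(pupil_bgr, cv2.COLOR_BGR2HSV)
    total = hsv.shape[0] * hsv.shape[1]
    pct = {}
    for name, bounds in RANGES.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in bounds:
            mask |= cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
        pct[name] = 100.0 * cv2.countNonZero(mask) / total
    return pct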
[0073] Finally, the final processed image (FIG. 9B) can be viewed and used for diagnosing ocular diseases. FIGS. 9A and 9B show a comparison between an image normally captured with an electronic device according to the invention (FIG. 9A) and the final image processed by the computational application (FIG. 9B).
[0074] FIG. 10 shows a flow diagram illustrating a method 1000 for electronically diagnosing ocular diseases. At step 1002, a central server (e.g., web server 504 in FIG. 5) receives digitally refocused sharp images of the eyes of a subject. In order to obtain sharpened images of the eyes of the subject, a sequence of images of the eyes can be captured using a camera with a flash. In some aspects, an audible cue (e.g., a barking dog) is provided at the outset of capturing the images to attract the subject's attention to the camera. The acquisition process to capture the sequence of images is implemented in such a manner as to not lose the red-eye effect. The acquired sequence of images is initially processed on an application that is in communication with the camera. The application may either be installed on a device with the camera or may be communicably coupled to the camera. The application can process the sequence of images in order to localize the respective pupils of the eyes of the subject. In one aspect, the application transforms each digital image to the Y'UV color space to determine an average pixel luminance, and any digital image that does not have sufficient luminance is discarded from the sequence; preferably, approximately ten digital images are maintained in the sequence. A Haar cascade is applied to the first remaining image to identify the subject's face, and this first image is accordingly cropped to provide a cropped image of the subject's face. The remaining images in the sequence are identically cropped to leave the same pixels as the first image. An optical flow is then calculated for the cropped images to determine translational shifts from image to image based on averaged motion vectors, and the respective images are shifted relative to each other based on the motion vectors so that the subject's eyes are in the same location in each image. The locations of the subject's eyes are identified in each image, again using a Haar cascade, and the center of the pupil of each eye is identified using image gradients. Each image is then defocused using a Gaussian filter, and a Fast Fourier Transform (FFT) of the defocused image is calculated to obtain a value representing image sharpness. The sharpest image is digitally refocused and then cropped again to provide respective sub-images of the pupil and iris of each eye.
[0075] At step 1006, the sharpest digitally refocused image is color processed by the central server to transform the color content in the image from an RGB color space to a luminance-based color space comprising one luma and two chrominance components. For instance, the sharpest digitally refocused image can be transformed from an RGB color space to a YUV color space, or a YCbCr color space. This transformation decouples the effect of the brightness of the environment on the images, thereby minimizing the effect of environmental conditions in the images. This transformed image can be analyzed to make preliminary conclusions about certain abnormalities in the eye. FIGS. 11-13 illustrate an example transformation of images to a YCbCr color space. By analyzing the refraction pattern in terms of color and position in these transformed images, preliminary conclusions can be made about abnormalities in the eye. For example, FIG. 11 represents a retinoblastoma pattern. FIGS. 12 and 13 represent visio-refraction patterns or refractive errors indicating an abnormality in the eye (e.g., astigmatism, etc.). In one aspect, an expert may analyze an initial set of transformed images in the YUV and/or YCbCr color space and may classify the images as representing a normal eye, a deformed eye, or an eye with a tumor. This initial set of classified images forms a knowledge base for a machine-learning module included in the central server. The initial set of classified images can be saved in a database and/or memory that is coupled to the central server.
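By way of non-limiting illustration, the transformation of paragraph [0075] may be sketched as follows; note that OpenCV names this conversion YCrCb, with the chrominance channels returned in the order Cr, Cb.

import cv2

def to_ycbcr_channels(eye_bgr):
    ycrcb = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    # Y carries the brightness; Cb and Cr carry the chrominance, which can be
    # inspected largely independently of the ambient lighting
    return y, cb, cr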
[0076] In one implementation, at step 1008, the sharpest refocused image in the RGB color space and/or the image transformed to a luminance-based color space can be represented using an HSV color scale. At step 1008, the white color content of the reflection from each eye can be determined based on the HSV color scale. In order to determine the white color content, the HSV value for the pupil portion of the eye in the RGB color space and/or the luminance-based color space can be calculated. An average Saturation (S) value for at least the pupil portion of the eye can then be determined. The average Saturation (S) value represents how much content of pure color (e.g., 100% color) and how much content of grey (e.g., 0% color) is present in that portion of the image. Therefore, using the HSV format, the color in that portion of the image can be described as an average expression of Hue (for example, yellow: Hue=60; red: Hue=0). However, a special case is white, where the saturation (S) is close to 0%. In this case, the Value, or luminance, should be high in order to obtain white; the latter depends on the lighting conditions. Bright white cannot always be achieved, but under real conditions lighter shades of gray may be obtained. In one aspect, an expert may analyze the white color content in an initial set of images and may classify this initial set of images as representing a normal eye, a deformed eye, or an eye with a tumor. These classified images can also be a part of the knowledge base for the machine-learning module included in the central server.
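By way of non-limiting illustration, the average Hue, Saturation and Value computation of paragraph [0076] may be sketched as follows; the 20% saturation and 60% value cut-offs used to flag a whitish reflex are illustrative assumptions.

import cv2

def average_hsv(pupil_bgr):
    hsv = cv2.cvtColor(pupil_bgr, cv2.COLOR_BGR2HSV).astype(float)
    h = hsv[:, :, 0].mean() * 2.0            # OpenCV hue runs 0-179, so scale to degrees
    s = hsv[:, :, 1].mean() / 255.0 * 100.0  # average saturation as a percentage
    v = hsv[:, :, 2].mean() / 255.0 * 100.0  # average value as a percentage
    return h, s, v

def looks_whitish(pupil_bgr, max_s=20.0, min_v=60.0):
    _, s, v = average_hsv(pupil_bgr)
    return s < max_s and v > min_v           # low saturation and high value ~ white or light gray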
[0077] In one implementation, the central server can include a machine-learning and/or artificial intelligence module to classify the images based on the white color content. In one aspect, a machine-learning model is generated from the knowledge base by applying one or more machine-learning techniques. In one aspect, an initial conclusion about ocular diseases can be determined by comparing the images in the luminance-based color space to the images in the knowledge base (in the luminance-based color space) that were classified by an expert. In another aspect, ocular diseases can be electronically diagnosed by comparing the HSV values of the images to the HSV values of images in the knowledge base that were classified by experts. By performing this comparison, the white color content of the reflection from each eye can be determined. If the white color content includes a hue of red, the machine-learning module classifies the eyes of the subject as normal. If the white color content includes a tint of yellow, the machine-learning module classifies the eyes of the subject as comprising a deformation. If the white color content includes a tint of white, the machine-learning module classifies the eyes of the subject as including a tumor. For example, the machine-learning module may implement one or more classification algorithms (e.g., algorithms based on distance, clustering, SVM, etc.) to determine an appropriate classification.
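By way of non-limiting illustration, the classification and the index value described in this and the following paragraph may be sketched with a nearest-neighbour rule over simple color features of expert-labelled reference images; the feature choice, the knowledge-base values and the binary index mapping are illustrative assumptions, and the real module may use any distance-based, clustering or SVM classifier.

import numpy as np

LABELS = ["normal", "deformation", "tumor"]
# hypothetical knowledge base: (average hue in degrees, saturation %, white pixel %) per labelled image
KB_FEATURES = np.array([[5.0, 70.0, 3.0], [45.0, 55.0, 8.0], [30.0, 12.0, 60.0]])
KB_LABELS = np.array([0, 1, 2])

def classify(features):
    d = np.linalg.norm(KB_FEATURES - np.asarray(features, dtype=float), axis=1)
    label = LABELS[KB_LABELS[int(np.argmin(d))]]
    index_value = 1.0 if label == "normal" else 0.0  # index returned to the application
    return label, index_value

print(classify([8.0, 65.0, 4.0]))    # ('normal', 1.0)
print(classify([28.0, 15.0, 55.0]))  # ('tumor', 0.0)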
[0078] In one implementation, an index value is generated based on the classification to indicate the presence and/or absence of ocular diseases in the eyes. For example, an index value of 1 can indicate normal eyes and an index value closer to 0 can indicate that the subject has at least one abnormal eye and will need to see a specialist. This index value is transmitted from the central server back to the mobile device and/or the application. Thus, ocular diseases can be diagnosed in a reliable, automated, and user-friendly manner.
[0079] In yet another example implementation, an external adapter (e.g., Prisma) may be employed in connection with the flash and camera of a smart phone, to allow different versions of smart phones (e.g., iPhone 4S, iPhone 5-series, iPhone 6, iPhone 7, etc.) to be used to implement the various concepts disclosed herein. In one aspect, such an external adapter may be used to adjust for the different distances between the flash and the camera on different smart phones, so as to obtain similar results on the different smart phones when implementing the concepts disclosed herein. In one instantiation, the adapter may comprise a macro and zoom lens.
[0080] An example implementation of methods for preliminary diagnosis of ocular disease is included in Appendix A. The underlying method implemented as code represented in Appendix A is robust and can be implemented in multiple programming languages.
[0081] While various inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure.
[0082] The above-described embodiments can be implemented in any of numerous ways. For example, embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers.
[0083] Further, it should be appreciated that a computer may be embodied in any of a number of forms, such as a rack-mounted computer, a desktop computer, a laptop computer, or a tablet computer. Additionally, a computer may be embedded in a device not generally regarded as a computer but with suitable processing capabilities, including a Personal Digital Assistant (PDA), a smart phone or any other suitable portable or fixed electronic device.
[0084] Also, a computer may have one or more input and output devices. These devices can be used, among other things, to present a user interface. Examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. Examples of input devices that can be used for a user interface include keyboards, and pointing devices, such as mice, touch pads, and digitizing tablets. As another example, a computer may receive input information through speech recognition or in other audible format.
[0085] Such computers may be interconnected by one or more networks in any suitable form, including a local area network or a wide area network, such as an enterprise network, an intelligent network (IN) or the Internet. Such networks may be based on any suitable technology, may operate according to any suitable protocol, and may include wireless networks, wired networks or fiber optic networks.
[0086] The various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine.
[0087] Also, various inventive concepts may be embodied as one or more methods, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
[0088] All publications, patent applications, patents, and other references mentioned herein are incorporated by reference in their entirety.
[0089] All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
[0090] The indefinite articles "a" and "an," as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean "at least one."
[0091] The phrase "and/or," as used herein in the specification and in the claims, should be understood to mean "either or both" of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with "and/or" should be construed in the same fashion, i.e., "one or more" of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the "and/or" clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to "A and/or B", when used in conjunction with open-ended language such as "comprising" can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
[0092] As used herein in the specification and in the claims, "or" should be understood to have the same meaning as "and/or" as defined above. For example, when separating items in a list, "or" or "and/or" shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as "only one of" or "exactly one of," or, when used in the claims, "consisting of," will refer to the inclusion of exactly one element of a number or list of elements. In general, the term "or" as used herein shall only be interpreted as indicating exclusive alternatives (i.e., "one or the other but not both") when preceded by terms of exclusivity, such as "either," "one of," "only one of," or "exactly one of." "Consisting essentially of," when used in the claims, shall have its ordinary meaning as used in the field of patent law.
[0093] As used herein in the specification and in the claims, the phrase "at least one," in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, "at least one of A and B" (or, equivalently, "at least one of A or B," or, equivalently "at least one of A and/or B") can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
[0094] In the claims, as well as in the specification above, all transitional phrases such as "comprising," "including," "carrying," "having," "containing," "involving," "holding," "composed of," and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases "consisting of" and "consisting essentially of" shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures, Section 2111.03.
Appendix A
//
// AppDelegate.h
// EyeArtifact
//
//

#import <UIKit/UIKit.h>

@interface AppDelegate : UIResponder <UIApplicationDelegate>

@property (strong, nonatomic) UIWindow *window;

@end

//
// AppDelegate.m
// EyeArtifact
//
//

#import "AppDelegate.h"

@implementation AppDelegate

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Override point for customization after application launch.
    return YES;
}

- (void)applicationWillResignActive:(UIApplication *)application
{
    // Sent when the application is about to move from active to inactive state. This can occur for certain types of temporary interruptions (such as an incoming phone call or SMS message) or when the user quits the application and it begins the transition to the background state.
    // Use this method to pause ongoing tasks, disable timers, and throttle down OpenGL ES frame rates. Games should use this method to pause the game.
}

- (void)applicationDidEnterBackground:(UIApplication *)application
{
    // Use this method to release shared resources, save user data, invalidate timers, and store enough application state information to restore your application to its current state in case it is terminated later.
    // If your application supports background execution, this method is called instead of applicationWillTerminate: when the user quits.
}

- (void)applicationWillEnterForeground:(UIApplication *)application
{
    // Called as part of the transition from the background to the inactive state; here you can undo many of the changes made on entering the background.
}

- (void)applicationDidBecomeActive:(UIApplication *)application
{
    // Restart any tasks that were paused (or not yet started) while the application was inactive. If the application was previously in the background, optionally refresh the user interface.
}

- (void)applicationWillTerminate:(UIApplication *)application
{
    // Called when the application is about to terminate. Save data if appropriate. See also applicationDidEnterBackground:.
}

@end
//
// AppManager.h
// JooyCar
//

#import <Foundation/Foundation.h>

#define osKey @"os"
#define verOSKey @"verOS"
#define appVerKey @"appVer"

@interface AppManager : NSObject

+ (NSString *)obtainOSVersion;
+ (NSString *)obtainOSName;
+ (NSString *)obtainAppVersion;
+ (NSMutableDictionary *)obtainAppData;

@end

//
// AppManager.m
// JooyCar
//
//

#import <UIKit/UIKit.h>
#import "AppManager.h"

@implementation AppManager

+ (NSString *)obtainOSName {
    return @"iOS"; // assumed literal: the original return value was lost in extraction
    /*
    UIDevice *currentDevice = [UIDevice currentDevice];
    return [currentDevice systemName];
    */
}

+ (NSString *)obtainOSVersion {
    UIDevice *currentDevice = [UIDevice currentDevice];
    return [currentDevice systemVersion];
}

+ (NSString *)obtainAppVersion {
    return [[NSBundle mainBundle] objectForInfoDictionaryKey:@"CFBundleShortVersionString"];
}

+ (NSString *)build
{
    return [[NSBundle mainBundle] objectForInfoDictionaryKey:(NSString *)kCFBundleVersionKey];
}

+ (NSString *)versionBuild
{
    NSString *version = [AppManager obtainAppVersion];
    NSString *build = [AppManager build];
    NSString *versionBuild = [NSString stringWithFormat:@"v%@", version];
    if (![version isEqualToString:build]) {
        versionBuild = [NSString stringWithFormat:@"%@(%@)", versionBuild, build];
    }
    return versionBuild;
}

+ (NSMutableDictionary *)obtainAppData {
    NSMutableDictionary *returnValue = [NSMutableDictionary new];
    [returnValue setObject:[AppManager obtainOSName] forKey:osKey];
    [returnValue setObject:[AppManager obtainOSVersion] forKey:verOSKey];
    [returnValue setObject:[AppManager obtainAppVersion] forKey:appVerKey];
    return returnValue;
}

@end
//
// CAboutController.h
// EyeArtifact
//
//

#import <UIKit/UIKit.h>

@interface CAboutController : UIViewController
{
    IBOutlet UILabel *mAboutText;
}

- (IBAction)onBackTap:(UIButton *)sender;

@end

//
// CAboutController.m
// EyeArtifact
//
//

#import "CAboutController.h"

@interface CAboutController ()

@end

@implementation CAboutController

- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Custom initialization
    }
    return self;
}

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.
}

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

- (BOOL)shouldAutorotate
{
    return YES;
}

- (NSUInteger)supportedInterfaceOrientations
{
    return UIInterfaceOrientationMaskPortrait;
}

- (UIInterfaceOrientation)preferredInterfaceOrientationForPresentation
{
    return UIInterfaceOrientationPortrait;
}

- (IBAction)onBackTap:(UIButton *)sender
{
    [self.navigationController popViewControllerAnimated:true];
}

@end
//
// CAppMgr.h
// EyeArtifact
//
//

#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

#define feedbackKey @"feedback"
#define mailKey @"email"

@interface CAppMgr : NSObject

@property (nonatomic) bool optionUseAutoCrop;
@property (nonatomic) bool optionExportWithData;
@property (nonatomic) bool globalFirstTimeUse;
@property (nonatomic) bool globalFirstTimeUseExamples;
@property (nonatomic) bool globalDisclaimerAgreed;
@property (nonatomic, strong) NSDictionary *globalCentersData;

+ (CAppMgr *)getPtr;

- (void)loadData;
- (void)saveData;
- (void)loadWebData;
- (void)showTutorialVideo:(UIViewController *)caller;
- (void)playTakePictureAudio:(UIViewController *)caller;
- (void)sendUserFeedbackText:(NSDictionary *)userParameters
                 onCompleted:(void (^)(id response))completeBlock
                   onFailure:(void (^)(NSError *error))failureBlock;
- (void)sendUserMailText:(NSDictionary *)userParameters
             onCompleted:(void (^)(id response))completeBlock
               onFailure:(void (^)(NSError *error))failureBlock;

@end
//
// CAppMgr. m
// EyeArtifact
//
//
#import "CAppMgr. h"
#import "XCDYouTubeVideoPlayerViewController . h"
#include <AVFoundation/AVFoundation . h>
#import "AFHTTPRequestOperationManager .h"
#import "AppManager . h"
#import "constants . h" static CAppMgr *sReference = nil;
(^implementation CAppMgr
{
NSString *_databasePath;
AVAudioPlayer *takePictureAudioPlayer ;
} + (CAppMgr*) getPtr;
{
if ( ! sReference)
{
sReference= [[super allocWithZone : NULL] init] ;
}
return sReference;
- ( id) init {
self = [super init];
if (self) {
DLog (@"_init") ;
self . optionUseAutoCrop true ;
self . globalFirstTimeUse true ;
self . optionExportWithData true ;
self . globalFirstTimeUseExamples true ;
self . globalDisclaimerAgreed false ;
NSString *soundFilePath = [ [NSBundle mainBundle]
pathForResource : @"sound_l" ofType : @"mp3"];
NSURL *fileURL = [ [NSURL alloc]
initFileURLWithPath : soundFilePath ] ;
takePictureAudioPlayer = [ [AVAudioPlayer alloc]
initWithContentsOfURL : fileURL
error : nil ] ;
}
return (self) ;
}
- (void) loadWebData
{
DLog(@"Load web data");
NSString *url = [NSString
stringWithFormat : @ "http : //app9. mo idreams . cl /eyecare /ws_centers . j son"
] ;
NSURLRequest ^request = [NSURLRequest requestWithURL : [NSURL URLWithString : ur1 ] cachePolicy : 0 timeoutlnterval : 8 ] ;
NSURLResponse *resp = nil;
NSError *err = nil; if (request)
{
NSData ^response = [NSURLConnection
sendSynchronousRequest : request returningResponse : Sresp error:
&err] ;
if (response)
{ NSDictionary *responseData = [NSJSONSerialization JSONObj ectWithData : response options: NSJSONReadingMutableContainers error : &err ] ;
if ( ! responseData)
{
DLog(@"Json Parsing error");
} else
{
self . globalCentersData = responseData;
}
}
}
DLog(@"Load web data fin") ;
}
- (void) sendUserFeedbackText:(NSDictionary*)userParameters
    onCompleted:(void (^)(id response))completeBlock onFailure:(void (^)(NSError *error))failureBlock{
    // NSString *url = @"http://dev.movidreams.cl/eyecare/ws/feedback.php";
    NSString *url = [NSString
        stringWithFormat:@"%@%@", BASE_URL, @"feedback.php"];
    NSMutableDictionary *parameters = [AppManager obtainAppData];
    [parameters addEntriesFromDictionary:userParameters];
    DLog(@"parameters %@", parameters);
    AFHTTPRequestOperationManager *manager =
        [AFHTTPRequestOperationManager manager];
    [manager POST:url parameters:parameters
        success:^(AFHTTPRequestOperation *operation, id responseObject) {
            DLog(@"JSON: %@", responseObject);
            completeBlock(responseObject);
        } failure:^(AFHTTPRequestOperation *operation, NSError *error) {
            DLog(@"Error: %@", error);
            failureBlock(error);
        }];
}
- (void) sendUserMailText:(NSDictionary*)userParameters
    onCompleted:(void (^)(id response))completeBlock
    onFailure:(void (^)(NSError *error))failureBlock{
    NSString *url = [NSString
        stringWithFormat:@"%@%@", BASE_URL, @"email.php"];
    AFHTTPRequestOperationManager *manager =
        [AFHTTPRequestOperationManager manager];
    NSMutableDictionary *parameters = [AppManager obtainAppData];
    [parameters addEntriesFromDictionary:userParameters];
    DLog(@"parameters %@", parameters);
    [manager POST:url parameters:parameters
        success:^(AFHTTPRequestOperation *operation, id responseObject) {
            DLog(@"JSON: %@", responseObject);
            completeBlock(responseObject);
        } failure:^(AFHTTPRequestOperation *operation, NSError *error) {
            DLog(@"Error: %@", error);
            failureBlock(error);
        }];
}
- (void) showTutorialVideo:(UIViewController*)caller {
    XCDYouTubeVideoPlayerViewController *videoPlayerViewController = [[XCDYouTubeVideoPlayerViewController alloc]
        initWithVideoIdentifier:NSLocalizedString(@"VIDEO_CODE", nil)];
    [caller
        presentMoviePlayerViewControllerAnimated:videoPlayerViewController];
}
- (void) playTakePictureAudio:(UIViewController*)caller {
    [takePictureAudioPlayer play];
}
- (void) loadData;
{
    NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
    if ([userDefaults objectForKey:@"globalFirstTimeUse"] != nil)
        self.globalFirstTimeUse = [(NSNumber*)[userDefaults objectForKey:@"globalFirstTimeUse"] boolValue];
    if ([userDefaults objectForKey:@"globalFirstTimeUseExamples"] != nil)
        self.globalFirstTimeUseExamples = [(NSNumber*)[userDefaults objectForKey:@"globalFirstTimeUseExamples"] boolValue];
    if ([userDefaults objectForKey:@"optionUseAutoCrop"] != nil)
        self.optionUseAutoCrop = [(NSNumber*)[userDefaults objectForKey:@"optionUseAutoCrop"] boolValue];
    if ([userDefaults objectForKey:@"globalDisclaimerAgreed"] != nil)
        self.globalDisclaimerAgreed = [(NSNumber*)[userDefaults objectForKey:@"globalDisclaimerAgreed"] boolValue];
    if ([userDefaults objectForKey:@"optionExportWithData"] != nil)
        self.optionExportWithData = [(NSNumber*)[userDefaults objectForKey:@"optionExportWithData"] boolValue];
    [self loadWebData];
}
- (void) saveData;
{
    NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
    [userDefaults setObject:[NSNumber numberWithBool:self.optionUseAutoCrop] forKey:@"optionUseAutoCrop"];
    [userDefaults setObject:[NSNumber numberWithBool:self.globalFirstTimeUse] forKey:@"globalFirstTimeUse"];
    [userDefaults setObject:[NSNumber numberWithBool:self.globalFirstTimeUseExamples] forKey:@"globalFirstTimeUseExamples"];
    [userDefaults setObject:[NSNumber numberWithBool:self.globalDisclaimerAgreed] forKey:@"globalDisclaimerAgreed"];
    [userDefaults setObject:[NSNumber numberWithBool:self.optionExportWithData] forKey:@"optionExportWithData"];
    [userDefaults synchronize];
}
@end
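The CAppMgr class above acts as the application-wide singleton that holds the user options and persists them through NSUserDefaults. The short sketch below is not part of the original listing; it illustrates how a caller such as a settings screen might exercise it. The function name and call site are hypothetical, and only the accessors shown above are assumed.

// Illustrative sketch only: exercises the CAppMgr singleton as declared above.
#import "CAppMgr.h"

static void togglePhotoExportOption(void)
{
    CAppMgr *appMgr = [CAppMgr getPtr];                          // shared instance, created on first use
    [appMgr loadData];                                           // restore persisted flags from NSUserDefaults
    appMgr.optionExportWithData = !appMgr.optionExportWithData;  // flip one option
    [appMgr saveData];                                           // write all flags back and synchronize
}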
//
// CCenterDetailController.h
// EyeArtifact
//
//
#import <UIKit/UIKit.h>
#import <MessageUI/MessageUI.h>
@interface CCenterDetailController :
UIViewController<MFMessageComposeViewControllerDelegate>
{
    IBOutlet UILabel *mTitleNameLabel;
    IBOutlet UILabel *mMainLabel;
}
- (IBAction) onBackTap:(id)sender;
- (IBAction) onSendTap:(id)sender;
- (void) setDocName:(NSDictionary *)docDic;
@end
//
// CCenterDetailController.m
// EyeArtifact
//
//
#import "CCenterDetailController.h"
#import "CAppMgr.h"
#import "CUtil.h"
@interface CCenterDetailController ()
{
    NSDictionary *_docDic;
    NSString *_sourceString;
    NSString *_countryString;
    NSString *_commentString;
    NSString *_nameString;
    NSString *_emailString;
    NSString *_phoneString;
    NSString *_adressString;
}
@end
@implementation CCenterDetailController
- (id) initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
    }
    return self;
}
- (void) viewDidLoad
{
    [super viewDidLoad];
    mMainLabel.text = @"";
    if ([_docDic objectForKey:@"source"] && ![[_docDic objectForKey:@"source"] isEqualToString:@""])
    {
        _sourceString = [NSString stringWithFormat:@"Source: %@", [_docDic objectForKey:@"source"]];
        // [mSourceLabel sizeToFit];
    } else
    {
        _sourceString = @"";
    }
    if ([_docDic objectForKey:@"name"] && ![[_docDic objectForKey:@"name"] isEqualToString:@""])
    {
        _nameString = [NSString stringWithFormat:@"Name: %@", [_docDic objectForKey:@"name"]];
        // [mNameLabel sizeToFit];
    } else
    {
        _nameString = @"";
    }
    if ([_docDic objectForKey:@"email"] && ![[_docDic objectForKey:@"email"] isEqualToString:@""])
    {
        _emailString = [NSString stringWithFormat:@"Email: %@", [_docDic objectForKey:@"email"]];
        // [mEmailLabel sizeToFit];
    } else
    {
        _emailString = @"";
    }
    if ([_docDic objectForKey:@"address"] && ![[_docDic objectForKey:@"address"] isEqualToString:@""])
    {
        _adressString = [NSString stringWithFormat:@"Address: %@", [_docDic objectForKey:@"address"]];
        // [mAddressLabel sizeToFit];
    } else
    {
        _adressString = @"";
    }
    if ([_docDic objectForKey:@"comment"] && ![[_docDic objectForKey:@"comment"] isEqualToString:@""])
    {
        _commentString = [NSString stringWithFormat:@"%@", [_docDic objectForKey:@"comment"]];
        // [mAddressLabel sizeToFit];
    } else {
        _commentString = @"";
    }
    if ([_docDic objectForKey:@"country"] && ![[_docDic objectForKey:@"country"] isEqualToString:@""])
    {
        _countryString = [NSString stringWithFormat:@"Country: %@", [_docDic objectForKey:@"country"]];
        // [mCountryLabel sizeToFit];
    } else
    {
        _countryString = @"";
        // [mCountryLabel sizeToFit];
    }
    NSArray* phones = [_docDic objectForKey:@"phone"];
    if ([phones count] > 1 && phones)
    {
        _phoneString = [NSString stringWithFormat:@"Phone: %@", [phones objectAtIndex:0]];
    } else
    {
        _phoneString = @"";
    }
    if (![_nameString isEqualToString:@""])
        mMainLabel.text = [mMainLabel.text stringByAppendingString:[NSString stringWithFormat:@"%@\r\n\r\n", _nameString]];
    if (![_countryString isEqualToString:@""])
        mMainLabel.text = [mMainLabel.text stringByAppendingString:[NSString stringWithFormat:@"%@\r\n\r\n", _countryString]];
    if (![_emailString isEqualToString:@""])
        mMainLabel.text = [mMainLabel.text stringByAppendingString:[NSString stringWithFormat:@"%@\r\n\r\n", _emailString]];
    if (![_adressString isEqualToString:@""])
        mMainLabel.text = [mMainLabel.text stringByAppendingString:[NSString stringWithFormat:@"%@\r\n\r\n", _adressString]];
    if (![_commentString isEqualToString:@""])
        mMainLabel.text = [mMainLabel.text stringByAppendingString:[NSString stringWithFormat:@"%@\r\n\r\n", _commentString]];
    if (![_phoneString isEqualToString:@""])
        mMainLabel.text = [mMainLabel.text stringByAppendingString:[NSString stringWithFormat:@"%@\r\n\r\n", _phoneString]];
    if (![_sourceString isEqualToString:@""])
        mMainLabel.text = [mMainLabel.text stringByAppendingString:[NSString stringWithFormat:@"\r\n\r\n%@", _sourceString]];
    [mMainLabel sizeToFit];
    // there can be 0 or N phone numbers
    // Do any additional setup after loading the view.
}
- (void) didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
- (BOOL) shouldAutorotate
{
    return YES;
}
- (NSUInteger) supportedInterfaceOrientations
{
    return UIInterfaceOrientationMaskPortrait;
}
- (UIInterfaceOrientation) preferredInterfaceOrientationForPresentation
{
    return UIInterfaceOrientationPortrait;
}
- (IBAction) onBackTap:(id)sender
{
    [self.navigationController popViewControllerAnimated:true];
}
- (IBAction) onSendTap:(id)sender
{
    [CUtil openEmailWithEmail:[_docDic objectForKey:@"email"] andTitle:@"" forController:self];
}
- (void) setDocName:(NSDictionary *)docDic;
{
    _docDic = docDic;
}
- (void) mailComposeController:(MFMailComposeViewController *)controller didFinishWithResult:(MFMailComposeResult)result error:(NSError *)error
{
    [self dismissViewControllerAnimated:YES completion:nil];
}
@end
//
// CCentersController.h
// EyeArtifact
//
//
#import <UIKit/UIKit.h>
@interface CCentersController : UIViewController
- (IBAction) onBackTap:(UIButton *)sender;
@end
//
// CCentersController.m
// EyeArtifact
//
//
#import "CCentersController.h"
@interface CCentersController ()
@end
@implementation CCentersController
- (id) initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Custom initialization
    }
    return self;
}
- (void) viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.
}
- (void) didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
- (BOOL) shouldAutorotate
{
    return YES;
}
- (NSUInteger) supportedInterfaceOrientations
{
    return UIInterfaceOrientationMaskPortrait;
}
- (UIInterfaceOrientation) preferredInterfaceOrientationForPresentation
{
    return UIInterfaceOrientationPortrait;
}
- (IBAction) onBackTap:(UIButton *)sender
{
    [self.navigationController popViewControllerAnimated:true];
}
@end
//
// CCentersCountryTableController.h
// EyeArtifact
//
//
#import <UIKit/UIKit.h>
@interface CCentersCountryTableController : UITableViewController
- (void) setData:(NSString*)data;
@end
//
// CCentersCountryTableController.m
// EyeArtifact
//
//
#import "CCentersCountryTableController.h"
#import "CCenterDetailController.h"
#import "CAppMgr.h"
@interface CCentersCountryTableController ()
{
    NSMutableArray *_countryData;
    NSMutableArray *_docData;
    NSString *_countryName;
}
@end
@implementation CCentersCountryTableController
- (id) initWithStyle:(UITableViewStyle)style
{
    self = [super initWithStyle:style];
    if (self) {
        // Custom initialization
    }
    return self;
}
- (void) setData:(NSString*)data;
{
    _countryName = data;
}
- (void) viewDidLoad
{
    [super viewDidLoad];
    _countryData = [[NSMutableArray alloc] init];
    _docData = [[NSMutableArray alloc] init];
    for (NSDictionary *doc in [[[CAppMgr getPtr].globalCentersData objectForKey:@"centers"] objectForKey:_countryName])
    {
        NSString *sds = [doc objectForKey:@"name"];
        [_docData addObject:doc];
        [_countryData addObject:sds];
    }
    [_countryData sortUsingSelector:@selector(localizedCaseInsensitiveCompare:)];
    NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
    NSArray *sortDescriptors = [NSArray arrayWithObject:sortDescriptor];
    [_docData sortUsingDescriptors:sortDescriptors];
    // Uncomment the following line to preserve selection between presentations.
    // self.clearsSelectionOnViewWillAppear = NO;
    // Uncomment the following line to display an Edit button in the navigation bar for this view controller.
    // self.navigationItem.rightBarButtonItem = self.editButtonItem;
}
- (void) didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
#pragma mark - Table view data source
- (NSInteger) numberOfSectionsInTableView:(UITableView *)tableView
{
    // Return the number of sections.
    return 1;
}
- (NSInteger) tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
    // Return the number of rows in the section.
    return [_countryData count];
}
- (UITableViewCell *) tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *CellIdentifier = @"CCountryCenterCell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier forIndexPath:indexPath];
    if (cell)
    {
        UILabel* label = (UILabel*)[cell viewWithTag:10];
        label.text = [_countryData objectAtIndex:indexPath.row];
    }
    return cell;
}
/*
// Override to support conditional editing of the table view.
- (BOOL) tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath
{
    // Return NO if you do not want the specified item to be editable.
    return YES;
}
*/
/*
// Override to support editing the table view.
- (void) tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath
{
    if (editingStyle == UITableViewCellEditingStyleDelete) {
        // Delete the row from the data source
        [tableView deleteRowsAtIndexPaths:@[indexPath] withRowAnimation:UITableViewRowAnimationFade];
    }
    else if (editingStyle == UITableViewCellEditingStyleInsert) {
        // Create a new instance of the appropriate class, insert it into the array, and add a new row to the table view
    }
}
*/
/*
// Override to support rearranging the table view.
- (void) tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath
{
}
*/
/*
// Override to support conditional rearranging of the table view.
- (BOOL) tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath
{
    // Return NO if you do not want the item to be re-orderable.
    return YES;
}
*/
#pragma mark - Table view delegate
- (void) tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
    // Navigation logic may go here. Create and push another view controller.
    CCenterDetailController *detailViewController = [self.storyboard instantiateViewControllerWithIdentifier:@"CCenterDetail"];
    [detailViewController setDocName:[_docData objectAtIndex:indexPath.row]];
    // ...
    // Pass the selected object to the new view controller.
    [self.navigationController pushViewController:detailViewController animated:YES];
}
@end
//
// CCentersCountyController.h
// EyeArtifact
//
//
#import <UIKit/UIKit.h>
@interface CCentersCountyController : UIViewController
{
    IBOutlet UILabel *mTitleLabel;
}
- (void) sendCountryName:(NSString*)name;
- (IBAction) onBackTap:(id)sender;
@end
//
// CCentersCountyController.m
// EyeArtifact
//
//
#import "CCentersCountyController.h"
#import "CCentersCountryTableController.h"
@interface CCentersCountyController ()
{
    NSString *_countryName;
}
@end
@implementation CCentersCountyController
- (id) initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Custom initialization
    }
    return self;
}
- (void) viewDidLoad
{
    [super viewDidLoad];
    mTitleLabel.text = _countryName;
    // Do any additional setup after loading the view.
}
- (BOOL) shouldAutorotate
{
    return YES;
}
- (NSUInteger) supportedInterfaceOrientations
{
    return UIInterfaceOrientationMaskPortrait;
}
- (UIInterfaceOrientation) preferredInterfaceOrientationForPresentation
{
    return UIInterfaceOrientationPortrait;
}
- (void) didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
- (void) sendCountryName:(NSString*)name;
{
    _countryName = name;
}
- (IBAction) onBackTap:(id)sender
{
    [self.navigationController popViewControllerAnimated:true];
}
- (void) prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
{
    if ([segue.identifier isEqualToString:@"CEmbed"])
    {
        CCentersCountryTableController *vc = (CCentersCountryTableController *)segue.destinationViewController;
        [vc setData:_countryName];
    }
}
@end
//
// CCentersTableController.h
// EyeArtifact
//
//
#import <UIKit/UIKit.h>
@interface CCentersTableController : UITableViewController
@end
//
// CCentersTableController.m
// EyeArtifact
//
//
#import "CCentersTableController.h"
#import "CCentersCountyController.h"
#import "CAppMgr.h"
@interface CCentersTableController ()
{
    NSMutableArray *_countryData;
    NSString *_countryName;
}
@end
@implementation CCentersTableController
- (id) initWithStyle:(UITableViewStyle)style
{
    self = [super initWithStyle:style];
    if (self) {
        // Custom initialization
    }
    return self;
}
- (void) viewDidLoad
{
    [super viewDidLoad];
    _countryData = [[NSMutableArray alloc] init];
    for (NSString *countryName in [[[CAppMgr getPtr].globalCentersData objectForKey:@"centers"] allKeys])
    {
        [_countryData addObject:countryName];
    }
    [_countryData sortUsingSelector:@selector(localizedCaseInsensitiveCompare:)];
    // Uncomment the following line to preserve selection between presentations.
    // self.clearsSelectionOnViewWillAppear = NO;
    // Uncomment the following line to display an Edit button in the navigation bar for this view controller.
    // self.navigationItem.rightBarButtonItem = self.editButtonItem;
}
- (void) didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
#pragma mark - Table view data source
- (NSInteger) numberOfSectionsInTableView:(UITableView *)tableView
{
    return 1;
}
- (NSInteger) tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
    return [_countryData count];
}
- (UITableViewCell *) tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *CellIdentifier = @"CCenterCell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier forIndexPath:indexPath];
    if (cell)
    {
        UILabel* label = (UILabel*)[cell viewWithTag:10];
        label.text = [_countryData objectAtIndex:indexPath.row];
    }
    // Configure the cell...
    return cell;
}
/*
// Override to support conditional editing of the table view.
- (BOOL) tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath
{
    // Return NO if you do not want the specified item to be editable.
    return YES;
}
*/
/*
// Override to support editing the table view.
- (void) tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath
{
    if (editingStyle == UITableViewCellEditingStyleDelete) {
        // Delete the row from the data source
        [tableView deleteRowsAtIndexPaths:@[indexPath] withRowAnimation:UITableViewRowAnimationFade];
    }
    else if (editingStyle == UITableViewCellEditingStyleInsert) {
        // Create a new instance of the appropriate class, insert it into the array, and add a new row to the table view
    }
}
*/
/*
// Override to support rearranging the table view.
- (void) tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath
{
}
*/
/*
// Override to support conditional rearranging of the table view.
- (BOOL) tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath
{
    // Return NO if you do not want the item to be re-orderable.
    return YES;
}
*/
#pragma mark - Table view delegate
- (void) tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
    // Navigation logic may go here. Create and push another view controller.
    _countryName = [_countryData objectAtIndex:indexPath.row];
    CCentersCountyController *detailViewController = [self.storyboard instantiateViewControllerWithIdentifier:@"CCenterCountry"];
    [detailViewController sendCountryName:_countryName];
    // Pass the selected object to the new view controller.
    [self.navigationController pushViewController:detailViewController animated:YES];
}
@end
//
// CDataMgr.h
// EyeArtifact
//
//
#import <Foundation/Foundation.h>
#import <sqlite3.h>
#import "CRegister.h"
@interface CDataMgr : NSObject
+ (CDataMgr*) getPtr;
- (BOOL) createDB;
- (BOOL) saveDataWithPatientName:(NSString*)name withImagePath:(NSString*)path andDate:(NSDate *)inDate;
- (BOOL) deleteRegisters:(NSArray*)inArray;
- (NSArray*) getGallery;
@end
//
// CDataMgr.m
// EyeArtifact
//
//
#import "CDataMgr.h"
#import "CRegister.h"
#import "constants.h"
static CDataMgr *sReference = nil;
static sqlite3 *sDatabase = nil;
static sqlite3_stmt *sStatement = nil;
@implementation CDataMgr
{
    NSString *_databasePath;
}
/* This application was co-developed with Maria Manquez, MD, eye specialist with extensive experience in Ocular Oncology. Doctor Manquez is a member of the International Society of Ocular Oncology and also part of the Eye Cancer Network (Eye Cancer Foundation). */
+ (CDataMgr*) getPtr;
{
    if (!sReference)
    {
        sReference = [[super allocWithZone:NULL] init];
        [sReference createDB];
    }
    return sReference;
}
- (BOOL) createDB;
{
    NSString *docsDir;
    NSArray *dirPaths;
    // Get the documents directory
    dirPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    docsDir = dirPaths[0];
    // Build the path to the database file
    _databasePath = [[NSString alloc] initWithString:
        [docsDir stringByAppendingPathComponent:@"eyes.db"]];
    BOOL isSuccess = YES;
    NSFileManager *filemgr = [NSFileManager defaultManager];
    if ([filemgr fileExistsAtPath:_databasePath] == NO)
    {
        const char *dbpath = [_databasePath UTF8String];
        if (sqlite3_open(dbpath, &sDatabase) == SQLITE_OK) {
            char *errMsg;
            const char *sql_stmt = "create table md_eye (id integer primary key AUTOINCREMENT, patient_name text, image_path text, create_time double)";
            if (sqlite3_exec(sDatabase, sql_stmt, NULL, NULL, &errMsg) != SQLITE_OK)
            {
                isSuccess = NO;
                DLog(@"Failed to create table");
            } else
            {
                DLog(@"Database Created");
            }
            sqlite3_close(sDatabase);
            return isSuccess;
        }
        else {
            isSuccess = NO;
            DLog(@"Failed to open/create database");
        }
    } else
    {
        DLog(@"The database already exists.");
    }
    return isSuccess;
}
- (BOOL) saveDataWithPatientName:(NSString*)name withImagePath:(NSString*)path andDate:(NSDate *)inDate;
{
    const char *dbpath = [_databasePath UTF8String];
    if (sqlite3_open(dbpath, &sDatabase) == SQLITE_OK)
    {
        NSString *insertSQL = [NSString stringWithFormat:@"insert into md_eye (patient_name, image_path, create_time) values (\"%@\", \"%@\", %f)", name, path, [inDate timeIntervalSince1970]];
        DLog(@"Inserting %@", insertSQL);
        const char *insert_stmt = [insertSQL UTF8String];
        sqlite3_prepare_v2(sDatabase, insert_stmt, -1, &sStatement, NULL);
        if (sqlite3_step(sStatement) == SQLITE_DONE)
        {
            return YES;
        }
        else
        {
            DLog(@"STATEMENT IS %i", sqlite3_step(sStatement));
            return NO;
        }
        sqlite3_reset(sStatement);
        sqlite3_close(sDatabase);
    } else
    {
        DLog(@"sqlite3_open(dbpath, &sDatabase) != SQLITE_OK");
    }
    return NO;
}
- (NSArray*) getGallery;
{
    const char *dbpath = [_databasePath UTF8String];
    if (sqlite3_open(dbpath, &sDatabase) == SQLITE_OK)
    {
        NSString *querySQL = [NSString stringWithFormat:@"select patient_name, image_path, create_time from md_eye ORDER BY id desc"]; // where regno=\"%@\"", registerNumber];
        const char *query_stmt = [querySQL UTF8String];
        NSMutableArray *resultArray = [[NSMutableArray alloc] init];
        if (sqlite3_prepare_v2(sDatabase, query_stmt, -1, &sStatement, NULL) == SQLITE_OK)
        {
            while (sqlite3_step(sStatement) == SQLITE_ROW) {
                CRegister *a = [[CRegister alloc] init];
                a.registerId = sqlite3_column_int(sStatement, 0);
                a.patientName = [[NSString alloc] initWithUTF8String:(const char *)sqlite3_column_text(sStatement, 1)];
                a.imagePath = [[NSString alloc] initWithUTF8String:(const char *)sqlite3_column_text(sStatement, 2)];
                a.creationTime = sqlite3_column_double(sStatement, 3);
                /* if ([a.patientName isEqualToString:@""])
                {
                    a.patientName = [NSString stringWithFormat:@"Unnamed patient"];
                } */
                [resultArray addObject:a];
                DLog(@"%@, %@", a.patientName, a.imagePath);
            }
            sqlite3_reset(sStatement);
            return resultArray;
        } else
        {
            DLog(@"Not SQLITE_OK FAILED");
        }
    }
    return nil;
}
- (BOOL) deleteRegisters:(NSArray*)inArray
{
    NSMutableString *idString = [[NSMutableString alloc] initWithFormat:@"-1"];
    for (CRegister *currentRegister in inArray)
    {
        [idString appendFormat:@",%i", currentRegister.registerId];
    }
    const char *dbpath = [_databasePath UTF8String];
    if (sqlite3_open(dbpath, &sDatabase) == SQLITE_OK)
    {
        NSString *querySQL = [NSString stringWithFormat:@"delete from md_eye where id in (%@)", idString];
        const char *query_stmt = [querySQL UTF8String];
        sqlite3_prepare_v2(sDatabase, query_stmt, -1, &sStatement, NULL);
        if (sqlite3_step(sStatement) == SQLITE_DONE)
        {
            return YES;
        }
        else
        {
            DLog(@"STATEMENT IS %i", sqlite3_step(sStatement));
            return NO;
        }
        sqlite3_reset(sStatement);
        sqlite3_close(sDatabase);
    }
    return false;
}
@end
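CDataMgr above wraps a local SQLite database (eyes.db) that stores one row per captured eye image. The following minimal sketch is not part of the original listing; it shows the intended round trip of saving a record and reading the gallery back, newest first. The patient name and image path used here are hypothetical placeholders, and only the CDataMgr and CRegister interfaces shown in this listing are assumed.

// Illustrative sketch only: stores one capture record and reads the gallery back.
#import "CDataMgr.h"
#import "CRegister.h"

static void demoGalleryRoundTrip(void)
{
    CDataMgr *dataMgr = [CDataMgr getPtr];                      // opens/creates eyes.db on first use
    [dataMgr saveDataWithPatientName:@"Sample Patient"          // hypothetical name
                       withImagePath:@"Documents/sample.png"    // hypothetical relative path
                             andDate:[NSDate date]];
    NSArray *gallery = [dataMgr getGallery];                    // rows come back ORDER BY id desc
    for (CRegister *reg in gallery) {
        NSLog(@"%@ -> %@", reg.patientName, reg.imagePath);
    }
}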
//
// CDisclaimerController.h
// EyeArtifact
//
//
#import <UIKit/UIKit.h>
@interface CDisclaimerController : UIViewController
{
    IBOutlet UIScrollView *mScrollView;
    IBOutlet UIButton *mBackButton;
    IBOutlet UILabel *mDisclaimerLabel;
}
- (IBAction) onBackTap:(UIButton *)sender;
- (IBAction) onAgreeTap:(UIButton *)sender;
- (void) setPopupMode:(bool)isPopupMode;
@end
//
// CDisclaimerController.m
// EyeArtifact
//
//
#import "CDisclaimerController.h"
#import "CAppMgr.h"
@interface CDisclaimerController ()
{
    bool _isPopupMode;
}
@end
@implementation CDisclaimerController
- (id) initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Custom initialization
        _isPopupMode = false;
    }
    return self;
}
- (void) viewDidLoad
{
    [super viewDidLoad];
    CGSize newSize = CGSizeMake(mScrollView.frame.size.width, MAX(550, mDisclaimerLabel.frame.size.height));
    [mScrollView setContentSize:newSize];
    if (_isPopupMode)
    {
        mBackButton.hidden = YES;
    } else
    {
        mBackButton.hidden = NO;
    }
}
- (void) didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
- (BOOL) shouldAutorotate
{
    return YES;
}
- (NSUInteger) supportedInterfaceOrientations
{
    return UIInterfaceOrientationMaskPortrait;
}
- (UIInterfaceOrientation) preferredInterfaceOrientationForPresentation
{
    return UIInterfaceOrientationPortrait;
}
- (IBAction) onBackTap:(UIButton *)sender
{
    [self.navigationController popViewControllerAnimated:true];
}
- (void) setPopupMode:(bool)isPopupMode;
{
    _isPopupMode = isPopupMode;
}
- (IBAction) onAgreeTap:(UIButton *)sender
{
    if ([CAppMgr getPtr].globalDisclaimerAgreed == false)
    {
        [CAppMgr getPtr].globalDisclaimerAgreed = true;
        [[CAppMgr getPtr] saveData];
    }
    [self.navigationController popViewControllerAnimated:NO];
}
@end
//
// CExamplesController.h
// EyeArtifact
//
//
#import <UIKit/UIKit.h>
@interface CExamplesController :
UIViewController<UIScrollViewDelegate>
{
    IBOutlet UIScrollView *mScrollView;
    IBOutlet UIPageControl *mPageController;
    IBOutlet UIView *mFooterView;
    IBOutlet UIButton *mBackButton;
}
- (void) setFirstTime:(bool)isFirstTime;
- (IBAction) onPlayVideo:(id)sender;
- (IBAction) onBackTap:(UIButton *)sender;
- (IBAction) onOkTap:(id)sender;
@end
//
// CExamplesController.m
// EyeArtifact
//
//
#import "CExamplesController.h"
#import <QuartzCore/QuartzCore.h>
#import "CAppMgr.h"
@interface CExamplesController ()
{
    bool _firstTime;
}
@end
@implementation CExamplesController
- (id) initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Custom initialization
    }
    return self;
}
- (void) viewDidLoad
{
    mScrollView.delegate = self;
    [super viewDidLoad];
    [mScrollView setContentSize:CGSizeMake(mScrollView.frame.size.width * 5, mScrollView.frame.size.width)];
    if (_firstTime)
    {
        mBackButton.hidden = YES;
        mFooterView.hidden = NO;
    } else
    {
        mBackButton.hidden = NO;
        mFooterView.hidden = YES;
    }
}
- (void) didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
- (void) setFirstTime:(bool)isFirstTime;
{
    _firstTime = isFirstTime;
}
- (IBAction) onPlayVideo:(id)sender {
    [[CAppMgr getPtr] showTutorialVideo:self];
}
- (BOOL) shouldAutorotate;
{
    return YES;
}
- (NSUInteger) supportedInterfaceOrientations
{
    return UIInterfaceOrientationMaskPortrait;
}
- (UIInterfaceOrientation) preferredInterfaceOrientationForPresentation
{
    return UIInterfaceOrientationPortrait;
}
- (IBAction) onBackTap:(UIButton *)sender
{
    [self.navigationController popViewControllerAnimated:true];
}
- (IBAction) onOkTap:(id)sender
{
    [self.navigationController popViewControllerAnimated:true];
}
- (void) prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender {
    UIViewController *vc = (UIViewController *)segue.destinationViewController;
    UIImageView *image = (UIImageView *)[vc.view viewWithTag:10];
    if (image)
    {
        [image.layer setBorderColor:[[UIColor grayColor] CGColor]];
        [image.layer setBorderWidth:1.0];
    }
    UIImageView *image2 = (UIImageView *)[vc.view viewWithTag:11];
    if (image2)
    {
        [image2.layer setBorderColor:[[UIColor grayColor] CGColor]];
        [image2.layer setBorderWidth:1.0];
    }
}
- (void) scrollViewDidScroll:(UIScrollView *)sender
{
    // Update the page when more than 50% of the previous/next page is visible
    CGFloat pageWidth = mScrollView.frame.size.width;
    int page = floor((mScrollView.contentOffset.x - pageWidth / 2) / pageWidth) + 1;
    mPageController.currentPage = page;
}
@end
//
// CEyeDetectionController.h
// EyeArtifact
//
//
#ifndef EyeArtifact_CEyeDetectionController
#define EyeArtifact_CEyeDetectionController
#include <iostream>
#include <opencv2/highgui/cap_ios.h>
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/opencv.hpp>
class CEyeDetectionController
{
private:
    cv::CascadeClassifier _cascadeClassGlassesPair;
    cv::CascadeClassifier _cascadeClassEyesPair;
    cv::CascadeClassifier _cascadeClassLeftEye;
    cv::CascadeClassifier _cascadeClassRightEye;
    cv::CascadeClassifier _cascadeClassFace;
public:
    CEyeDetectionController();
    ~CEyeDetectionController();
    void process(cv::Mat &inputImage);
};
#endif /* defined(EyeArtifact_CEyeDetectionController) */
//
// CEyeDetectionController.cpp
// EyeArtifact
//
//
#include "CEyeDetectionController.h"
#include "CUtil.h"
#include "constants.h"
CEyeDetectionController::CEyeDetectionController()
{
    // LOADING HAAR CASCADE CLASSIFIERS //
    // EYE PAIRS
    if (!_cascadeClassGlassesPair.load([CUtil getFilePathOf:@"haarcascade_eye_tree_eyeglasses.xml"].c_str()))
    {
        DLog(@"CEyeDetectionController: Load Cascade haarcascade_eye_tree_eyeglasses.xml has failed.");
    }
    if (!_cascadeClassEyesPair.load([CUtil getFilePathOf:@"haarcascade_eye.xml"].c_str()))
    {
        DLog(@"Load Cascade haarcascade_eye.xml has failed.");
    }
    // LEFT/RIGHT EYE
    if (!_cascadeClassLeftEye.load([CUtil getFilePathOf:@"haarcascade_lefteye_2splits.xml"].c_str()))
    {
        DLog(@"Load Cascade haarcascade_lefteye_2splits.xml has failed.");
    }
    if (!_cascadeClassRightEye.load([CUtil getFilePathOf:@"haarcascade_righteye_2splits.xml"].c_str()))
    {
        DLog(@"Load Cascade haarcascade_righteye_2splits.xml has failed.");
    }
    // FACE
    if (!_cascadeClassFace.load([CUtil getFilePathOf:@"lbpcascade_frontalface.xml"].c_str()))
    {
        DLog(@"Load Cascade lbpcascade_frontalface.xml has failed.");
    }
    /////////////////////////////////////
}
CEyeDetectionController::~CEyeDetectionController()
{
}
void CEyeDetectionController::process(cv::Mat &inputImage)
{
    cv::Mat temp;
    cv::Mat imageCrop = inputImage(cv::Rect(0, 100, 720, 720));
    rectangle(imageCrop, cv::Rect(0, 0, 720, 720), 1234);
    cv::Mat lSmallImage(cvRound(imageCrop.rows / 5.0), cvRound(imageCrop.cols / 5.0), CV_8UC1);
    resize(imageCrop, lSmallImage, lSmallImage.size(), 0, 0, INTER_LINEAR);
    std::vector<cv::Rect> facesDetected;
    _cascadeClassFace.detectMultiScale(lSmallImage, facesDetected, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE | CV_HAAR_FIND_BIGGEST_OBJECT, cv::Size(50, 50));
    // findSkin(debugImage);
    for (int i = 0; i < facesDetected.size(); i++)
    {
        // Detection ran on a 1/5-scale image, so the face rectangle is scaled back up by 5.
        facesDetected[i].height *= 5.0;
        facesDetected[i].width *= 5.0;
        facesDetected[i].x *= 5.0;
        facesDetected[i].y *= 5.0;
        rectangle(imageCrop, facesDetected[i], 1234);
        int eye_region_width = facesDetected[i].width * (kEyePercentWidth / 100.0);
        int eye_region_height = facesDetected[i].width * (kEyePercentHeight / 100.0);
        int eye_region_top = facesDetected[i].height * (kEyePercentTop / 100.0);
        cv::Rect leftEyeRegion(facesDetected[i].width * (kEyePercentSide / 100.0), eye_region_top, eye_region_width, eye_region_height);
        cv::Rect rightEyeRegion(facesDetected[i].width - eye_region_width - facesDetected[i].width * (kEyePercentSide / 100.0), eye_region_top, eye_region_width, eye_region_height);
        cv::Mat leftEyeMat = imageCrop(leftEyeRegion);
        rectangle(leftEyeMat, leftEyeRegion, 1234);
        cv::Mat rightEyeMat = imageCrop(rightEyeRegion);
        rectangle(rightEyeMat, rightEyeRegion, 1234);
    }
}
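For clarity, the eye regions in process() are sized as fixed percentages of the detected face rectangle (kEyePercentTop, kEyePercentSide, kEyePercentHeight and kEyePercentWidth from constants.h). The sketch below is not part of the original listing; it restates that arithmetic in Objective-C for an assumed 500 x 500 pixel face so the resulting rectangles can be read off directly. The face size and the wrapper function are illustrative only.

// Illustrative sketch only: the same percentage arithmetic as process(), in Objective-C.
#import <UIKit/UIKit.h>
#import "constants.h"

static void demoEyeRegionArithmetic(void)
{
    CGFloat faceWidth  = 500.0;   // assumed face rectangle, for illustration
    CGFloat faceHeight = 500.0;
    CGFloat eyeWidth  = faceWidth * (kEyePercentWidth  / 100.0);   // 175 px
    CGFloat eyeHeight = faceWidth * (kEyePercentHeight / 100.0);   // 150 px (derived from face width, as in process)
    CGFloat eyeTop    = faceHeight * (kEyePercentTop   / 100.0);   // 125 px
    CGRect leftEyeRegion  = CGRectMake(faceWidth * (kEyePercentSide / 100.0),
                                       eyeTop, eyeWidth, eyeHeight);   // x = 65
    CGRect rightEyeRegion = CGRectMake(faceWidth - eyeWidth - faceWidth * (kEyePercentSide / 100.0),
                                       eyeTop, eyeWidth, eyeHeight);   // x = 260
    NSLog(@"left %@  right %@", NSStringFromCGRect(leftEyeRegion), NSStringFromCGRect(rightEyeRegion));
}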
//
// CFeedbackTextController.h
// EyeArtifact
//
//
#import "CMailTextController.h"
@interface CFeedbackTextController : CMailTextController
@end
//
// CFeedbackTextController.m
// EyeArtifact
//
//
#import "CFeedbackTextController.h"
#import "CAppMgr.h"
@interface CFeedbackTextController ()
@end
@implementation CFeedbackTextController
- (void) viewDidLoad {
    [super viewDidLoad];
    [[mainTextView layer] setBorderColor:[[UIColor blackColor] CGColor]];
    [[mainTextView layer] setBorderWidth:1.f];
    // Do any additional setup after loading the view.
}
- (void) didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
- (IBAction) onOkTap:(UIButton *)sender{
    NSString *userText = [mainTextView text];
    NSString *mailText = [mainTextField text];
    if ([mailText length] > 0){
        if ([userText length] > 0){
            [self showLoading:[[self navigationController] view]];
            NSDictionary *parameters = @{mailKey : mailText, feedbackKey : userText};
            [[CAppMgr getPtr] sendUserFeedbackText:parameters
                onCompleted:^(id response) {
                    [self hideLoading:[[self navigationController] view]];
                    CMailTextController *mController = (CMailTextController*)[self.storyboard instantiateViewControllerWithIdentifier:@"CMailTextController"];
                    [self.navigationController pushViewController:mController animated:YES];
                    /*
                    [self hideLoading:[[self navigationController] view]];
                    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Ok"
                        message:@"feedback" delegate:self
                        cancelButtonTitle:@"ok" otherButtonTitles:nil];
                    [alert show];
                    [[self navigationController] popToRootViewControllerAnimated:TRUE];
                    */
                } onFailure:^(NSError *error)
                {
                    [self hideLoading:[[self navigationController] view]];
                    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error"
                        message:@"An error has occurred"
                        delegate:self
                        cancelButtonTitle:@"ok"
                        otherButtonTitles:nil];
                    [alert show];
                }];
        } else {
            UIAlertView *alert = [[UIAlertView alloc]
                initWithTitle:@"Error"
                message:@"This field cannot be empty."
                delegate:self
                cancelButtonTitle:@"ok"
                otherButtonTitles:nil];
            [alert show];
        }
    } else {
        UIAlertView *alert = [[UIAlertView alloc]
            initWithTitle:@"Error"
            message:@"Please enter a valid email address"
            delegate:self cancelButtonTitle:@"ok" otherButtonTitles:nil];
        [alert show];
    }
}
@end
//
// CGalleryCell.h
// EyeArtifact
//
//
#import <UIKit/UIKit.h>
@interface CGalleryCell : UITableViewCell
@end
//
// CGalleryCell.m
// EyeArtifact
//
//
#import "CGalleryCell.h"
@implementation CGalleryCell
- (id) initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier
{
    self = [super initWithStyle:style reuseIdentifier:reuseIdentifier];
    if (self) {
        // Initialization code
    }
    return self;
}
- (void) setSelected:(BOOL)selected animated:(BOOL)animated {
    [super setSelected:selected animated:animated];
    UIView *lView = (UIView*)[self viewWithTag:777];
    lView.backgroundColor = [UIColor whiteColor];
    // Configure the view for the selected state
}
@end
//
// CGalleryController.h
// EyeArtifact
//
//
#import <UIKit/UIKit.h>
@interface CGalleryController :
UIViewController<UITableViewDelegate, UITableViewDataSource>
{
    IBOutlet UIButton *mButtonRemove;
    IBOutlet UIButton *mButtonSave;
    IBOutlet UIView *mEditView;
}
@property (strong, nonatomic) IBOutlet UITableView *mTableView;
- (IBAction) onEditClick:(id)sender;
- (IBAction) onDeleteClick:(UIButton *)sender;
- (IBAction) onSave:(id)sender;
- (IBAction) onBack:(id)sender;
@end
//
// CGalleryController.m
// EyeArtifact
//
#import "CGalleryController.h"
#import "CAppMgr.h"
#import "CViewerController.h"
#import "CDataMgr.h"
#import "CRegister.h"
#import "CUtil.h"
#import "constants.h"
@interface CGalleryController ()
{
    NSMutableArray *_tableData;
}
@end
@implementation CGalleryController
- (id) initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Custom initialization
    }
    return self;
}
- (void) viewDidLoad
{
    DLog(@"Gallery Load");
    CDataMgr *gDataMgr = [CDataMgr getPtr];
    _tableData = [[NSMutableArray alloc] initWithArray:[gDataMgr getGallery]];
    DLog(@"Count %i", [_tableData count]);
    [super viewDidLoad];
}
- (void) viewDidAppear:(BOOL)animated
{
}
- (BOOL) shouldAutorotate
{
    return YES;
}
- (NSUInteger) supportedInterfaceOrientations
{
    return UIInterfaceOrientationMaskPortrait;
}
- (UIInterfaceOrientation) preferredInterfaceOrientationForPresentation
{
    return UIInterfaceOrientationPortrait;
}
- (void) didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
- (NSInteger) tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section;
{
    return [_tableData count];
}
- (UITableViewCell*) tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath;
{
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"galleryCell"];
    if (cell == nil)
    {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:@"galleryCell"];
    }
    UIImageView *imageView = (UIImageView*)[cell viewWithTag:100];
    UIView *uiView = (UIImageView*)[cell viewWithTag:666];
    uiView.layer.borderWidth = 1.0;
    uiView.layer.borderColor = [UIColor colorWithRed:0.9 green:0.9 blue:0.9 alpha:1.0].CGColor;
    CRegister *lRegister = (CRegister *)[_tableData objectAtIndex:indexPath.row];
    if (imageView != nil && lRegister != nil)
    {
        UILabel *label = (UILabel*)[cell viewWithTag:200];
        NSDate *date = [NSDate dateWithTimeIntervalSince1970:lRegister.creationTime];
        NSDateFormatter *dateFormat = [[NSDateFormatter alloc] init];
        [dateFormat setDateFormat:@"MM-dd-yyyy HH:mm:ss"];
        label.text = [dateFormat stringFromDate:date];
        // [NSString stringWithFormat:@"Date: %@", dateString];
        NSString *pngPath = [NSHomeDirectory() stringByAppendingPathComponent:lRegister.imagePath];
        imageView.image = [UIImage imageWithContentsOfFile:pngPath];
    }
    return cell;
}
- (void) tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
    if (!tableView.isEditing)
    {
        UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath];
        if (cell)
        {
            UIImageView *imageView = (UIImageView*)[cell viewWithTag:100];
            CRegister *lRegister = (CRegister *)[_tableData objectAtIndex:indexPath.row];
            CViewerController *newViewController = [self.storyboard instantiateViewControllerWithIdentifier:@"CViewer"];
            NSDate *date = [NSDate dateWithTimeIntervalSince1970:lRegister.creationTime];
            NSDateFormatter *dateFormat = [[NSDateFormatter alloc] init];
            [dateFormat setDateFormat:@"MM-dd-yyyy HH:mm:ss"];
            NSString *dateString = [dateFormat stringFromDate:date];
            NSString *nameStr = [NSString stringWithFormat:@"Name: %@", lRegister.patientName];
            NSString *dateStr = [NSString stringWithFormat:@"Taken: %@", dateString];
            [newViewController setDataWithImage:imageView.image andName:nameStr andDate:dateStr];
            [self.navigationController pushViewController:newViewController animated:true];
        }
        cell.selected = NO;
    }
}
- (IBAction) onEditClick:(id)sender
{
    [self.mTableView setEditing:!self.mTableView.isEditing animated:YES];
    if (self.mTableView.isEditing && [_tableData count] > 0)
    {
        mEditView.hidden = NO;
        mEditView.layer.affineTransform = CGAffineTransformMakeTranslation(0, 70);
        [UIView animateWithDuration:0.25 delay:0.0
            options:UIViewAnimationCurveEaseInOut
            animations:^{
                mEditView.layer.affineTransform = CGAffineTransformMakeTranslation(0, 0);
            }
            completion:^(BOOL finished)
            {
            }];
        NSArray *indexes = [self.mTableView indexPathsForVisibleRows];
        for (NSIndexPath *index in indexes)
        {
            if (index.row == 0)
            {
                [self.mTableView selectRowAtIndexPath:index animated:YES scrollPosition:UITableViewScrollPositionBottom];
            }
        }
        mButtonRemove.enabled = YES;
        mButtonSave.enabled = YES;
    } else
    if (!self.mTableView.isEditing)
    {
        [UIView animateWithDuration:0.25 delay:0.0
            options:UIViewAnimationCurveEaseInOut
            animations:^{
                mEditView.layer.affineTransform = CGAffineTransformMakeTranslation(0, 70);
            }
            completion:^(BOOL finished)
            {
                mEditView.hidden = YES;
            }];
        mButtonRemove.enabled = NO;
        mButtonSave.enabled = NO;
    }
}
- (IBAction) onDeleteClick:(UIButton *)sender
{
    NSArray *indexPaths = [self.mTableView indexPathsForSelectedRows];
    if ([indexPaths count] > 0)
    {
        UIAlertView *alert = [[UIAlertView alloc]
            initWithTitle:@"Delete Photos" message:@"The selected photos will be deleted." delegate:self cancelButtonTitle:@"Ok"
            otherButtonTitles:@"Cancel", nil];
        alert.tag = 5000;
        [alert show];
    }
}
- (void) alertView:(UIAlertView*)alertView clickedButtonAtIndex:(NSInteger)buttonIndex
{
    DLog(@"AlertShow");
    NSFileManager *fileManager = [NSFileManager defaultManager];
    if (alertView.tag == 5000)
    {
        if (buttonIndex == 0)
        {
            DLog(@"AlertShow Delete photos");
            NSArray *indexPaths = [self.mTableView indexPathsForSelectedRows];
            NSMutableArray *deleteRegisterArray = [[NSMutableArray alloc] init];
            for (NSIndexPath *indexPath in indexPaths)
            {
                CRegister *currentRegister = [_tableData objectAtIndex:indexPath.row];
                [deleteRegisterArray addObject:currentRegister];
                NSString *pngPath = [NSHomeDirectory() stringByAppendingPathComponent:currentRegister.imagePath];
                [fileManager removeItemAtPath:pngPath error:NULL];
            }
            [_tableData removeObjectsInArray:deleteRegisterArray];
            if ([[CDataMgr getPtr] deleteRegisters:deleteRegisterArray])
            {
                DLog(@"Bd Delete complete");
            } else
            {
                DLog(@"Bd Delete failed");
            }
            [self.mTableView deleteRowsAtIndexPaths:indexPaths withRowAnimation:UITableViewRowAnimationFade];
        } else
        {
            DLog(@"AlertShow DO nothing");
        }
    }
    if (alertView.tag == 10000)
    {
        if (buttonIndex == 0)
        {
            DLog(@"AlertShow Save photos");
            NSArray *indexPaths = [self.mTableView indexPathsForSelectedRows];
            for (NSIndexPath *indexPath in indexPaths)
            {
                UITableViewCell *cell = [self.mTableView cellForRowAtIndexPath:indexPath];
                if (cell)
                {
                    CRegister *currentRegister = [_tableData objectAtIndex:indexPath.row];
                    DLog(@"index path selected");
                    UIImageView *imageView = (UIImageView*)[cell viewWithTag:100];
                    NSDateFormatter *formatter;
                    NSString *dateStr;
                    formatter = [[NSDateFormatter alloc] init];
                    [formatter setDateFormat:@"MM-dd-yyyy HH:mm:ss"];
                    dateStr = [formatter stringFromDate:[NSDate dateWithTimeIntervalSince1970:currentRegister.creationTime]];
                    // NSString *final = [NSString stringWithFormat:@"%@ %@", currentRegister.patientName, dateStr];
                    if ([CAppMgr getPtr].optionExportWithData)
                    {
                        UIImage *outImage = [CUtil drawFrameOnImage:imageView.image withName:currentRegister.patientName andDate:dateStr];
                        UIImageWriteToSavedPhotosAlbum(outImage, nil, nil, nil);
                    } else
                    {
                        UIImageWriteToSavedPhotosAlbum(imageView.image, nil, nil, nil);
                    }
                }
            }
        }
    }
}
- (IBAction) onSave:(id)sender
{
    NSArray *indexPaths = [self.mTableView indexPathsForSelectedRows];
    if ([indexPaths count] > 0)
    {
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Save to Camera Roll" message:@"The selected photos will be exported to the Camera Roll." delegate:self cancelButtonTitle:@"Ok"
            otherButtonTitles:@"Cancel", nil];
        alert.tag = 10000;
        [alert show];
    }
}
- (void) onSaveComplete
{
    //omg
}
- (IBAction) onBack:(id)sender
{
    [self.navigationController popViewControllerAnimated:YES];
}
@end
//
// CMailController.h
// EyeArtifact
//
// Copyright (c) 2014 Movidreams S.A. All rights reserved.
//
#import <UIKit/UIKit.h>
#import "MRProgress.h"
@interface CMailController : UIViewController<UIScrollViewDelegate, UITextFieldDelegate>
{
    IBOutlet UIButton *mBackButton;
    IBOutlet UIScrollView *mScrollView;
    IBOutlet UIPageControl *mPageController;
    IBOutlet UIView *mFooterView;
    IBOutlet UIView *mCentralView;
    __weak IBOutlet UITextField *mainTextField;
}
- (IBAction) onBackTap:(UIButton *)sender;
- (IBAction) onOkTap:(UIButton *)sender;
- (void) showLoading:(UIView*)view;
- (void) hideLoading:(UIView*)view;
@end
//
// CMailController.m
// EyeArtifact
//
// Copyright (c) 2014 Movidreams S.A. All rights reserved.
//
#import "CMailController.h"
#import "CMailTextController.h"
#import "CUtil.h"
#import "CAppMgr.h"
@interface CMailController ()
@end
@implementation CMailController
- (id) initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Custom initialization
    }
    return self;
}
- (void) viewDidLoad
{
    [super viewDidLoad];
    mScrollView.delegate = self;
    mBackButton.hidden = NO;
    mFooterView.hidden = YES;
    mCentralView.frame = CGRectMake(mCentralView.frame.origin.x, mCentralView.frame.origin.y, mCentralView.frame.size.width, mCentralView.frame.size.height + 70);
    [mScrollView setContentSize:CGSizeMake(mScrollView.frame.size.width * 3, mScrollView.frame.size.width)];
    // Do any additional setup after loading the view.
}
- (void) didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
- (BOOL) shouldAutorotate
{
    return YES;
}
- (NSUInteger) supportedInterfaceOrientations
{
    return UIInterfaceOrientationMaskPortrait;
}
- (UIInterfaceOrientation) preferredInterfaceOrientationForPresentation
{
    return UIInterfaceOrientationPortrait;
}
- (IBAction) onBackTap:(UIButton *)sender
{
    [self.navigationController popViewControllerAnimated:true];
}
- (void) showLoading:(UIView*)view{
    [MRProgressOverlayView showOverlayAddedTo:view animated:YES];
}
- (void) hideLoading:(UIView*)view{
    [MRProgressOverlayView dismissOverlayForView:view animated:YES];
}
- (IBAction) onOkTap:(UIButton *)sender
{
    NSString *mailString = [mainTextField text];
    BOOL validMail = [CUtil isValidEmail:mailString];
    if (validMail) {
        [self showLoading:[[self navigationController] view]];
        NSDictionary *parameters = @{ mailKey : mailString };
        [[CAppMgr getPtr] sendUserMailText:parameters
            onCompleted:^(id response) {
                [self hideLoading:[[self navigationController] view]];
                CMailTextController *mController = (CMailTextController*)[self.storyboard instantiateViewControllerWithIdentifier:@"CMailTextController"];
                [mController setMailUser:mailString];
                [self.navigationController pushViewController:mController animated:YES];
                /*
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@""
                    message:@"mail"
                    delegate:self
                    cancelButtonTitle:@"ok"
                    otherButtonTitles:nil];
                [alert show];
                [[self navigationController] popToRootViewControllerAnimated:TRUE];
                */
            } onFailure:^(NSError *error) {
                [self hideLoading:[[self navigationController] view]];
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error"
                    message:NSLocalizedString(@"ERROR_OCURRED", nil)
                    delegate:self
                    cancelButtonTitle:@"ok"
                    otherButtonTitles:nil];
                [alert show];
            }];
    } else {
        UIAlertView *alert = [[UIAlertView alloc]
            initWithTitle:@"Error"
            message:@"Please enter a valid email address"
            delegate:self cancelButtonTitle:@"ok" otherButtonTitles:nil];
        [alert show];
    }
}
/*
- (BOOL) shouldPerformSegueWithIdentifier:(NSString *)identifier sender:(id)sender{
    if ([identifier isEqualToString:@"showMailText"]) {
        NSString *mailString = [mainTextField text];
        BOOL validMail = [CUtil isValidEmail:mailString];
        if (!validMail) {
            UIAlertView *alert = [[UIAlertView alloc]
                initWithTitle:@"Error"
                message:@"Please enter a valid email address"
                delegate:self
                cancelButtonTitle:@"ok"
                otherButtonTitles:nil];
            [alert show];
        }
        return validMail;
    }
    return TRUE;
}
*/
- (void) prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender {
    if ([segue.identifier isEqualToString:@"showMailText"]) {
        NSString *mailString = [mainTextField text];
        CMailTextController *destViewController = segue.destinationViewController;
        destViewController.mailUser = mailString;
    }
}
- (void) scrollViewDidScroll:(UIScrollView *)sender {
    // Update the page when more than 50% of the previous/next page is visible
    CGFloat pageWidth = mScrollView.frame.size.width;
    int page = floor((mScrollView.contentOffset.x - pageWidth / 2) / pageWidth) + 1;
    mPageController.currentPage = page;
}
#pragma mark - UITextFieldDelegate
- (BOOL) textFieldShouldReturn:(UITextField *)textField{
    [textField resignFirstResponder];
    return TRUE;
}
@end
//
// CMailTextController.h
// EyeArtifact
//
// Copyright (c) 2014 Movidreams S.A. All rights reserved.
//
#import "CMailController.h"
@interface CMailTextController : CMailController {
    __weak IBOutlet UITextView *mainTextView;
}
@property (nonatomic, strong) NSString *mailUser;
- (IBAction) onOkTap:(UIButton *)sender;
@end
//
// CMailTextController.m
// EyeArtifact
//
// Copyright (c) 2014 Movidreams S.A. All rights reserved.
//
#import "CMailTextController.h"
#import "CAppMgr.h"
@interface CMailTextController ()
@end
@implementation CMailTextController
- (id) initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Custom initialization
    }
    return self;
}
- (void) viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.
}
- (void) didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
- (IBAction) onOkTap:(UIButton *)sender
{
    [[self navigationController] popToRootViewControllerAnimated:TRUE];
    /*
    NSString *userText = [mainTextView text];
    if ([userText length] > 0){
        [self showLoading:[[self navigationController] view]];
        NSDictionary *parameters = @{ mailKey : [self mailUser] };
        [[CAppMgr getPtr] sendUserMailText:parameters
            onCompleted:^(id response) {
                [self hideLoading:[[self navigationController] view]];
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@""
                    message:@"mail"
                    delegate:self
                    cancelButtonTitle:@"ok"
                    otherButtonTitles:nil];
                [alert show];
                [[self navigationController] popToRootViewControllerAnimated:TRUE];
            } onFailure:^(NSError *error) {
                [self hideLoading:[[self navigationController] view]];
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error"
                    message:@"mail"
                    delegate:self
                    cancelButtonTitle:@"ok"
                    otherButtonTitles:nil];
                [alert show];
            }];
    } else {
        UIAlertView *alert = [[UIAlertView alloc]
            initWithTitle:@"Error"
            message:@"This field cannot be empty."
            delegate:self cancelButtonTitle:@"ok" otherButtonTitles:nil];
        [alert show];
    }
    */
}
@end
//
// CNavController.h
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import <UIKit/UIKit.h>
@interface CNavController : UINavigationController
@end
//
// CNavController.m
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import "CNavController.h"
@interface CNavController ()
@end
@implementation CNavController
- (id) initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Custom initialization
    }
    return self;
}
- (void) viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.
}
- (BOOL) shouldAutorotate
{
    return [[self.viewControllers lastObject] shouldAutorotate];
}
- (NSUInteger) supportedInterfaceOrientations
{
    return [[self.viewControllers lastObject] supportedInterfaceOrientations];
}
- (UIInterfaceOrientation) preferredInterfaceOrientationForPresentation
{
    return [[self.viewControllers lastObject] preferredInterfaceOrientationForPresentation];
}
- (void) didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
@end
#ifndef CONSTANTS_H
#define CONSTANTS_H
//#define LOG true
//#define DEV true
#ifdef LOG
#   define DLog(...) NSLog(__VA_ARGS__)
#else
#   define DLog(...) /* */
#endif
#ifdef DEV
#   define BASE_URL @"http://dev.movidreams.cl/eyecare/ws/"
#else
#   define BASE_URL @"http://app9.movidreams.com/eyecare/ws/"
#endif
// Debugging
const bool kPlotVectorField = false;
// Size constants
const int kEyePercentTop = 25;
const int kEyePercentSide = 13;
const int kEyePercentHeight = 30;
const int kEyePercentWidth = 35;
// Preprocessing
const bool kSmoothFaceImage = false;
const float kSmoothFaceFactor = 0.005;
// Algorithm Parameters
const int kFastEyeWidth = 50;
const int kWeightBlurSize = 5;
const bool kEnableWeight = false;
const float kWeightDivisor = 150.0;
const double kGradientThreshold = 50.0;
// Postprocessing
const bool kEnablePostProcess = true;
const float kPostProcessThreshold = 0;
// Eye Corner
const bool kEnableEyeCorner = false;
#endif
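The two conditional macros above control logging and the web-service endpoint: DLog() compiles away entirely unless LOG is defined, and BASE_URL switches between the development and production hosts depending on DEV. The short sketch below is not part of the original listing; it only illustrates how the rest of the listing consumes these macros, and the wrapper function is hypothetical.

// Illustrative sketch only: consuming the DLog and BASE_URL macros defined above.
#import <Foundation/Foundation.h>
#import "constants.h"

static void demoConstantsUsage(void)
{
    // Expands to NSLog(...) when LOG is defined, and to nothing otherwise.
    DLog(@"Endpoint in use: %@", BASE_URL);
    // BASE_URL is concatenated with a script name, exactly as in CAppMgr.
    NSString *feedbackURL = [NSString stringWithFormat:@"%@%@", BASE_URL, @"feedback.php"];
    (void)feedbackURL;  // silences the unused-variable warning when logging is compiled out
    DLog(@"Feedback endpoint: %@", feedbackURL);
}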
//
// COptionsController.h
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import <UIKit/UIKit.h>
@interface COptionsController : UIViewController
{
    IBOutlet UIView *mContainerView;
}
- (IBAction) onBackButtonTap:(UIButton *)sender;
@end
//
// COptionsController.m
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import "COptionsController.h"
#import "CUtil.h"
@interface COptionsController ()
@end
@implementation COptionsController
- (id) initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        // Custom initialization
    }
    return self;
}
- (void) viewWillAppear:(BOOL)animated
{
}
- (void) viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.
}
- (void) didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
- (IBAction) onBackButtonTap:(UIButton *)sender
{
    [CUtil dismissOverlayModal:self];
}
@end
//
// COptionTableController.h
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import <UIKit/UIKit.h>
#import <MessageUI/MessageUI.h>
@interface COptionTableController :
UITableViewController<MFMailComposeViewControllerDelegate> {
    IBOutlet UISwitch *mCropSwitch;
    IBOutlet UISwitch *mImageDataSwitch;
}
- (IBAction) onAutoCropValueChanged:(UISwitch *)sender;
- (IBAction) onExportImageWithDataChanged:(UISwitch *)sender;
@end
//
// COptionTableController.m
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import "COptionTableController.h"
#import "CAppMgr.h"
#import "CUtil.h"
@interface COptionTableController ()
@end
@implementation COptionTableController
- (id) initWithStyle:(UITableViewStyle)style
{
    self = [super initWithStyle:style];
    if (self) {
        // Custom initialization
    }
    return self;
}
- (void) viewDidLoad
{
    [super viewDidLoad];
    // Uncomment the following line to preserve selection between presentations.
    // self.clearsSelectionOnViewWillAppear = NO;
    // Uncomment the following line to display an Edit button in the navigation bar for this view controller.
    // self.navigationItem.rightBarButtonItem = self.editButtonItem;
}
- (void) viewWillAppear:(BOOL)animated
{
    mCropSwitch.on = [CAppMgr getPtr].optionUseAutoCrop;
    mImageDataSwitch.on = [CAppMgr getPtr].optionExportWithData;
}
- (void) didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
#pragma mark - Table view delegate
- (void) tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
    if (indexPath.row == 3 && indexPath.section == 1)
    {
        // [CUtil openEmailWithEmail:@"support@mdprocare.com" andTitle:@"MD EyeCare Feedback" forController:self];
    }
    // Navigation logic may go here. Create and push another view controller.
    /*
    <#DetailViewController#> *detailViewController = [[<#DetailViewController#> alloc] initWithNibName:@"<#Nib name#>" bundle:nil];
    // ...
    // Pass the selected object to the new view controller.
    [self.navigationController pushViewController:detailViewController animated:YES];
    */
}
- (void) mailComposeController:(MFMailComposeViewController *)controller didFinishWithResult:(MFMailComposeResult)result error:(NSError *)error
{
    [self dismissViewControllerAnimated:YES completion:nil];
}
- (IBAction) onAutoCropValueChanged:(UISwitch *)sender
{
    [CAppMgr getPtr].optionUseAutoCrop = sender.isOn;
}
- (IBAction) onExportImageWithDataChanged:(UISwitch *)sender {
    [CAppMgr getPtr].optionExportWithData = sender.isOn;
}
@end
//
// CRegister.h
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import <Foundation/Foundation . h>
Sinterface CRegister : NSObject @property (nonatomic, strong) NSString * patientName;
@property (nonatomic, strong) NSString * imagePath;
Sproperty (nonatomic) double creationTime ;
Sproperty (nonatomic) int registerld;
@end
//
// CRegister.m
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import "CRegister . h"
@implementation CRegister
@end
//
// CResultController . h
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import <UIKit/UIKit.h>
@interface CResultController : UIViewController<UITextFieldDelegate> {
    IBOutlet UIImageView *mViewer;
    IBOutlet UIImageView *mViewerBack;
    IBOutlet UIScrollView *mScrollView;
    IBOutlet UITextField *mTextField;
}
- (void)setResultImage:(UIImage *)image;
- (void)setResultBackImage:(UIImage *)image;
- (IBAction)savePicture:(id)sender;
- (IBAction)discardPicture:(id)sender;
- (IBAction)editingBegin:(UITextField *)sender;
- (IBAction)editingEnd:(UITextField *)sender;
@end
//
// CResultController.m
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import "CResultController.h"
#import <QuartzCore/QuartzCore.h>
#import "CUtil.h"
#import "CDataMgr.h"
#import "constants.h"
@interface CResultController ()
{
    UIImage *_currentImage;
    UIImage *_currentBackImage;
    UITextField *activeField;
}
@end
@implementation CResultController
- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        _currentImage = nil;
        _currentBackImage = nil;
    }
    return self;
}
- (void)viewDidLoad
{
    [super viewDidLoad];
    //mViewer.layer.borderColor = [UIColor lightGrayColor].CGColor;
    //mViewer.backgroundColor = [UIColor blackColor];
    //mViewer.layer.borderWidth = 5.0;
    mViewer.backgroundColor = [UIColor clearColor];
    [mViewer setClipsToBounds:YES];
    [mViewer setContentMode:UIViewContentModeScaleAspectFit];
    [mViewerBack setContentMode:UIViewContentModeScaleAspectFit];
    //UIImage *pattern2 = [UIImage imageNamed:@"tweed.png"];
    //self.view.backgroundColor = [UIColor colorWithPatternImage:pattern2];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(keyboardWasShown:)
                                                 name:UIKeyboardDidShowNotification object:nil];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(keyboardWillBeHidden:)
                                                 name:UIKeyboardWillHideNotification object:nil];
    mTextField.delegate = self;
}
- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    if (_currentImage != nil)
        [mViewer setImage:_currentImage];
    if (_currentBackImage != nil)
        [mViewerBack setImage:_currentBackImage];
}
- (void)keyboardWasShown:(NSNotification *)aNotification
{
    NSDictionary *info = [aNotification userInfo];
    CGSize kbSize = [[info objectForKey:UIKeyboardFrameBeginUserInfoKey] CGRectValue].size;
    UIEdgeInsets contentInsets = UIEdgeInsetsMake(0.0, 0.0, kbSize.height, 0.0);
    mScrollView.contentInset = contentInsets;
    mScrollView.scrollIndicatorInsets = contentInsets;
    // If active text field is hidden by keyboard, scroll it so it's visible
    // Your application might not need or want this behavior.
    CGRect aRect = self.view.frame;
    aRect.size.height -= kbSize.height;
    CGPoint origin = activeField.frame.origin;
    origin.y -= mScrollView.contentOffset.y;
    if (!CGRectContainsPoint(aRect, origin)) {
        CGPoint scrollPoint = CGPointMake(0.0,
            activeField.frame.origin.y - (aRect.size.height) + activeField.frame.size.height);
        [mScrollView setContentOffset:scrollPoint animated:YES];
    }
}
// Called when the UIKeyboardWillHideNotification is sent
- (void)keyboardWillBeHidden:(NSNotification *)aNotification
{
    UIEdgeInsets contentInsets = UIEdgeInsetsZero;
    mScrollView.contentInset = contentInsets;
    mScrollView.scrollIndicatorInsets = contentInsets;
}
- (IBAction)editingBegin:(UITextField *)sender
{
    activeField = sender;
}
- (IBAction)editingEnd:(UITextField *)sender
{
    activeField = nil;
}
- (BOOL)textFieldShouldReturn:(UITextField *)textField
{
    DLog(@"Should");
    [textField resignFirstResponder];
    return YES;
}
- (void)setResultImage:(UIImage *)image
{
    _currentImage = image;
}
- (void)setResultBackImage:(UIImage *)image
{
    _currentBackImage = image;
}
- (IBAction)savePicture:(id)sender
{
    if (_currentImage != nil)
    {
        CDataMgr *gDataMgr = [CDataMgr getPtr];
        NSData *imgData = UIImagePNGRepresentation(_currentImage); // 1 is compression quality
        // Identify the home directory and file name
        NSDateFormatter *formatter;
        NSString *fileString;
        formatter = [[NSDateFormatter alloc] init];
        [formatter setDateFormat:@"ddMMyyyyHHmmss"];
        fileString = [formatter stringFromDate:[NSDate date]];
        fileString = [NSString stringWithFormat:@"Documents/%@", fileString];
        NSString *pngPath = [NSHomeDirectory() stringByAppendingPathComponent:fileString];
        [imgData writeToFile:pngPath atomically:YES];
        if ([gDataMgr saveDataWithPatientName:mTextField.text withImagePath:fileString andDate:[NSDate date]])
        {
            DLog(@"Save OK");
        } else
        {
            DLog(@"Save FAILED");
        }
        /* NSDateFormatter *formatter;
        NSString *fileString;
        formatter = [[NSDateFormatter alloc] init];
        [formatter setDateFormat:@"dd-MM-yyyy HH:mm:ss"];
        fileString = [formatter stringFromDate:[NSDate date]];
        NSString *final = [NSString stringWithFormat:@"%@ %@", mTextField.text, fileString];
        _currentImage = [CUtil drawText:final inImage:_currentImage atPoint:CGPointMake(0, 0)];
        UIImageWriteToSavedPhotosAlbum(_currentImage, nil, nil, nil); */
    }
    // [self.navigationController popViewControllerAnimated:true];
    // [self dismissViewControllerAnimated:YES completion:nil];
    [CUtil dismissOverlayModal:self];
}
- (IBAction)discardPicture:(id)sender
{
    [CUtil dismissOverlayModal:self];
    // [self.navigationController popViewControllerAnimated:true];
    // [self dismissViewControllerAnimated:YES completion:nil];
}
- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
- (BOOL)shouldAutorotate
{
    return YES;
}
- (NSUInteger)supportedInterfaceOrientations
{
    return UIInterfaceOrientationMaskAll;
}
@end
//
// CTutorialController . h
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved. //
#import <UIKit/UIKit.h>
@interface CTutorialController : UIViewController<UIScrollViewDelegate>
{
    IBOutlet UIButton *mBackButton;
    IBOutlet UIScrollView *mScrollView;
    IBOutlet UIPageControl *mPageController;
    IBOutlet UIView *mFooterView;
    IBOutlet UIView *mCentralView;
    IBOutlet UIButton *nextButton;
    IBOutlet UIButton *playButton;
}
- (IBAction)onBackTap:(UIButton *)sender;
- (IBAction)onOkTap:(UIButton *)sender;
- (void)setFirstTime:(bool)firstTime;
@end
//
// CTutorialController . m
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved. //
#import "CTutorialController . h"
#import "CAppMgr.h"
@interface CTutorialController ()
{
    bool _isFirstTime;
}
@end
@implementation CTutorialController
- (id) initWithNibName : (NSString * ) nibNameOrNil bundle : (NSBundle *) nibBundleOrNil
{
self = [super initWithNibName : nibNameOrNil
bundle : nibBundleOrNil] ;
if (self) {
// Custom initialization
_isFirstTime=false ;
}
return self;
}
- (void)viewDidLoad
{
[super viewDidLoad];
mScrollView . delegate = self;
if (_isFirstTime )
{
mBackButton . hidden = YES;
mFooterView . hidden = NO;
} else
{
mBackButton . hidden = NO;
[nextButton setHidden : TRUE ] ;
[playButton setCenter : CGPointMake ( [mFooterView
frame] . size .width/2 , [playButton center] .y) ] ;
/*
mFooterView . hidden = YES;
mCentralView . frame = CGRectMake ( mCentralView . frame . origin . x, mCentralView . frame . origin . y,
mCentralView . frame .size. width , mCentralView . frame .size. height+70) ;
*/
}
[mScrollView
setContentSize: CGSizeMake (mScrollView. frame .size .width* 3 ,
mScrollView . frame . s i ze . width) ] ;
// Do any additional setup after loading the view.
}
- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}
- (BOOL)shouldAutorotate
{
    return YES;
}
- (NSUInteger)supportedInterfaceOrientations
{
    return UIInterfaceOrientationMaskPortrait;
}
- (UIInterfaceOrientation)preferredInterfaceOrientationForPresentation
{
    return UIInterfaceOrientationPortrait;
}
- (IBAction)onBackTap:(UIButton *)sender
{
    [self.navigationController popViewControllerAnimated:true];
}
- (IBAction)onPlayVideo:(id)sender
{
    [[CAppMgr getPtr] showTutorialVideo:self];
}
- (IBAction)onOkTap:(UIButton *)sender
{
    [self.navigationController popViewControllerAnimated:NO];
}
- (void)setFirstTime:(bool)firstTime
{
    _isFirstTime = firstTime;
}
- (void)scrollViewDidScroll:(UIScrollView *)sender
{
    // Update the page when more than 50% of the previous/next page is visible
    CGFloat pageWidth = mScrollView.frame.size.width;
    int page = floor((mScrollView.contentOffset.x - pageWidth / 2) / pageWidth) + 1;
    mPageController.currentPage = page;
}
@end
//
// CUtil.h
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import <Foundation/Foundation.h>
#import <opencv2/highgui/cap_ios.h>
#import <opencv2/objdetect/objdetect.hpp>
#import <opencv2/imgproc/imgproc.hpp>
#import <opencv2/opencv.hpp>
#include <string>
using namespace cv;
@interface CUtil : NSObject
+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat;
+ (std::string)getFilePathOf:(NSString *)inPath;
+ (cv::Rect)getCorrectedRect:(cv::Rect)rect forMaxRect:(cv::Rect)maxRect;
+ (cv::Rect)getOutsideCropRectOfSize:(cv::Size)Size onRect:(cv::Rect)imageRect forMaxRect:(cv::Rect)maxRect;
+ (void)presentOverlayModal:(UIViewController *)viewController onViewController:(UIViewController *)parent withTag:(NSInteger)tag;
+ (void)dismissOverlayModal:(UIViewController *)viewController;
+ (UIImage *)drawText:(NSString *)text inImage:(UIImage *)image atPoint:(CGPoint)point;
+ (UIImage *)drawFrameOnImage:(UIImage *)image withName:(NSString *)name andDate:(NSString *)date;
+ (NSString *)getModel;
+ (void)openEmailWithEmail:(NSString *)Email andTitle:(NSString *)subject forController:(UIViewController *)controller;
+ (BOOL)isValidEmail:(NSString *)checkString;
@end
//
// CUtil.m
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import "CUtil.h"
#include <sys/types.h>
#include <sys/sysctl.h>
#import <MessageUI/MessageUI.h>
@implementation CUtil
+ (std::string)getFilePathOf:(NSString *)inPath
{
    NSBundle *b = [NSBundle mainBundle];
    NSString *dir = [b resourcePath];
    NSArray *parts = [NSArray arrayWithObjects:dir, inPath, (NSString *)nil];
    NSString *path = [NSString pathWithComponents:parts];
    std::string stdPath = [path fileSystemRepresentation];
    return stdPath;
}
+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                    // width
                                        cvMat.rows,                                    // height
                                        8,                                             // bits per component
                                        8 * cvMat.elemSize(),                          // bits per pixel
                                        cvMat.step[0],                                 // bytes per row
                                        colorSpace,                                    // colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // bitmap info
                                        provider,                                      // CGDataProviderRef
                                        NULL,                                          // decode
                                        false,                                         // should interpolate
                                        kCGRenderingIntentDefault);                    // rendering intent
    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
+ (cv::Rect)getCorrectedRect:(cv::Rect)rect forMaxRect:(cv::Rect)maxRect
{
    cv::Rect outCanvas;
    outCanvas.x = rect.x >= 0 ? rect.x : 0;
    outCanvas.y = rect.y >= 0 ? rect.y : 0;
    outCanvas.width = (rect.x + rect.width) > maxRect.width ? (maxRect.width - rect.x) : (rect.width);
    outCanvas.height = (rect.y + rect.height) > maxRect.height ? (maxRect.height - rect.y) : (rect.height);
    // Correct
    outCanvas.width = outCanvas.width >= 0 ? outCanvas.width : 1;
    outCanvas.height = outCanvas.height >= 0 ? outCanvas.height : 1;
    return outCanvas;
}
+ (cv::Rect)getOutsideCropRectOfSize:(cv::Size)Size onRect:(cv::Rect)imageRect forMaxRect:(cv::Rect)maxRect
{
    cv::Rect outCanvas;
    int lMax = MAX(imageRect.width, imageRect.height);
    int lMin = MIN(imageRect.width, imageRect.height);
    int lDiff = lMax - lMin;
    if (imageRect.width > imageRect.height)
    {
        outCanvas.x = imageRect.x;
        outCanvas.y = imageRect.y - lDiff / 2;
        outCanvas.width = imageRect.width;
        outCanvas.height = imageRect.height + lDiff;
        outCanvas = [CUtil getCorrectedRect:outCanvas forMaxRect:maxRect];
    } else
    {
        // To be implemented
    }
    return outCanvas;
}
+ (void)presentOverlayModal:(UIViewController *)viewController onViewController:(UIViewController *)parent withTag:(NSInteger)tag
{
    viewController.view.tag = tag;
    [parent addChildViewController:viewController];
    viewController.view.frame = parent.view.frame;
    [parent.view addSubview:viewController.view];
    [viewController didMoveToParentViewController:parent];
}
+ (void)dismissOverlayModal:(UIViewController *)viewController
{
    if ([viewController.parentViewController respondsToSelector:@selector(overlayModalClosed:)])
    {
        [viewController.parentViewController performSelector:@selector(overlayModalClosed:)
                                                   withObject:[NSNumber numberWithInteger:viewController.view.tag]];
    }
    [viewController willMoveToParentViewController:nil];
    [viewController.view removeFromSuperview];
    [viewController removeFromParentViewController];
}
+ (UIImage *)drawFrameOnImage:(UIImage *)image withName:(NSString *)name andDate:(NSString *)date
{
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef) + 40;
    NSUInteger height = CGImageGetHeight(imageRef) + 80;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    unsigned char *rawData = (unsigned char *)malloc(height * width * 4);
    memset(rawData, 0xff, height * width * 4);
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(10, 70, width - 20, height - 80), imageRef);
    CGImageRef imgRef = CGBitmapContextCreateImage(context);
    UIImage *img = [UIImage imageWithCGImage:imgRef];
    img = [self drawText:name inImage:img atPoint:CGPointMake(10, height - 60)];
    img = [self drawText:date inImage:img atPoint:CGPointMake(10, height - 35)];
    CGContextRelease(context);
    free(rawData);
    return img;
}
+ (UIImage *)drawText:(NSString *)text inImage:(UIImage *)image atPoint:(CGPoint)point
{
    UIFont *font = [UIFont boldSystemFontOfSize:24];
    UIGraphicsBeginImageContext(image.size);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    CGRect rect = CGRectMake(point.x, point.y, image.size.width, image.size.height);
    [[UIColor blackColor] set];
    [text drawInRect:CGRectIntegral(rect) withFont:font];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
+ (NSString *)getModel
{
    size_t size;
    sysctlbyname("hw.machine", NULL, &size, NULL, 0);
    char *model = new char[size];
    sysctlbyname("hw.machine", model, &size, NULL, 0);
    NSString *sDeviceModel = [NSString stringWithCString:model encoding:NSUTF8StringEncoding];
    delete[] model;
    if ([sDeviceModel isEqual:@"i386"])      return @"Simulator";     // iPhone Simulator
    if ([sDeviceModel isEqual:@"iPhone1,1"]) return @"iPhone1G";      // iPhone 1G
    if ([sDeviceModel isEqual:@"iPhone1,2"]) return @"iPhone3G";      // iPhone 3G
    if ([sDeviceModel isEqual:@"iPhone2,1"]) return @"iPhone3GS";     // iPhone 3GS
    if ([sDeviceModel isEqual:@"iPhone3,1"]) return @"iPhone4 AT&T";  // iPhone 4 - AT&T
    if ([sDeviceModel isEqual:@"iPhone3,2"]) return @"iPhone4 Other"; // iPhone 4 - Other carrier
    if ([sDeviceModel isEqual:@"iPhone3,3"]) return @"iPhone4";       // iPhone 4 - Other carrier
    if ([sDeviceModel isEqual:@"iPhone4,1"]) return @"iPhone4S";      // iPhone 4S
    if ([sDeviceModel isEqual:@"iPhone5,1"]) return @"iPhone5";       // iPhone 5 (GSM)
    if ([sDeviceModel isEqual:@"iPod1,1"])   return @"iPod1stGen";    // iPod Touch 1G
    if ([sDeviceModel isEqual:@"iPod2,1"])   return @"iPod2ndGen";    // iPod Touch 2G
    if ([sDeviceModel isEqual:@"iPod3,1"])   return @"iPod3rdGen";    // iPod Touch 3G
    if ([sDeviceModel isEqual:@"iPod4,1"])   return @"iPod4thGen";    // iPod Touch 4G
    if ([sDeviceModel isEqual:@"iPad1,1"])   return @"iPadWiFi";      // iPad Wifi
    if ([sDeviceModel isEqual:@"iPad1,2"])   return @"iPad3G";        // iPad 3G
    if ([sDeviceModel isEqual:@"iPad2,1"])   return @"iPad2";         // iPad 2 (WiFi)
    if ([sDeviceModel isEqual:@"iPad2,2"])   return @"iPad2";         // iPad 2 (GSM)
    if ([sDeviceModel isEqual:@"iPad2,3"])   return @"iPad2";         // iPad 2 (CDMA)
    NSString *aux = [[sDeviceModel componentsSeparatedByString:@","] objectAtIndex:0];
    // If a newer version exists
    if ([aux rangeOfString:@"iPhone"].location != NSNotFound) {
        int version = [[aux stringByReplacingOccurrencesOfString:@"iPhone" withString:@""] intValue];
        if (version == 3) return @"iPhone4";
        if (version >= 4) return @"iPhone4s";
    }
    if ([aux rangeOfString:@"iPod"].location != NSNotFound) {
        int version = [[aux stringByReplacingOccurrencesOfString:@"iPod" withString:@""] intValue];
        if (version >= 4) return @"iPod4thGen";
    }
    if ([aux rangeOfString:@"iPad"].location != NSNotFound) {
        int version = [[aux stringByReplacingOccurrencesOfString:@"iPad" withString:@""] intValue];
        if (version == 1) return @"iPad3G";
        if (version >= 2) return @"iPad2";
    }
    // If none was found, return the original string
    return sDeviceModel;
}
+ (void)openEmailWithEmail:(NSString *)Email
                  andTitle:(NSString *)subject
             forController:(UIViewController<MFMailComposeViewControllerDelegate> *)controller
{
    MFMailComposeViewController *composer = [[MFMailComposeViewController alloc] init];
    [composer setMailComposeDelegate:controller];
    if ([MFMailComposeViewController canSendMail])
    {
        [composer setToRecipients:[NSArray arrayWithObject:Email]];
        [composer setSubject:subject];
        [controller presentViewController:composer animated:YES completion:nil];
    }
}
+ (BOOL)isValidEmail:(NSString *)checkString
{
    BOOL stricterFilter = NO; // Discussion: http://blog.logichigh.com/2010/09/02/validating-an-e-mail-address/
    NSString *stricterFilterString = @"[A-Z0-9a-z\\._%+-]+@([A-Za-z0-9-]+\\.)+[A-Za-z]{2,4}";
    NSString *laxString = @".+@([A-Za-z0-9-]+\\.)+[A-Za-z]{2}[A-Za-z]*";
    NSString *emailRegex = stricterFilter ? stricterFilterString : laxString;
    NSPredicate *emailTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", emailRegex];
    return [emailTest evaluateWithObject:checkString];
}
@end
//
// CVideoPlayerController . h
// EyeArtifact
//
// Copyright (c) 2014 Movidreams S.A. All rights reserved.
//
#import <UIKit/UIKit . h>
#import <MediaPlayer/MediaPlayer . h>
@interface CVideoPlayerController : MPMoviePlayerViewController
@end
//
// CVideoPlayerController . m
// EyeArtifact
//
// Copyright (c) 2014 Movidreams S.A. All rights reserved.
//
#import "CVideoPlayerController . h"
#define defaultUrlString
@ "http : / /www .ebookfrenzy. com/ios_book/movie/movie .mov"
@interface CVideoPlayerController ()
@end
@implementation CVideoPlayerController
@end
//
// CViewerController . h
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import <UIKit/UIKit . h>
@interface CViewerController : UIViewController
{
    IBOutlet UIImageView *mImageView;
    IBOutlet UILabel *mNameLabel;
    IBOutlet UILabel *mDateLabel;
    IBOutlet UIView *mContainerView;
}
- (IBAction)onBackButtonTap:(UIButton *)sender;
- (void)setDataWithImage:(UIImage *)image andName:(NSString *)name andDate:(NSString *)date;
@end
//
// CViewerController . m
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved. //
#import "CViewerController . h"
#import "CUtil.h"
@interface CViewerController ()
{
    UIImage *_currentImage;
    NSString *_currentName;
    NSString *_currentDate;
}
@end
@implementation CViewerController
- (id) initWithNibName : (NSString * ) nibNameOrNil bundle : (NSBundle *) nibBundleOrNil
{
self = [super initWithNibName : nibNameOrNil
bundle : nibBundleOrNil] ;
if (self) {
// Custom initialization
}
return self;
}
- (void)viewDidLoad
{
    [super viewDidLoad];
    mImageView.image = _currentImage;
    mNameLabel.text = _currentName;
    mDateLabel.text = _currentDate;
    // Do any additional setup after loading the view.
}
- (void) didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
- ( IBAction) onBackButtonTap : (UIButton *) sender
{
[self.navigationController popViewControllerAnimated : true] ;
}
- (void)
didRotateFromlnterfaceOrientation : (UllnterfaceOrientation) fromlnterfa ceOrientation
{ if (fromlnterfaceOrientation==UI InterfaceOrientationLandscapeLeft | | fro mlnterfaceOrientation==UI InterfaceOrientationLandscapeRight)
{
[UlView animateWithDuration : 0.4 delay: 0.0
options : UIViewAnimationCurveEaselnOut
animations :
Λ{
mlmageView. frame = CGRectMake (10 , 10 , 280, 280);
}
completion : ( BOOL finished)
{
}
] ;
} else
{
[UlView animateWithDuration : 0.4 delay: 0.0
options : UIViewAnimationCurveEaselnOut
animations :
Λ{
mlmageView . frame = CGRectMake (10 ,
10,
mContainerView . frame .size. width-20 , mContainerView . frame .size.height- 20) ;
}
completion : (BOOL finished)
{
}
] ;
}
}
(void) willRotateToInterfaceOrientation : (UllnterfaceOrientation)toInte rfaceOrientation duration: (NSTimelnterval) duration
{
if (tolnterfaceOrientation==UI InterfaceOrientationLandscapeLef11 | tolnt erfaceOrientation==UI InterfaceOrientationLandscapeRight)
{
[UlView animateWithDuration : duration delay: 0.0
options : UIViewAnimationCurveEaselnOut
animations :
Λ{ mDateLabel . alpha = 0.0;
mNameLabel . alpha = 0.0;
completion : ( BOOL finished)
}
] ;
} else
{
[UlView animateWithDuration : duration delay: 0.0 options : UIViewAnimationCurveEaselnOut
animations :
Λ{
mDateLabel . alpha = 1.0;
mNameLabel . alpha = 1.0;
}
completion : ( BOOL finished)
{
}
] ;
}
}
- (void)setDataWithImage:(UIImage *)image andName:(NSString *)name andDate:(NSString *)date
{
    _currentImage = image;
    _currentName = name;
    _currentDate = date;
}
@end
//
// CvVideoCameraMod.h
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import <opencv2/highgui/cap_ios.h>
#import <opencv2/objdetect/objdetect.hpp>
#import <opencv2/imgproc/imgproc.hpp>
#import <opencv2/opencv.hpp>
using namespace cv;
@protocol CvVideoCameraDelegateMod <CvVideoCameraDelegate>
@end
@interface CvVideoCameraMod : CvVideoCamera
- (void)updateOrientation;
- (void)layoutPreviewLayer;
- (void)initializeWithDelegate:(id)deleg;
- (CGPoint)getNormalizedPositionFrom:(CGPoint)point;
@property (nonatomic, retain) CALayer *customPreviewLayer;
@property (nonatomic) float outerXOffset;
@end
//
// CvVideoCameraMod.m
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import "CvVideoCameraMod.h"
#define DEGREES_RADIANS(angle) ((angle) / 180.0 * M_PI)
@interface CvVideoCameraMod ()
{
    CGSize _cameraResolution;
    CGSize _viewportSize;
    CGSize _cameraSize;
    CGSize _cameraOffset;
    float _cameraAspectRatio;
    float _viewportAspectRatio;
}
@end
@implementation CvVideoCameraMod
- (void)updateOrientation
{
    self.customPreviewLayer.bounds = CGRectMake(0, 0,
        self.parentView.frame.size.width, self.parentView.frame.size.height);
    [self layoutPreviewLayer];
}
- (void)layoutPreviewLayer
{
    if (self.parentView != nil)
    {
        CALayer *layer = self.customPreviewLayer;
        CGRect bounds = self.customPreviewLayer.bounds;
        int rotation_angle = 0;
        switch (defaultAVCaptureVideoOrientation) {
            case AVCaptureVideoOrientationLandscapeRight:
                rotation_angle = 180;
                break;
            case AVCaptureVideoOrientationPortraitUpsideDown:
                rotation_angle = 270;
                break;
            case AVCaptureVideoOrientationPortrait:
                rotation_angle = 90;
            case AVCaptureVideoOrientationLandscapeLeft:
                break;
            default:
                break;
        }
        rotation_angle = 0; // no rotations
        layer.position = CGPointMake(self.parentView.frame.size.width / 2.,
                                     self.parentView.frame.size.height / 2.);
        layer.affineTransform = CGAffineTransformMakeRotation(DEGREES_RADIANS(rotation_angle));
        _viewportSize = bounds.size;
        _viewportAspectRatio = _viewportSize.height / (float)_viewportSize.width;
        _cameraSize = CGSizeMake(_viewportSize.width, _viewportSize.width * _cameraAspectRatio);
        _cameraOffset = CGSizeMake((_cameraSize.width - _viewportSize.width) / 2.0,
                                   (_cameraSize.height - _viewportSize.height) / 2.0);
        CGRect outRect = CGRectMake(0, 0, _cameraSize.width, _cameraSize.height);
        layer.bounds = outRect;
        self.outerXOffset = _viewportSize.height / _cameraSize.height;
        NSLog(@"self.outerXOffset: %f", self.outerXOffset);
        /* NSLog(@">>Camera Param Begin");
        NSLog(@"Bounds %f,%f,%f,%f", bounds.origin.x, bounds.origin.y, bounds.size.width, bounds.size.height);
        NSLog(@"LayerBounds %f,%f,%f,%f", layer.bounds.origin.x, layer.bounds.origin.y, layer.bounds.size.width, layer.bounds.size.height);
        NSLog(@"_viewportSize %f,%f", _cameraOffset.width, _cameraOffset.height); */
    }
}
// ZOSER FUNC
- (CGPoint)getNormalizedPositionFrom:(CGPoint)point
{
    CGPoint outPoint;
    outPoint.x = ((point.x + _cameraOffset.width) / (float)_viewportSize.width);
    outPoint.y = ((point.y + _cameraOffset.height) / ((float)_viewportSize.width * _cameraAspectRatio));
    return outPoint;
}
- (void)initializeWithDelegate:(id)deleg
{
    self.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack; // AVCaptureDevicePositionFront;
    self.defaultAVCaptureSessionPreset = AVCaptureSessionPreset1280x720; // AVCaptureSessionPreset640x480;
    self.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
    self.defaultFPS = 30;
    self.delegate = deleg;
    self.grayscaleMode = NO;
    _cameraResolution = CGSizeMake(720, 1280);
    _viewportSize = CGSizeMake(320, 548);
    _cameraOffset = CGSizeMake(0, 0);
    _cameraAspectRatio = _cameraResolution.height / (float)_cameraResolution.width;
    _viewportAspectRatio = _viewportSize.height / (float)_viewportSize.width;
}
@end
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>CFBundleDevelopmentRegion</key>
<string>en</string>
<key>CFBundleDisplayName</ key>
<string>$ { PRODUCT_NAME } </string>
<key>CFBundleExecutable</ key>
<string>$ { EXECUTABLE_NAME }</string>
<key>CFBundleIcons</key>
<dict/>
<key>CFBundleIcons~ipad</key>
<dict/>
<key>CFBundleIdentifier</key>
<string>com.movidreams.Eye-Care</string>
<key>CFBundleInfoDictionaryVersion</ key>
<string>6.0</string>
<key>CFBundleName</key>
<string>$ { PRODUCT_NAME } </string>
<key>CFBundlePackageType</key>
<string>APPL</string>
<key>CFBundleShortVersionString</key>
<string>1.1.1</string>
<key>CFBundleSignature</key>
<string>????</string>
<key>CFBundleVersion</key>
<string>1.1.2</string>
<key>LSRequiresIPhoneOS</key>
<true/>
<key>NSAppTransportSecurity</ key>
<dict>
<key>NSAllowsArbitraryLoads</ key>
<false/>
<key>NSExceptionDomains</ key>
<dict>
<key>movidreams . cl</key>
<dict>
<key>NSIncludesSubdomains</key> <true/>
<key>NSTemporaryExceptionAllowsInsecureHTTPLoads</key>
<true/>
</dict>
<key>movidreams . com</key>
<dict>
<key>NSIncludesSubdomains</key> <true/>
<key>NSTemporaryExceptionAllowsInsecureHTTPLoads</key>
<true/>
</dict>
</dict>
</dict>
<key>UIMainStoryboardFile</key>
<string>MainStoryboard_iPhone</string>
<key>UIMainStoryboardFile~ipad</key>
<string>MainStoryboard_iPad</string>
<key>UIPrerenderedIcon</key>
<true/>
<key>UIRequiredDeviceCapabilities</key>
<array>
<string>armv7</string>
<string>camera-flash</string>
</array>
<key>UIStatusBarHidden</key>
<true/>
<key>UISupportedInterfaceOrientations</key>
<array>
<string>UIInterfaceOrientationPortraitUpsideDown</string>
<string>UIInterfaceOrientationLandscapeLeft</string>
<string>UIInterfaceOrientationLandscapeRight</string>
</array>
<key>UISupportedInterfaceOrientations~ipad</key>
<array>
<string>UIInterfaceOrientationPortraitUpsideDown</string>
<string>UIInterfaceOrientationLandscapeLeft</string>
<string>UIInterfaceOrientationLandscapeRight</string>
</array>
<key>UIViewControllerBasedStatusBarAppearance</ key>
<false/>
</dict>
</plist>
//
// Prefix header for all source files of the 'EyeArtifact' target in the 'EyeArtifact' project
//
#import <Availability.h>
#ifndef __IPHONE_5_0
#warning "This project uses features only available in iOS SDK 5.0 and later."
#endif
#ifdef __OBJC__
#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#endif
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
//#include <mgl2/mgl.h>
#include <iostream>
#include <queue>
#include <stdio.h>
#include "constants.h"
#include "helpers.h"
// Pre-declarations
cv::Mat floodKillEdges(cv::Mat &mat);
#pragma mark Visualization
/*
template<typename T> mglData *matToData(const cv::Mat &mat) {
    mglData *data = new mglData(mat.cols, mat.rows);
    for (int y = 0; y < mat.rows; ++y) {
        const T *Mr = mat.ptr<T>(y);
        for (int x = 0; x < mat.cols; ++x) {
            data->Put(((mreal)Mr[x]), x, y);
        }
    }
    return data;
}

void plotVecField(const cv::Mat &gradientX, const cv::Mat &gradientY, const cv::Mat &img) {
    mglData *xData = matToData<double>(gradientX);
    mglData *yData = matToData<double>(gradientY);
    mglData *imgData = matToData<float>(img);
    mglGraph gr(0, gradientX.cols * 20, gradientY.rows * 20);
    gr.Vect(*xData, *yData);
    gr.Mesh(*imgData);
    gr.WriteFrame("vecField.png");
    delete xData;
    delete yData;
    delete imgData;
}*/
#pragma mark Helpers
cv::Point unscalePoint(cv::Point p, cv::Rect origSize) {
    float ratio = (((float)kFastEyeWidth) / origSize.width);
    int x = round(p.x / ratio);
    int y = round(p.y / ratio);
    return cv::Point(x, y);
}

void scaleToFastSize(const cv::Mat &src, cv::Mat &dst) {
    cv::resize(src, dst, cv::Size(kFastEyeWidth, (((float)kFastEyeWidth) / src.cols) * src.rows));
}

cv::Mat computeMatXGradient(const cv::Mat &mat) {
    cv::Mat out(mat.rows, mat.cols, CV_64F);
    for (int y = 0; y < mat.rows; ++y) {
        const uchar *Mr = mat.ptr<uchar>(y);
        double *Or = out.ptr<double>(y);
        Or[0] = Mr[1] - Mr[0];
        for (int x = 1; x < mat.cols - 1; ++x) {
            Or[x] = (Mr[x+1] - Mr[x-1]) / 2.0;
        }
        Or[mat.cols-1] = Mr[mat.cols-1] - Mr[mat.cols-2];
    }
    return out;
}
#pragma mark Main Algorithm
void testPossibleCentersFormula(int x, int y, unsigned char weight, double gx, double gy, cv::Mat &out) {
    // for all possible centers
    for (int cy = 0; cy < out.rows; ++cy) {
        double *Or = out.ptr<double>(cy);
        for (int cx = 0; cx < out.cols; ++cx) {
            if (x == cx && y == cy) {
                continue;
            }
            // create a vector from the possible center to the gradient origin
            double dx = x - cx;
            double dy = y - cy;
            // normalize d
            double magnitude = sqrt((dx * dx) + (dy * dy));
            dx = dx / magnitude;
            dy = dy / magnitude;
            double dotProduct = dx*gx + dy*gy;
            dotProduct = std::max(0.0, dotProduct);
            // square and multiply by the weight
            if (kEnableWeight) {
                Or[cx] += dotProduct * dotProduct * (weight/kWeightDivisor);
            } else {
                Or[cx] += dotProduct * dotProduct;
            }
        }
    }
}

cv::Point findEyeCenter(cv::Mat face, cv::Rect eye, std::string debugWindow) {
    cv::Mat eyeROIUnscaled = face(eye);
    cv::Mat eyeROI;
    scaleToFastSize(eyeROIUnscaled, eyeROI);
    // draw eye region
    rectangle(face, eye, 1234);
    //-- Find the gradient
    cv::Mat gradientX = computeMatXGradient(eyeROI);
    cv::Mat gradientY = computeMatXGradient(eyeROI.t()).t();
    //-- Normalize and threshold the gradient
    // compute all the magnitudes
    cv::Mat mags = matrixMagnitude(gradientX, gradientY);
    // compute the threshold
    double gradientThresh = computeDynamicThreshold(mags, kGradientThreshold);
    //double gradientThresh = kGradientThreshold;
    //double gradientThresh = 0;
    // normalize
    for (int y = 0; y < eyeROI.rows; ++y) {
        double *Xr = gradientX.ptr<double>(y), *Yr = gradientY.ptr<double>(y);
        const double *Mr = mags.ptr<double>(y);
        for (int x = 0; x < eyeROI.cols; ++x) {
            double gX = Xr[x], gY = Yr[x];
            double magnitude = Mr[x];
            if (magnitude > gradientThresh) {
                Xr[x] = gX/magnitude;
                Yr[x] = gY/magnitude;
            } else {
                Xr[x] = 0.0;
                Yr[x] = 0.0;
            }
        }
    }
    imshow(debugWindow, gradientX);
    //-- Create a blurred and inverted image for weighting
    cv::Mat weight;
    GaussianBlur(eyeROI, weight, cv::Size(kWeightBlurSize, kWeightBlurSize), 0, 0);
    for (int y = 0; y < weight.rows; ++y) {
        unsigned char *row = weight.ptr<unsigned char>(y);
        for (int x = 0; x < weight.cols; ++x) {
            row[x] = (255 - row[x]);
        }
    }
    //imshow(debugWindow, weight);
    //-- Run the algorithm!
    cv::Mat outSum = cv::Mat::zeros(eyeROI.rows, eyeROI.cols, CV_64F);
    // for each possible center
    printf("Eye Size: %ix%i\n", outSum.cols, outSum.rows);
    for (int y = 0; y < weight.rows; ++y) {
        const unsigned char *Wr = weight.ptr<unsigned char>(y);
        const double *Xr = gradientX.ptr<double>(y), *Yr = gradientY.ptr<double>(y);
        for (int x = 0; x < weight.cols; ++x) {
            double gX = Xr[x], gY = Yr[x];
            if (gX == 0.0 && gY == 0.0) {
                continue;
            }
            testPossibleCentersFormula(x, y, Wr[x], gX, gY, outSum);
        }
    }
    // scale all the values down, basically averaging them
    double numGradients = (weight.rows*weight.cols);
    cv::Mat out;
    outSum.convertTo(out, CV_32F, 1.0/numGradients);
    //imshow(debugWindow, out);
    //-- Find the maximum point
    cv::Point maxP;
    double maxVal;
    cv::minMaxLoc(out, NULL, &maxVal, NULL, &maxP);
    //-- Flood fill the edges
    if (kEnablePostProcess) {
        cv::Mat floodClone;
        //double floodThresh = computeDynamicThreshold(out, 1.5);
        double floodThresh = maxVal * kPostProcessThreshold;
        cv::threshold(out, floodClone, floodThresh, 0.0f, cv::THRESH_TOZERO);
        if (kPlotVectorField) {
            //plotVecField(gradientX, gradientY, floodClone);
            imwrite("eyeFrame.png", eyeROIUnscaled);
        }
        cv::Mat mask = floodKillEdges(floodClone);
        //imshow(debugWindow + " Mask", mask);
        //imshow(debugWindow, out);
        // redo max
        cv::minMaxLoc(out, NULL, &maxVal, NULL, &maxP, mask);
    }
    return unscalePoint(maxP, eye);
}
#pragma mark Postprocessing
bool floodShouldPushPoint(const cv::Point &np, const cv::Mat &mat) {
    return inMat(np, mat.rows, mat.cols);
}
// returns a mask
cv::Mat floodKillEdges(cv::Mat &mat) {
    rectangle(mat, cv::Rect(0, 0, mat.cols, mat.rows), 255);
    cv::Mat mask(mat.rows, mat.cols, CV_8U, 255);
    std::queue<cv::Point> toDo;
    toDo.push(cv::Point(0, 0));
    while (!toDo.empty()) {
        cv::Point p = toDo.front();
        toDo.pop();
        if (mat.at<float>(p) == 0.0f) {
            continue;
        }
        // add in every direction
        cv::Point np(p.x + 1, p.y); // right
        if (floodShouldPushPoint(np, mat)) toDo.push(np);
        np.x = p.x - 1; np.y = p.y; // left
        if (floodShouldPushPoint(np, mat)) toDo.push(np);
        np.x = p.x; np.y = p.y + 1; // down
        if (floodShouldPushPoint(np, mat)) toDo.push(np);
        np.x = p.x; np.y = p.y - 1; // up
        if (floodShouldPushPoint(np, mat)) toDo.push(np);
        // kill it
        mat.at<float>(p) = 0.0f;
        mask.at<uchar>(p) = 0;
    }
    return mask;
}
#ifndef EYE_CENTER_H
#define EYE_CENTER_H
#include "opencv2/imgproc/imgproc . hpp"
cv::Point findEyeCenter (cv : :Mat face, cv::Rect eye, std::string debugWindow) ;
#endif
#include "opencv2 /obj detect/obj detect . hpp "
#include "opencv2/highgui/highgui .hpp"
#include "opencv2/imgproc/imgproc . hpp"
#include <iostream>
#include <queue>
#include <stdio.h>
#include "constants . h"
bool rectlnlmage (cv : : Rect rect, cv::Mat image) {
return rect.x > 0 && rect.y > 0 && rect . x+rect . width < image. cols
&&
rect . y+rect . height < image. rows;
}
bool inMat ( cv :: Point p,int rows,int cols) { return p.x >= 0 && p.x < cols && p.y >= 0 && p.y < rows;
}
cv::Mat matrixMagnitude (const cv::Mat &matX, const cv::Mat SmatY) { c : :Mat mags (matX . rows ,matX . cols , CV_64F) ;
for (int y = 0; y < matX.rows; ++y) {
const double *Xr = matX . ptr<double> ( y) , *Yr =
matY . ptr<double> (y) ;
double *Mr = mags . ptr<double> ( y) ;
for (int x = 0; x < matX.cols; ++x) {
double gX = Xr [x] , gY = Yr[x];
double magnitude = sqrt ( (gX * gX) + (gY * gY) ) ;
Mr [x] = magnitude;
}
}
return mags; double computeDynamicThreshold (const cv::Mat &mat, double
stdDevFactor ) {
cv: iScalar stdMagnGrad, meanMagnGrad;
c : : meanStdDe (mat , meanMagnGrad, stdMagnGrad);
double stdDev = stdMagnGrad [ 0 ] / sqrt (mat . rows *mat . cols ) ;
return stdDevFactor * stdDev + meanMagnGrad [ 0 ] ;
#ifndef HELPERS_H
#define HELPERS_H
bool rectlnlmage (cv : : Rect rect, cv::Mat image);
bool inMat ( cv :: Point p,int rows, int cols);
cv::Mat matrixMagnitude (const cv::Mat &matX, const cv::Mat SmatY) ; double computeDynamicThreshold (const cv::Mat &mat, double
stdDevFactor) ;
#endif
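The eye-centre routines above can be exercised on their own. The following standalone sketch is illustrative only and is not part of the original listing; it assumes that constants.h supplies kFastEyeWidth, kGradientThreshold, kWeightBlurSize, kEnableWeight, kWeightDivisor, kEnablePostProcess, kPostProcessThreshold and kPlotVectorField as the code expects, and the eye rectangle used here is hypothetical, whereas in the application it comes from the Haar-cascade detection in ViewController.m.

// Illustrative usage sketch (not part of the original appendix).
// Requires a HighGUI-enabled OpenCV build, because findEyeCenter calls imshow
// for debugging.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <string>

// Defined in the eye-centre translation unit above.
cv::Point findEyeCenter(cv::Mat face, cv::Rect eye, std::string debugWindow);

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    // The gradient helpers read pixels through ptr<uchar>, so the input must
    // be a single-channel 8-bit image.
    cv::Mat gray = cv::imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
    if (gray.empty()) return 1;

    // Hypothetical eye rectangle covering the middle of the image.
    cv::Rect eye(gray.cols / 4, gray.rows / 4, gray.cols / 2, gray.rows / 4);

    // The returned point is relative to the top-left corner of 'eye'.
    cv::Point centre = findEyeCenter(gray, eye, "debug");
    std::printf("estimated pupil centre: (%d, %d)\n", centre.x, centre.y);
    return 0;
}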
//
// main.m
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import <UIKit/UIKit . h>
#import "AppDelegate . h"
int main(int argc, char *argv[])
{
    @autoreleasepool {
        return UIApplicationMain(argc, argv, nil, NSStringFromClass([AppDelegate class]));
    }
}
//
// ViewController.h
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import <UIKit/UIKit.h>
#import <opencv2/highgui/cap_ios.h>
#import "CvVideoCameraMod.h"
#import <opencv2/objdetect/objdetect.hpp>
#import <opencv2/imgproc/imgproc.hpp>
#import <opencv2/opencv.hpp>
#import <vector>
using namespace cv;
@interface ViewController : UIViewController<CvVideoCameraDelegateMod, CvPhotoCameraDelegate> {
    IBOutlet UIImageView *Viewport;
    IBOutlet UILabel *mInfoLabel;
    IBOutlet UILabel *mInfoLabel2;
    IBOutlet UIImageView *mFocusImageView;
    IBOutlet UIImageView *mCoverImageView;
    IBOutlet UIImageView *mCoverDownImageView;
    IBOutlet UIImageView *mArrow;
    IBOutlet UIButton *StartButton;
    IBOutlet UIImageView *mDetectArea;
    CvVideoCameraMod *videoCamera;
}
@property (nonatomic, strong) CvVideoCameraMod *videoCamera;
@property (nonatomic, strong) CvPhotoCamera *photoCamera;
- (IBAction)takePhoto:(id)sender;
- (IBAction)onOptionButtonTap:(UIButton *)sender;
- (IBAction)onGalleryButtonTap:(id)sender;
- (void)updateOptions;
- (void)turnTorchOnWithPower:(float)lPower;
- (void)turnTorchOff;
- (bool)detectEyesWithImage:(Mat&)image;
- (void)postProcesses:(Mat&)image;
- (void)postProcessesEx:(Mat&)image;
- (double)getContrastMeasure:(Mat&)image;
- (void)photoCamera:(CvPhotoCamera *)photoCamera capturedImage:(UIImage *)image;
- (void)photoCameraCancel:(CvPhotoCamera *)photoCamera;
- (float)getIlumination:(Mat&)image withSubSample:(int)subSample;
- (void)cropTakenImage:(Mat&)image withRect:(cv::Rect &)rect;
- (void)showInfoMessageWith:(NSString *)text;
- (void)showMessageWithText:(NSString *)text andColor:(int)color;
@end
//
// ViewController.m
// EyeArtifact
//
// Copyright (c) 2013 Movidreams S.A. All rights reserved.
//
#import "ViewController.h"
#import "CResultController.h"
#import "COptionsController.h"
#import "CGalleryController.h"
#import "CDisclaimerController.h"
#import "CTutorialController.h"
#import "CExamplesController.h"
#import "CUtil.h"
#import "CAppMgr.h"
#include "CEyeDetectionController.h"
#include <string>
#import "constants.h"
@interface ViewController ()
{
    BOOL _takePhoto;
    BOOL _canTakePhoto;
    BOOL _canFocus;
    BOOL _useEyeDetect;
    BOOL _videoStarted;
    BOOL _isInView;
    float _currentLuminance;
    float _luminanceAverage;
    int _luminanceSample;
    double _currentPhotoTime;
    double _luminanceTimer;
    int _frameToTake;
    int _frameCounter;
    CGPoint _focusPoint;
    CGPoint _detectAreaOffset;
    CGRect _cropRect;
    CascadeClassifier _eyeCascade;
    CascadeClassifier _eyeCascadeEx;
    cv::Mat _takenImage;
    cv::Mat _canvasImage;
    NSString *_iPhoneModel;
    int _iPhoneModelId;
    //NEW
    CEyeDetectionController *_eyeDetector;
}
@end
@implementation ViewController
- (void)viewDidLoad
{
    [super viewDidLoad];
    [[CAppMgr getPtr] loadData];
    _iPhoneModelId = 450;
    _iPhoneModel = [CUtil getModel];
    if ([_iPhoneModel isEqualToString:@"iPhone5"]) {
        _iPhoneModelId = 500;
    }
    DLog(@"%@", _iPhoneModel);
    self.videoCamera = [[CvVideoCameraMod alloc] initWithParentView:Viewport];
    [self.videoCamera initializeWithDelegate:self];
    /* self.photoCamera = [[CvPhotoCamera alloc] initWithParentView:Viewport];
    self.photoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack; //AVCaptureDevicePositionFront;
    self.photoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset1280x720; //AVCaptureSessionPreset640x480;
    self.photoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
    self.photoCamera.delegate = self; */
    mFocusImageView.hidden = YES;
    _takePhoto = NO;
    _canTakePhoto = NO;
    _canFocus = YES;
    _useEyeDetect = YES;
    _videoStarted = NO;
    _isInView = NO;
    _currentPhotoTime = 0;
    _frameToTake = 2;
    _luminanceSample = 0;
    _luminanceAverage = 0;
    _detectAreaOffset = CGPointMake(0, 0);
    Viewport.userInteractionEnabled = true;
    _focusPoint.x = 0;
    _focusPoint.y = 0;
    std::string path = [CUtil getFilePathOf:@"haarcascade_eye_tree_eyeglasses.xml"];
    if (!_eyeCascade.load(path.c_str()))
    {
        DLog(@"Load Cascade Failed haarcascade_eye_tree_eyeglasses.xml");
    }
    std::string path2 = [CUtil getFilePathOf:@"haarcascade_eye.xml"];
    if (!_eyeCascadeEx.load(path2.c_str()))
    {
        DLog(@"Load Cascade Failed haarcascade_eye.xml");
    }
    //NEW DETECTOR
    _eyeDetector = new CEyeDetectionController();
    [self.navigationController setNavigationBarHidden:true];
}
- (void)dealloc {
    delete _eyeDetector;
}
- (void)viewWillAppear:(BOOL)animated
{
    DLog(@"viewWillAppear");
    [self updateOptions];
}
- (void) viewDidAppear : (BOOL) animated
{
_isInView = true;
if ( ! [CAppMgr getPtr] . globalDisclaimerAgreed)
{
CDisclaimerController *newViewController = [ self . storyboard instantiateViewControllerWithldentifier : @"CDisclaimer"] ;
[newViewController setPopupMode : true ] ;
[ self. navigationControiler
pushViewController : newViewController animated : true ] ;
} else
{
if ( [CAppMgr getPtr] . globalFirstTimeUse)
{
CTutorialController ^newViewController = [self . storyboard instantiateViewControllerWithldentifier : @ "CTutorial " ] ;
[newViewController setFirstTime : true ] ;
[self. navigationControiler
pushViewController : newViewController animated:NO] ;
[CAppMgr getPtr] . globalFirstTimeUse=false ;
[ [CAppMgr getPtr] saveData] ;
} else
{
if ( [CAppMgr getPtr] . globalFirstTimeUseExamples )
{
CExamplesController ^newViewController =
[self. storyboard
instantiateViewControllerWithldentifier : @ "CExample " ] ;
[newViewController setFirstTime : true ] ;
[self. navigationControiler
pushViewController : newViewController animated:NO] ;
[CAppMgr getPtr ]. globalFirstTimeUseExamples = false; [ [CAppMgr getPtr] saveData] ;
}
}
} /* DLog ( @ "mDetectArea
%f,%f,%f,%f",self. iew . frame . origin. x, self. iew . frame . origin. y, self.v iew . frame .size. width , self. view. frame . s ize . height) ;
DLog (@"Viewport
%f,%f,%f,%f", Viewport . frame .origin . x, Viewport . frame .origin . y, Viewport . frame .size. width, Viewport . frame .size. height) ; * /
float h = (1280.0 *mDetectArea . frame . origin. y) /( (float) 568) ;
float h2 = (1280.0* (mDetectArea . frame . size .height) )/( (float) 568 ) ; float diff = (1280- (1280* (self .view. frame . size .height/568.0) ) ) /2.0 ;
_cropRect=CGRectMake (50 , h+diff, 640, h2);
DLog (@"Viewport %f,%f,diff: %f " , h, h2 , diff) ;
[super viewDidAppear : animated] ;
if ( ! _videoStarted)
{
[ self . videoCamera start];
}
[ self . videoCamera updateOrientation] ;
AVCaptureDevice *device = [AVCaptureDevice
defaultDeviceWithMediaType : AVMediaTypeVideo] ;
[device lockForConfiguration : nil] ;
[device setFocusMode : AVCaptureFocusModeContinuousAutoFocus ] ;
[device
setExposureMode : AVCaptureExposureModeContinuousAutoExposure ] ;
[device
setWhiteBalanceMode : AVCaptureWhiteBalanceModeContinuousAutoWhiteBalan ce] ;
if (device . lowLightBoostSupported)
[device unlockForConfiguration] ;
// [self .photoCamera start];
_luminanceTimer = CACurrentMediaTime ( ) ;
[UlView animateWithDuration : 0.5 delay:1.5
options : UIViewAnimationCurveEaselnOut | UIViewAnimationOptionBeginFromC urrentState
animations :
Λ{
mCoverImageView . layer . affineTransform = CGAffineTransformMakeTranslation (-160 , 0) ; mCoverDownImageView . layer . affineTransform CGAffineTransformMakeTranslation (+160 , 0) ;
}
completion : ( BOOL finished)
{
if (finished)
{
mCoverDownImageView . layer . affineTransform
CGAffineTransformldentity;
mCoverDownImageView . hidden=YES ;
mCoverImageView . layer . affineTransform
CGAffineTransformldentity;
mCoverImageView . hidden=YES ;
] ;
}
- (void) updateOptions
{
_useEyeDetect = [CAppMgr getPtr] . optionUseAutoCrop ;
mDetectArea . hidden = [CAppMgr getPtr] .optionUseAutoCrop;
}
- (void) overlayModalClosed : (NSNumber *) tag
{
DLog (@"Calling selector %@",tag);
if ([tag intValue] ==10)
{
DLog ( @"returning from options");
[self updateOptions];
[ [CAppMgr getPtr] saveData] ;
}
}
- (void)viewDidDisappear:(BOOL)animated
{
    _isInView = false;
    [super viewDidDisappear:animated];
    DLog(@"VIEW DISAPEAR");
    // [self.videoCamera stop];
    mCoverImageView.layer.affineTransform = CGAffineTransformIdentity;
    mCoverDownImageView.layer.affineTransform = CGAffineTransformIdentity;
    mCoverImageView.hidden = NO;
    mCoverDownImageView.hidden = NO;
    DLog(@"On delete");
}
- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    DLog(@"memory low");
}
- (BOOL)shouldAutorotate
{
    return YES;
}
- (NSUInteger)supportedInterfaceOrientations
{
    return UIInterfaceOrientationMaskPortrait;
}
- (UIInterfaceOrientation)preferredInterfaceOrientationForPresentation
{
    return UIInterfaceOrientationPortrait;
}
- (void)showInfoMessageWith:(NSString *)text
{
    [mInfoLabel.layer removeAllAnimations];
    mInfoLabel.text = text;
    mInfoLabel.alpha = 1.0;
    [UIView animateWithDuration:2.0 delay:0.0
                        options:UIViewAnimationCurveEaseInOut
                     animations:^{
                         mInfoLabel.alpha = 0.0;
                     }
                     completion:^(BOOL finished)
                     {
                     }];
}
- (void)showMessageWithText:(NSString *)text andColor:(int)color
{
    /* if (color == 1 || color == 2)
    {
        mInfoLabel2.textColor = [UIColor colorWithRed:0.3 green:0.5 blue:1.0 alpha:1.0];
    } else
    {
        mInfoLabel2.textColor = [UIColor redColor];
    } */
    mInfoLabel2.textColor = [UIColor whiteColor];
    mInfoLabel2.text = text;
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [[event allTouches] anyObject];
if ( [touch view] == Viewport)
{
Class captureDeviceClas s =
NSClassFromString ( @ "AVCaptureDevice " ) ;
if ( captureDeviceClas s != nil)
{
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType : AVMediaTypeVideo] ;
[device lockForConfiguration : nil ] ;
[device setFocusMode : AVCaptureFocusModeLocked] ;
[device unlockForConfiguration] ;
}
[self showInfoMessageWith : @"FOCUS LOCKED!"];
}
}
-(void) touchesBegan : (NSSet *) touches withEvent: (UIEvent *) event {
UITouch *touch = [[event allTouches] anyObject];
if ( [touch view] == Viewport)
{
[self showInfoMessageWith: @"FOCUSING"] ;
CGPoint location = [touch locationlnView : Viewport] ; DLog ( @"touches touchesBegan %f , %f" , location . x, location . y) ; CGPoint focusPoint;
_focusPoint= [ self . videoCamera
getNormalizedPositionFrom: location] ;
focusPoint. x = l-_focus Point . x ;
focusPoint. y = _focusPoint . y;
if (_canFocus )
{
mFocus ImageView . hidden = NO;
mFocus ImageView . frame = CGRectMake ( location . x- 64 , location. y-64, 128, 128);
_canFocus=NO ;
[UlView animateWithDuration : 0.25 delay: 0.0 options : UIViewAnimationCurveEaselnOut
animations : {
CGAffineTransform
t=CGAffineTransformMakeScale (0.5, 0.5) ; mFocus ImageView . layer . affineTransform =
CGAffineTrans formTrans late (t, -32, -32) ;
[UlView animateWithDuration : 0.1 delay : 1.5 options: UIViewAnimationCurveEaselnOut
animations : { mFocus ImageView . alpha=0 ;
} completion : Λ (BOOL finished) {
mFocus ImageView . hidden = YES;
mFocus ImageView . layer . affineTransform
CGAffineTransformMakeScale (1.0, 1.0) ;
mFocus ImageView . alpha = 1.0;
_canFocus=YES ;
}] ;
}
completion : ( BOOL finished)
{
}] ;
}
Class captureDeviceClas s =
NSClassFromString ( @ "AVCaptureDevice " ) ;
if ( captureDeviceClas s != nil)
{
AVCaptureDevice *device = [AVCaptureDevice
defaultDeviceWithMediaType : AVMediaTypeVideo] ;
[device lockForConfiguration : nil ] ;
[device setFocusPointOfInterest : focusPoint] ;
[device setFocusMode : AVCaptureFocusModeAutoFocus ] ;
[device unlockForConfiguration] ;
- (bool)detectEyesWithImage:(Mat&)image
{
    DLog(@"Detecting eyes!");
    std::vector<cv::Rect> eyes;
    double scale = 3.0;
    bool lSecondHaarUsed = false;
    cv::Size imageSize(image.cols, image.rows);
    Mat gray, lSmallImage(cvRound(image.rows/scale), cvRound(image.cols/scale), CV_8UC1);
    cvtColor(image, gray, CV_BGR2GRAY);
    resize(gray, lSmallImage, lSmallImage.size(), 0, 0, INTER_LINEAR);
    equalizeHist(lSmallImage, lSmallImage);
    _eyeCascade.detectMultiScale(lSmallImage, eyes, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, cv::Size(4, 4));
    if (eyes.size() <= 1)
    {
        DLog(@"Eyes not detected: Using another haar");
        eyes.clear();
        _eyeCascadeEx.detectMultiScale(lSmallImage, eyes, 1.05, 2, 0 | CV_HAAR_SCALE_IMAGE, cv::Size(4, 4));
        lSecondHaarUsed = true;
    } else
    {
        DLog(@"Eyes detected: Using normal haar %li", eyes.size());
    }
    cv::Point UL(9999999, 9999999);
    cv::Point DR(0, 0);
    for (std::vector<cv::Rect>::const_iterator e = eyes.begin(); e != eyes.end(); ++e)
    {
        int x = (e->x + e->width*0.5)*scale;
        int y = (e->y + e->height*0.5)*scale;
        int r = (e->width + e->height)*0.25*scale;
        cv::Point lUL(max(x-r, 0), min(y-r, image.cols));
        cv::Point lDR(max(x+r, 0), min(y+r, image.rows));
        if (lUL.x < UL.x)
            UL.x = lUL.x;
        if (lUL.y < UL.y)
            UL.y = lUL.y;
        if (lDR.x > DR.x)
            DR.x = lDR.x;
        if (lDR.y > DR.y)
            DR.y = lDR.y;
    }
    if (lSecondHaarUsed)
    {
        int size = DR.x - UL.x;
        DLog(@"Image amp: %i", size);
        UL.x = (UL.x - size/2) >= 0 ? (UL.x - size/2) : 0;
        DR.x = (DR.x + size/2) < imageSize.width ? (DR.x + size/2) : imageSize.width - 1;
    }
    //Fix
    UL.x = (UL.x >= 0) ? UL.x : 0;
    UL.y = (UL.y >= 0) ? UL.y : 0;
    DR.x = (DR.x >= 0) ? DR.x : 0;
    DR.y = (DR.y >= 0) ? DR.y : 0;
    UL.x = (UL.x >= imageSize.width) ? imageSize.width - 1 : UL.x;
    UL.y = (UL.y >= imageSize.height) ? imageSize.height - 1 : UL.y;
    DR.x = (DR.x >= imageSize.width) ? imageSize.width - 1 : DR.x;
    DR.y = (DR.y >= imageSize.height) ? imageSize.height - 1 : DR.y;
    if (eyes.size() >= 1)
    {
        cv::Rect rect(UL, DR);
        [self cropTakenImage:image withRect:rect];
        return true;
    } else
    {
        return false;
    }
}
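// Illustrative sketch only (not part of the original listing): the same
// two-stage strategy as detectEyesWithImage above, expressed as a free C++
// function. It tries the eyeglasses-tolerant cascade first and falls back to
// the plain eye cascade when fewer than two candidate eyes are found. The
// cascade objects are assumed to be loaded as in viewDidLoad.
#include <opencv2/opencv.hpp>
#include <vector>

static std::vector<cv::Rect> detectEyeCandidates(const cv::Mat &bgr,
                                                 cv::CascadeClassifier &primary,
                                                 cv::CascadeClassifier &fallback)
{
    const double scale = 3.0; // same downscale factor as the listing
    cv::Mat gray, smallImg;
    cv::cvtColor(bgr, gray, CV_BGR2GRAY);
    cv::resize(gray, smallImg, cv::Size(), 1.0 / scale, 1.0 / scale, cv::INTER_LINEAR);
    cv::equalizeHist(smallImg, smallImg);

    std::vector<cv::Rect> eyes;
    primary.detectMultiScale(smallImg, eyes, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, cv::Size(4, 4));
    if (eyes.size() <= 1) {
        eyes.clear();
        fallback.detectMultiScale(smallImg, eyes, 1.05, 2, 0 | CV_HAAR_SCALE_IMAGE, cv::Size(4, 4));
    }
    // Map detections back to full-resolution coordinates.
    for (size_t i = 0; i < eyes.size(); ++i) {
        eyes[i].x *= scale;
        eyes[i].y *= scale;
        eyes[i].width *= scale;
        eyes[i].height *= scale;
    }
    return eyes;
}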
- (void)cropTakenImage:(Mat&)image withRect:(cv::Rect &)rect
{
    //Eyes
    cv::Rect lOriginal(0, 0, image.cols, image.rows);
    /* cv::Size size(280, 280);
    cv::Rect rectCanvas = [CUtil getOutsideCropRectOfSize:size onRect:rect forMaxRect:lOriginal];
    cv::Mat preCanvas = image(rectCanvas).clone(); */
    //cv::Rect rectCanvas = [CUtil aspectRatio: forAspect:0.4 forMaxRect:lOriginal];
    _takenImage = image(rect).clone();
    // cvtColor(preCanvas, _canvasImage, CV_RGBA2GRAY);
}
- (void) postProcesses : (Mats) image
{
cv : : Mat temp ;
/* cvtColor (image, temp, CV_BGR2YCrCb) ;
std : : vector<Mat> channels;
cv: : split (temp , channels ) ; cv: :equalizeHist (channels [0] , channels [0] ) ;
cv: imerge (channels , temp) ;
cvtColor (temp, image, CV_YCrCb2BGR) ; */
/ /NoiseReduction
//cv: : fastNlMeansDenoisingColored (image, image , 7 , 7 , 5 , 7 ) ;
/ /Sharpen
cv : : Gaus s ianBlur ( image , temp , cv::Size(0,0), 3);
cv : : addWeighted ( image , 1.75 , temp, -0.75, 0, image);
}
- (void)postProcessesEx: (Mats) image ;
{
}
/ /Actions
- ( IBAction) takePhoto : (id) sender
{
/* DLog ( @ "TakeButton" ) ;
[self turnTorchOnWithPower : 1.0 ] ;
[self .photoCamera takePicture] ;*/
[ [CAppMgr getPtr] playTakePictureAudio : self] ;
_takePhoto = YES;
_currentPhotoTime = 0;
_frameCounter = 0;
}
- (void) turnTorchOff
{
Class captureDeviceClas s = NSClas s FromString ( @ "AVCaptureDevice " ) ; if (captureDeviceClass != nil)
{
AVCaptureDevice *device = [AVCaptureDevice
defaultDeviceWithMediaType : AVMediaTypeVideo] ;
if ( [device hasTorch] && [device hasFlash] )
{
[device lockForConfiguration : nil ] ;
[device setTorchMode : AVCaptureTorchModeOff] ;
[device setFlashMode : AVCaptureFlashModeOff] ;
[device unlockForConfiguration] ;
}
}
}
- (void)turnTorchOnWithPower:(float)lPower
{
    Class captureDeviceClass = NSClassFromString(@"AVCaptureDevice");
    if (captureDeviceClass != nil)
    {
        AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if ([device hasTorch] && [device hasFlash])
        {
            [device lockForConfiguration:nil];
            [device setTorchModeOnWithLevel:lPower error:nil];
            [device setFlashMode:AVCaptureFlashModeOn];
            [device unlockForConfiguration];
        }
    }
}
- (float)getIlumination:(Mat&)image
{
    Mat frame_yuv;
    cvtColor(image, frame_yuv, CV_BGR2YUV);
    uint8_t *pixelPtr = (uint8_t *)frame_yuv.data;
    int cn = frame_yuv.channels();
    int prom = 0;
    for (int i = 0; i < frame_yuv.rows; i += 3)
    {
        for (int j = 0; j < frame_yuv.cols; j += 3)
        {
            Scalar_<uint8_t> yuvPixel;
            yuvPixel.val[0] = pixelPtr[i*frame_yuv.cols*cn + j*cn + 0]; // Y
            yuvPixel.val[1] = pixelPtr[i*frame_yuv.cols*cn + j*cn + 1]; // U
            yuvPixel.val[2] = pixelPtr[i*frame_yuv.cols*cn + j*cn + 2]; // V
            prom += yuvPixel.val[0];
        }
    }
    return prom / (float)((frame_yuv.rows * frame_yuv.cols));
}
- (double)getContrastMeasure:(Mat&)image
{
    return 2;
}
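// For comparison only, an assumption-level sketch that is not part of the
// original listing: getIlumination above converts the frame to YUV and sums
// the Y (luma) values of a 1-in-3 subsample of rows and columns while
// dividing by the full pixel count, so its output is proportional to
// (roughly one ninth of) the true mean luma. The mean luma itself can be
// read directly from the Y plane with OpenCV primitives:
#include <opencv2/opencv.hpp>
#include <vector>

static float averageLuma(const cv::Mat &bgr)
{
    cv::Mat yuv;
    cv::cvtColor(bgr, yuv, CV_BGR2YUV);
    std::vector<cv::Mat> planes;
    cv::split(yuv, planes);               // planes[0] is the Y (luma) channel
    return (float)cv::mean(planes[0])[0]; // mean luma over every pixel
}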
/*- ( IBAction) eyeDetectSwitchChanged : (UlSwitch *) sender
{ _useEyeDetect = sender. isOn;
mDetectArea . hidden=sender . isOn;
if ( ! sender . isOn) //posicionar la vista.
{
// _detectAreaOffset
DLog ( @"Viewport :
f" , Viewport . frame . origin . x , Viewport . frame . origin . y) ;
DLog ( @"mDetectArea :
f" , mDetectArea . frame . origin . x , mDetectArea . frame .origin .
}
- (void) process Image : (Mat&) image;
{
if ( ! _is InView)
{
return;
}
/ *_eyeDetector->proces s (image) ;
return; * /
// Do some OpenCV stuff with the image
cv: :Rect rectangulo(0,0,720,300) ;
cv: :Mat crop = image (rectangulo) ;
/*cv: :Point pi (_cropRect . origin . x , _cropRect . origin . y) ;
cv : : Point
p2 (_cropRect .origin . x+_cropRect .size. width , _cropRect .origin . y+_cropRe ct . s i ze . height) ;
cv :: rectangle ( image , pi, p2, CV_RGB(255, 0, 0));*/ takePhoto)
float imageLuminance [self getl lumination : crop ] ;
if ( ! _canTakePhoto)
{
_canTakePhoto YES ;
_currentPhotoTime CACurrentMediaTime ( ) ;
currentLuminance imageLuminance ;
[self turnTorchOnWithPower : 1.0 ] ;
dispatch_sync (dispatch_get_main_queue ( ) , {
mlnfoLabel . text=@" ";
}) ;
} else
{
if (_frameCounter>0 I | imageLuminance>_currentLuminance+l .5 | |
CACurrentMediaTime ( ) -_currentPhotoTime>0.4)
            {
                _frameCounter++;
                if (_frameCounter > _frameToTake)
                {
                    // Keep this frame as the photograph and switch the torch off again.
                    cvtColor(image, _takenImage, CV_BGRA2RGB);
                    _takePhoto = NO;
                    _canTakePhoto = NO;
                    [self turnTorchOff];
                    if (!_useEyeDetect || [self detectEyesWithImage:_takenImage])
                    {
                        if (!_useEyeDetect)
                        {
                            cv::Rect rect(_cropRect.origin.x, _cropRect.origin.y,
                                          _cropRect.size.width, _cropRect.size.height);
                            Mat temp = image(rect).clone();
                            cvtColor(temp, _takenImage, CV_BGRA2RGB);
                        }
                        [self postProcesses:_takenImage];
                        [self postProcessesEx:_takenImage];
                        UIImage *image_out = [CUtil UIImageFromCVMat:_takenImage];
                        //UIImage *image_back = [CUtil UIImageFromCVMat:_canvasImage];
                        dispatch_sync(dispatch_get_main_queue(), ^{
                            CResultController *newViewController = [self.storyboard
                                instantiateViewControllerWithIdentifier:@"CResult"];
                            [newViewController setResultImage:image_out];
                            [CUtil presentOverlayModal:newViewController
                                  onViewController:self withTag:5];
                        });
                    } else
                    {
                        dispatch_sync(dispatch_get_main_queue(), ^{
                            [self showInfoMessageWith:@"EYES NOT DETECTED!"];
                        });
                    }
                }
            }
        }
    } else
    {
        if (CACurrentMediaTime() - _luminanceTimer > 0.8)
        {
            _luminanceTimer = CACurrentMediaTime();
            _currentLuminance = [self getIllumination:crop];
            _luminanceSample++;
            // Exponentially smoothed luminance used to drive the on-screen indicator.
            float value = _currentLuminance * 0.25 + _luminanceAverage * 0.75;
            _luminanceAverage = _currentLuminance;
            float maxValid = 6.5;
            if (_iPhoneModelId == 500)
            {
                maxValid = 1.0;
            }
            value = (value > 25) ? 25 : value;
            //DLog(@"Luminance: %f", value);
            dispatch_sync(dispatch_get_main_queue(), ^{
                if (value < maxValid / 2.0)
                {
                    [self showMessageWithText:NSLocalizedString(@"LUMINOSITY_PERFECT", nil) andColor:1];
                } else
                {
                    if (value < maxValid * 1.2)
                    {
                        [self showMessageWithText:NSLocalizedString(@"LUMINOSITY_GOOD", nil) andColor:2];
                    } else
                    {
                        [self showMessageWithText:NSLocalizedString(@"LUMINOSITY_BAD", nil) andColor:3];
                    }
                }
            });
            // Map the luminance onto a 0..1 position for the arrow indicator.
            if (value > maxValid)
            {
                value = 0.3 + (0.7 * ((value - maxValid) / (25.0 - maxValid)));
            } else
            {
                value = 0.3 * (value / maxValid);
            }
            dispatch_sync(dispatch_get_main_queue(), ^{
                float alpha = value;
                alpha = alpha < 0.2 ? 0 : alpha;
                alpha = alpha > 0.87 ? 0 : alpha;
                // DLog(@"value %f", value);
                [UIView animateWithDuration:0.78 delay:0.0
                                    options:UIViewAnimationCurveLinear
                                 animations:^{
                    mArrow.layer.affineTransform = CGAffineTransformMakeTranslation(value * 225, 0);
                }
                                 completion:^(BOOL finished)
                {
                }];
            });
        }
    }
}
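In words, processImage drives a two-phase capture: while _takePhoto is set it first arms the capture (torch on, pre-flash luminance recorded), then waits until the measured luminance jumps by more than 1.5 or 0.4 s has elapsed, counts a few further frames, and only then keeps the frame and runs eye detection and post-processing; when _takePhoto is not set it only smooths the luminance and animates the on-screen indicator. The snippet below restates the capture gate as a small standalone helper purely for readability; the names CaptureGate and shouldCapture, and the framesToTake value of 3, are assumptions for illustration and do not appear in the filed code:

struct CaptureGate {
    bool   armed = false;        // torch switched on, waiting for the flash to register
    double armedAt = 0.0;        // timestamp when the torch came on
    float  baseLuminance = 0.0f; // luminance measured just before the torch came on
    int    frameCounter = 0;     // frames counted after the jump/timeout condition fires
    int    framesToTake = 3;     // frames to wait before keeping the image (assumed value)
};

// Returns true when the current preview frame should be kept as the photograph.
static bool shouldCapture(CaptureGate& g, float luminance, double now)
{
    if (!g.armed) {
        g.armed = true;          // the real code also turns the torch on at this point
        g.armedAt = now;
        g.baseLuminance = luminance;
        return false;
    }
    // Proceed once the luminance jumps by more than 1.5, or after a 0.4 s timeout.
    if (g.frameCounter > 0 || luminance > g.baseLuminance + 1.5f || now - g.armedAt > 0.4) {
        g.frameCounter++;
        return g.frameCounter > g.framesToTake;
    }
    return false;
}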
- (IBAction)onOptionButtonTap:(UIButton *)sender
{
    COptionsController *newViewController = [self.storyboard instantiateViewControllerWithIdentifier:@"COptions"];
    [CUtil presentOverlayModal:newViewController onViewController:self withTag:10];
}
- (IBAction)onGalleryButtonTap:(id)sender
{
    // Freeze the cover layers at their current (presentation) state before navigating.
    CALayer *presLayer = mCoverDownImageView.layer.presentationLayer;
    CALayer *presLayer2 = mCoverImageView.layer.presentationLayer;
    mCoverDownImageView.layer.bounds = presLayer.bounds;
    mCoverDownImageView.layer.contentsRect = presLayer.contentsRect;
    mCoverDownImageView.layer.position = presLayer.position;
    mCoverDownImageView.layer.transform = presLayer.transform;
    mCoverImageView.layer.bounds = presLayer2.bounds;
    mCoverImageView.layer.contentsRect = presLayer2.contentsRect;
    mCoverImageView.layer.position = presLayer2.position;
    mCoverImageView.layer.transform = presLayer2.transform;
    [mCoverDownImageView.layer removeAllAnimations];
    [mCoverImageView.layer removeAllAnimations];
    CGalleryController *newViewController = [self.storyboard instantiateViewControllerWithIdentifier:@"CGallery"];
    [self.navigationController pushViewController:newViewController animated:true];
}
@end

Claims

1. A method for preliminary diagnosis of ocular diseases, the method comprising:
A) obtaining, from an application, at least one final corrected image of respective pupils of eyes of an individual, wherein the application is configured to process a plurality of digital images of the eyes of the individual to generate the at least one final corrected image;
B) color processing the at least one final corrected image to transform a color content in the at least one final corrected image from an RGB color space to a luminance-based color space comprising one luma and two chrominance components, and thereby obtain a color-transformed final corrected image;
C) representing the color-transformed final corrected image using an HSV color scale;
D) determining a white color content of reflection from each eye of the individual based on the HSV color scale representing the color-transformed final corrected image; and
E) electronically diagnosing at least one ocular disease of the individual based at least in part on D).
2. The method of claim 1, wherein in B), the luminance-based color space is:
a YUV color space; or
a YCbCr color space.
3. The method of claim 1, wherein D) comprises:
calculating an HSV value for at least one region of the color-transformed final corrected image; and
determining an average Saturation (S) value for the at least one region based on the HSV value.
4. The method of claim 3, further comprising:
upon determining that the white color content of the reflection from each eye includes a hue of red, identifying that the eyes of the individual are normal.
5. The method of claim 3, wherein:
upon determining that the white color content of the reflection from at least one eye of the eyes of the individual includes a tint of yellow, identifying that the at least one eye comprises a deformation.
6. The method of claim 3, wherein:
upon determining that the white color content of the reflection from at least one eye of the eyes of the individual includes a tint of white, identifying that the at least one eye includes a tumor.
7. The method of claim 1, further comprising:
storing a plurality of classified images including at least:
a first image classified as a normal eye;
a second image classified as a deformed eye; and
a third image classified as an eye with tumor; and
generating a machine-learning model based on the plurality of classified images.
8. The method of claim 7, wherein generating the machine-learning model includes implementing at least one machine learning technique.
9. The method of claim 7, wherein E) comprises:
comparing the color-transformed final corrected image to at least one image in the plurality of classified images.
10. A method for preliminary diagnosis of ocular diseases, the method comprising:
providing an audible cue to attract a subject's attention toward a camera;
capturing a sequence of images of eyes of the subject with the camera, the camera including a flash;
processing the sequence of images to localize respective pupils of the eyes of the subject to generate a digitally refocused image;
transmitting, via a network, the digitally refocused image to a processor; and
receiving from the processor, via the network, a preliminary diagnosis of ocular diseases, wherein the preliminary diagnosis is based on a white color content of reflection of each eye of the subject in the digitally refocused image.
11. The method of claim 10, wherein the audible cue includes barking of a dog.
12. The method of claim 10, wherein the camera is included in a smart phone.
13. The method of claim 12, further comprising:
providing an external adapter to adjust a distance between the flash and the camera based on a type of the smart phone.
14. The method of claim 10, wherein the sequence of images includes red-eye effect.
15. The method of claim 10, wherein receiving the preliminary diagnosis includes receiving at least one index value from the processor, the at least one index value indicating the presence and/or absence of ocular disease.
16. A system for preliminary diagnosis of ocular diseases, the system comprising:
a camera;
a flash;
a memory for storing a computational application;
a processor coupled to the camera, the flash, and the memory, wherein upon execution of the computational application by the processor, the processor:
provides an audible cue to attract a subject's attention toward the camera;
processes a plurality of images captured with the camera in order to obtain a final corrected image;
transmits the final corrected image to a central server; and
receives an electronic diagnosis of ocular disease from the central server;
the central server communicably coupled to the processor, the central server configured to:
obtain the final corrected image from the processor;
color process the final corrected image to transform color content in the final corrected image from an RGB color space to a luminance-based color space comprising one luma and two chrominance components, thereby generating a color-transformed final corrected image;
conduct preliminary conclusion of abnormalities in at least one eye of the subject based on the color-transformed final corrected image;
represent the final corrected image using an HSV color scale;
determine white color content of reflection from each eye of the subject based on the HSV color scale;
electronically diagnose at least one ocular disease of the subject based on the white color content; and
transmit at least one index to the processor, the at least one index being based on the electronic diagnosis of the at least one ocular disease.
17. The system of claim 16, wherein the central server is further configured to:
generate a machine-learning model to classify at least one of:
the color-transformed final corrected image, or
the final corrected image
as at least one of a normal eye, a deformed eye, or an eye with tumor.
18. The system of claim 17, wherein the central server is configured to generate the machine-learning model based on a database of classified images.
19. The system of claim 18, wherein the database of classified images includes a corresponding classification for each of a plurality of sample color-transformed final corrected images and each of a plurality of sample final corrected images, wherein the corresponding classification is provided by an expert.
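The color-processing chain recited in claims 1-6 and 16 above can be summarised in a few OpenCV calls: the corrected pupil image is taken out of the RGB space into a luma-plus-chrominance space (YUV or YCbCr), an HSV representation is computed, and the mean Saturation over the pupil region indicates how white the flash reflection is, with a reddish reflection read as normal, a yellowish tint as a possible deformation, and a near-white reflection as a possible tumor. The sketch below is a minimal illustration of those steps only; the function names, the ROI handling and every numeric threshold are placeholders chosen for the example and are not values disclosed in the specification:

#include <opencv2/opencv.hpp>

// Mean H, S, V over the pupil region of a final corrected image (BGR input assumed).
static cv::Scalar analyzePupilRegion(const cv::Mat& correctedBGR, const cv::Rect& pupilROI)
{
    cv::Mat roi = correctedBGR(pupilROI);

    cv::Mat yuv;                          // claim 1 B): luma + two chrominance components
    cv::cvtColor(roi, yuv, CV_BGR2YUV);   // computed to mirror step B); the decision below uses HSV

    cv::Mat hsv;                          // claim 1 C): HSV representation
    cv::cvtColor(roi, hsv, CV_BGR2HSV);

    return cv::mean(hsv);                 // claim 3: the average Saturation is element [1]
}

// Placeholder decision logic in the spirit of claims 4-6 (thresholds are illustrative only).
static const char* classifyReflection(const cv::Scalar& hsvMean)
{
    double hue = hsvMean[0], sat = hsvMean[1];
    if (sat > 80 && hue < 15)                  return "red reflection - normal (claim 4)";
    if (sat > 40 && hue > 20 && hue < 40)      return "yellow tint - possible deformation (claim 5)";
    if (sat < 40)                              return "white reflection - possible tumor (claim 6)";
    return "indeterminate";
}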
PCT/IB2018/000806 2017-10-11 2018-04-26 System and device for preliminary diagnosis of ocular disease WO2019073291A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762570979P 2017-10-11 2017-10-11
US62/570,979 2017-10-11

Publications (1)

Publication Number Publication Date
WO2019073291A1 true WO2019073291A1 (en) 2019-04-18

Family

ID=66100496

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2018/000806 WO2019073291A1 (en) 2017-10-11 2018-04-26 System and device for preliminary diagnosis of ocular disease

Country Status (1)

Country Link
WO (1) WO2019073291A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024075064A1 (en) * 2022-10-05 2024-04-11 Eyecare Spa Methods and apparatus for detection of optical diseases

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090153799A1 (en) * 2007-12-13 2009-06-18 David Johns Vision Screener
US20120236425A1 (en) * 2011-03-18 2012-09-20 Premier Systems Usa, Inc. Selectively attachable and removable lenses for communication devices
WO2012162060A2 (en) * 2011-05-25 2012-11-29 Sony Computer Entertainment Inc. Eye gaze to alter device behavior
US20130235346A1 (en) * 2011-09-08 2013-09-12 Icheck Health Connection, Inc. System and methods for documenting and recording of the pupillary red reflex test and corneal light reflex screening of the eye in infants and young children
US20150220144A1 (en) * 2012-05-17 2015-08-06 Nokia Technologies Oy Method and apparatus for attracting a user's gaze to information in a non-intrusive manner
US20150257639A1 (en) * 2014-03-12 2015-09-17 Eyecare S.A. System and device for preliminary diagnosis of ocular diseases
US20170026568A1 (en) * 2015-07-21 2017-01-26 Qualcomm Incorporated Camera orientation notification system
US20170055822A1 (en) * 2014-05-02 2017-03-02 Massachusetts Eye & Ear Infirmary Grading Corneal Fluorescein Staining


Similar Documents

Publication Publication Date Title
US11030481B2 (en) Method and apparatus for occlusion detection on target object, electronic device, and storage medium
US20210209762A1 (en) Processing fundus images using machine learning models
Bajwa et al. G1020: A benchmark retinal fundus image dataset for computer-aided glaucoma detection
US20220175325A1 (en) Information processing apparatus, information processing method, information processing system, and program
US10426332B2 (en) System and device for preliminary diagnosis of ocular diseases
JP2021532881A (en) Methods and systems for extended imaging with multispectral information
Teikari et al. Embedded deep learning in ophthalmology: making ophthalmic imaging smarter
JP7297628B2 (en) MEDICAL IMAGE PROCESSING APPARATUS, MEDICAL IMAGE PROCESSING METHOD AND PROGRAM
US11887299B2 (en) Image processing system and image processing method
Zhen et al. Assessment of central serous chorioretinopathy depicted on color fundus photographs using deep learning
CN111553436A (en) Training data generation method, model training method and device
Huang et al. A depth-first search algorithm based otoscope application for real-time otitis media image interpretation
Dias et al. Evaluation of retinal image gradability by image features classification
Hwang et al. Smartphone-based diabetic macula edema screening with an offline artificial intelligence
JP2021101965A (en) Control device, optical interference tomography apparatus, control method of optical interference tomography apparatus, and program
JP2019208851A (en) Fundus image processing device and fundus image processing program
WO2019073291A1 (en) System and device for preliminary diagnosis of ocular disease
CN112966620A (en) Fundus image processing method, model training method and equipment
Höffner Gaze tracking using common webcams
JP2021058563A (en) Information processing device and information processing method
US11880976B2 (en) Image retention and stitching for minimal-flash eye disease diagnosis
Zhang et al. Luminosity rectified blind Richardson-Lucy deconvolution for single retinal image restoration
US20230284902A1 (en) Information processing device, eyesight test system, information processing method
WO2024075064A1 (en) Methods and apparatus for detection of optical diseases
Resendiz Automated Method for Localization and Segmentation of the Optic Disc for Glaucoma Evaluation.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18866407

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18866407

Country of ref document: EP

Kind code of ref document: A1