WO2021001766A1 - An automated guiding system and method thereof - Google Patents

An automated guiding system and method thereof

Info

Publication number
WO2021001766A1
WO2021001766A1 (PCT/IB2020/056222)
Authority
WO
WIPO (PCT)
Prior art keywords
target object
interest
region
location information
guiding system
Prior art date
Application number
PCT/IB2020/056222
Other languages
French (fr)
Inventor
Shyam Vasudeva RAO
Bharathkumar HEGDE
Vijayashri B NAGAVI
Original Assignee
Autoyos Private Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Autoyos Private Limited
Publication of WO2021001766A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14 Arrangements specially adapted for eye photography
    • A61B3/15 Arrangements specially adapted for eye photography with means for aligning, spacing or blocking spurious reflection; with means for relaxing
    • A61B3/152 Arrangements specially adapted for eye photography with means for aligning, spacing or blocking spurious reflection; with means for relaxing for aligning
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/02 Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/113 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14 Arrangements specially adapted for eye photography

Definitions

  • the present disclosure generally relates to automation systems. Particularly, but not exclusively, the present disclosure relates to an automated guiding system and method for imaging a target object.
  • Imaging the eye using various devices and different methods, for the diagnosis and treatment of the eyes, according to the state of the art, is well known.
  • imaging the posterior part of the eye is done manually by the ophthalmologist or a skilled lab technician.
  • the ophthalmologist or a skilled lab technician uses various imaging devices for capturing the image of the retina of the patient eye.
  • the face of a patient is positioned on the device with the chin resting at a chin rest and the patient is asked to keep looking at a distinct object/light source, which acts as an eye fixation.
  • the ophthalmologist or the lab technician aims to get the retina of the eye in the imaging frame.
  • the ophthalmologist or the lab technician manually adjusts the device and captures the image of retina of the eye upon alignment with the imaging frame.
  • Such approaches are implemented for imaging many other parts of the body as well.
  • the eye fixation and the definition of the region of interest are highly subjective in the existing method, which causes unintended retinal images and loss of important information.
  • the image capturing body (the operator), the eye fixation source and the object (the retina) act without any direct link which might result in erroneous output.
  • capturing the image of the retina with different poses is not possible because the patient eye will be instructed to focus on the fixation light or object generated in the imaging device.
  • the present disclosure relates to a method for automated guiding of a target object.
  • a target object to be aligned with a region of interest is tracked and a current location information associated with the target object is estimated. Further, the current location information associated with the target object is compared with predefined location information of the region of interest. Based on the comparison, one or more coordinated stimulus are generated for guiding the target object, to aid in alignment of the target object within the region of interest.
  • the automated guiding of the target object is performed using generated one or more coordinated stimulus until the target object is aligned with the region of interest.
  • the present disclosure relates to an automated guiding system for automated guiding of a target object.
  • the automated guiding system comprises a processor and a memory communicatively coupled to the processor.
  • the memory stores processor-executable instructions, which, on execution, cause the processor to perform the automated guiding.
  • a target object to be aligned with a region of interest is tracked and a current location information associated with the target object is estimated. Further, the current location information associated with the target object is compared with predefined location information of the region of interest. Based on the comparison, one or more coordinated stimulus are generated for guiding the target object, to aid in alignment of the target object within the region of interest.
  • the automated guiding of the target object is performed using generated one or more coordinated stimulus until the target object is aligned with the region of interest.
  • Figure 1 shows an exemplary environment of an automated guiding system for automated guiding of target object, in accordance with some embodiments of the present disclosure
  • Figure 2 shows a detailed block diagram of an automated guiding system for automated guiding of target object, in accordance with some embodiments of the present disclosure
  • Figures 3a-3e show exemplary embodiments for automated guiding of target object, in accordance with some embodiments of the present disclosure
  • Figure 4 shows a flow diagram illustrating method for automated guiding of target object, in accordance with some embodiments of present disclosure.
  • Figure 5 illustrates a block diagram of an exemplary computer system for implementing embodiments consistent with the present disclosure.
  • Present disclosure provides a system and a method for automatic guiding of a target object which may be imaged using an imaging system.
  • the automation in guiding may be achieved by fixing a region of interest i.e., an imaging frame, and by automatically guiding the target object to be within the region of interest.
  • Coordinates of the target object and the region of interest are determined to generate coordinated stimulus which may be audio stimuli, video stimuli or mechanical stimuli.
  • Figure 1 shows an exemplary environment 100 associated with an automated guiding system 101 for automated guiding of a target object 102.
  • the automated guiding system 101 may be in communication with an imaging system 103 which is configured to image the target object 102.
  • the automated guiding system 101 may be implemented in a variety of computing systems, such as a system on module, single board computer, microcontroller based system, laptop computer, a desktop computer, a Personal Computer (PC), a notebook, a smartphone, a tablet, e-book readers, a server, a network server, and the like.
  • the automated guiding system 101 may communicate with the imaging system 103 via a communication network (not shown in the figure).
  • the communication network may include, but is not limited to, at least one of a direct interconnection, a Peer to Peer (P2P) network, Local Area Network (LAN), Wide Area Network (WAN), wireless network (e.g., using Wireless Application Protocol), Controller Area Network (CAN), the Internet, Wi-Fi, Bluetooth, Near-Field Communication (NFC) and such.
  • the target object 102 may be an object which needs to be imaged.
  • the target object 102 needs to be aligned with the region of interest.
  • the region of interest may be within the imaging frame of the imaging system 103.
  • the region of interest may be part of the field of view of the automated guiding system 101.
  • the region of interest may be a central portion of the imaging frame of the imaging system 103 or a central portion of the field of view of the automated guiding system 101.
  • the automated guiding system 101 may be configured to guide the target object 102 within the region of interest.
  • the automated guiding system 101 may include one or more processors 104, Input/Output (I/O) interface 105 and a memory 106.
  • the memory 106 may be communicatively coupled to the one or more processors 104.
  • the memory 106 stores instructions, executable by the one or more processors 104, which on execution, may cause the automated guiding system 101 to guide the target object 102 as proposed in the present disclosure.
  • the memory 106 may include one or more modules 107 and data 108.
  • the one or more modules 107 may be configured to perform the steps of the present disclosure using the data 108, to guide the target object 102.
  • each of the one or more modules 107 may be a hardware unit which may be outside the memory 106 and coupled with the automated guiding system 101.
  • the automated guiding system 101 may be configured to track the target object 102 in real-time.
  • the tracking of the target object may be performed using the imaging system 103 associated with the automated guiding system 101.
  • the tracking may be performed by capturing images or video of the target object 102.
  • the automated guiding system 101 may include one or more techniques, known to a person skilled in the art, to track the target object 102, using the captured images or video.
  • One or more other techniques, known to a person skilled in the art, may be implemented in the automated guiding system 101, to track the target object 102.
  • the automated guiding system 101 may be configured to estimate a current location information associated with the target object 102 based on the tracking.
  • an image of the field of view of the automated guiding system 101 may be captured for determining the current location information associated with the target object 102.
  • the captured image includes the target object 102 and the region of interest.
  • the current location information comprising coordinates of the target object 102 in the field of view of the captured image is calculated.
  • the image for receiving the current location information associated with the target object 102 may be captured using the imaging system 103 associated with the automated guiding system 101. Further, upon capturing the image, the automated guiding system 101 may be configured to calculate the coordinates of the target object 102.
  • a dedicated processing unit or server may be associated with the automated guiding system 101 and the imaging system 103, for calculating the coordinates of the target object 102.
  • the dedicated processing unit or server may communicate the coordinates with the automated guiding system 101, for automatic guiding of the target object 102.
  • One or more techniques known to a person skilled in the art, may be implemented to calculate the coordinates using the image of the field of view of the automated guiding system 101.
  • the automated guiding system 101 may be configured to compare the current location information associated with the target object 102 with a predefined location information of the region of interest.
  • the predefined location information of the region of interest may indicate the coordinates of the region of interest, within the field of view of the automated guiding system 101.
  • the predefined location information of the region of interest may be a pre-stored data which is retrieved during comparison.
  • the predefined location information may be stored in the memory 106 and retrieved when performing the comparison.
  • the automated guiding system 101 may be associated with a repository which stores the predefined location information. The predefined location information may be retrieved from such repository, when performing the comparison.
  • the automated guiding system 101 may be configured to calculate a real-time relative difference between the current location information of the target object 102 and the predefined location information of the region of interest.
  • the real-time relative difference may indicate distance between the target object 102 and the region of interest.
  • the real-time relative difference is compared with a predefined threshold value to identify value of the real-time relative difference to be greater than the predefined threshold value.
  • the value of the predefined threshold value may be equal to the value "0".
  • one or more coordinated stimulus are generated for guiding the target object 102 with respect to current location of the target object.
  • the target object 102 may be displaced from the current location, towards the region of interest, to aid in alignment of the target object 102 within the region of interest.
  • the one or more coordinated stimulus are generated based on the real-time relative difference, for reducing the value of the real-time relative difference.
  • the one or more coordinated stimulus may be selected to be at least one of audio stimuli, visual stimuli, and mechanical stimuli.
  • the audio stimuli may include audio signals which are provided via a speaker in the environment of the target object 102.
  • the audio signals may be in the form of instructions.
  • the visual stimuli may include images or videos which are provided via a display interface to the target object 102.
  • the visual stimuli may be provided to be viewed by the target object 102.
  • the mechanical stimuli may be provided with physical contact with the target object 102.
  • the mechanical stimuli may be provided via an actuator associated with the target object 102.
  • at least one of said audio stimuli, visual stimuli and mechanical stimuli may be coordinated with each other for aiding the alignment between the target object 102 and the region of interest.
  • the audio stimuli may include instructions relating to viewing of the visual stimuli.
  • the automated guiding system 101 may perform the automatic guiding using the generated one or more coordinated stimulus, until the target object 102 is aligned with the region of interest.
  • Consider that the steps of automated guiding are performed for a target object 102 to aid in alignment with a region of interest.
  • the steps of estimating the current location information and comparing it with the predefined location information are then performed. If the alignment has not yet occurred, the one or more coordinated stimulus may be generated again.
  • the above indicated steps may be repeated until the target object 102 is aligned with the region of interest.
  • the automated guiding may be performed iteratively until the target object 102 is aligned with the region of interest.
  • imaging of the target object 102 may be performed.
  • the imaging system 103 may be a camera coupled with the automated guiding system 101.
  • output of the imaging system 103 of the target object 102 may be used for examining the target object 102.
  • the imaging of the target object 102 may be performed for at least one of a plurality of wavelengths of light. In an embodiment, the wavelengths may range between 300 nm and 900 nm.
  • the automated guiding system 101 may be associated with a plurality of target objects. In such case, the automated guiding system 101 may be configured to automatically guide each of the plurality of target objects, sequentially or simultaneously.
  • the I/O interface 105 of the automated guiding system 101 may be configured to provision transmission and reception of data.
  • Received data may include, but is not limited to, image of field of view of the automated guiding system 101, current location information, predefined location information and so on.
  • Transmitted data may include, but is not limited to, generated one or more coordinated stimulus, and so on.
  • One or more other data, which is related to guiding the target object 102 to align with the region of interest may be received and transmitted via the I/O interface 105.
  • Figure 2 shows a detailed block diagram of the automated guiding system 101 for automated guiding of the target object 102, in accordance with some embodiments of the present disclosure.
  • the data 108 and the one or more modules 107 in the memory 106 of the automated guiding system 101 may be described herein in detail.
  • the one or more modules 107 may include, but are not limited to, a tracking module 201 , a location information estimation module 202, a comparison module 203, a stimulus generation module 204, and one or more other modules 205, associated with the automated guiding system 101.
  • the data 108 in the memory 106 may include location information data 206, region of interest data 207, comparison data 208, stimulus data 209 and other data 210 associated with the automated guiding system 101.
  • the data 108 in the memory 106 may be processed by the one or more modules 107 of the automated guiding system 101.
  • the one or more modules 107 may be implemented as dedicated units and when implemented in such a manner, said modules may be configured with the functionality defined in the present disclosure to result in a novel hardware.
  • the term module may refer to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a Field-Programmable Gate Array (FPGA), a Programmable System-on-Chip (PSoC), a combinational logic circuit, and/or other suitable components that provide the described functionality.
  • Imaging of an object may require the object to be placed within the focus of the imaging system 103. It may be required to guide the object toward the focus of the imaging system 103.
  • Present disclosure discloses to automatically guide the target object 102 to region of interest.
  • the target object 102 and the region of interest may vary based on the application of the automated guiding system 101.
  • the target object 102 may be an eye of a patient and the region of interest may be the imaging frame of the imaging system 103.
  • it may be required to align the Object of Interest (OOI) i.e., the optic disc of the eye in first frame 301.1 within the area of the Region of Interest (ROI).
  • the OOI may be aligned with the ROI as shown in second frame 301.2.
  • the images of optic disc may be captured, which may be further used for examination of the eye.
  • other applications of the automated guiding system 101 may include controlling the target object 102, displacing the target object 102 and so on.
  • the tracking module 201 may be configured to track the target object 102.
  • the tracking module 201 may be an integral part of the automated guiding system 101 or may be externally connected with the automated guiding system 101.
  • the tracking module 201 may aid in detecting the target object 102 in field of view of the automated guiding system 101.
  • the location information estimation module 202 of the automated guiding system 101 may be configured to receive the current location information associated with the target object 102.
  • the current location information may be estimated based on the tracking of the target object 102.
  • the current location information may indicate location of the target object 102 within the field of view of the automated guiding system 101.
  • the current location information associated with the target object 102 may be received by capturing an image of the field of view of the automated guiding system 101 and calculating the current location information which comprises coordinates of the target object 102 in the field of view of the captured image.
  • the received current location information i.e., the coordinates of the target object 102 may be stored in the memory 106 as the location information data 206.
  • the comparison module 203 of the automated guiding system 101 may be configured to compare the current location information associated with the target object 102 with the predefined location information of the region of interest.
  • the predefined location information may include coordinates of the region of interest in the field of view of the automated guiding system 101. Such predefined location information may be prestored to be as the region of interest data 207 in the memory 106 of the automated guiding system 101.
  • the comparison of the current location information with the predefined location information may be performed by calculating a real-time relative difference between the current location information of the target object 102 and the predefined location information of the region of interest. Results of the comparison performed by the comparison module may be stored as the comparison data 208 and used for generating the one or more coordinated stimulus.
  • Consider the first frame 301.1 where the real-time relative difference between the optic disc and the region of interest is (Δx, Δy). Further, the value of the real-time relative difference is identified to be greater than the predefined threshold value. It may be desirable that the value of the real-time relative difference is zero for the alignment. In case the value of the real-time relative difference is identified to be lesser than the predefined threshold value or zero, the automated guiding system 101 may identify the target object 102 to be aligned with the region of interest.
  • the second frame 301.2 where the real-time relative difference may be said to be zero.
  • the stimulus generation module 204 of the automated guiding system 101 may be configured to generate the one or more coordinated stimulus for guiding the target object 102 with respect to current location of the target object 102.
  • the target object may be displaced from the current location and towards the region of interest.
  • the generated one or more coordinated stimulus aids in alignment of the target object 102 within the region of interest.
  • the one or more coordinated stimulus are generated based on the real-time relative difference.
  • the one or more coordinated stimulus are generated in order to reduce the value of the real-time relative difference. For example, consider that the value of the real-time relative difference between the target object 102 and the region of interest is 1 unit and the target object is placed towards the right side of the region of interest.
  • the one or more coordinated stimulus are generated such that the target object 102 moves towards the left, nearing the region of interest, to aid reduction of the value of the real-time relative difference.
  • One or more algorithms known to a person skilled in the art may be implemented to generate the one or more coordinated stimulus.
  • the one or more coordinated stimulus may be selected to be at least one of audio stimuli, visual stimuli, and mechanical stimuli. The selection may be automatically done by the automated guiding system 101 or may be manually set by a user associated with the automated guiding system 101.
  • the eye image capturing equipment may also comprise the imaging system 103, for capturing images.
  • the automated guiding system 101 may generate audio stimuli and video stimuli.
  • the audio stimuli and the video stimuli include audio sounds and visual images.
  • the eye image capturing equipment may comprise an opening on the body of the eye image capturing equipment which allows the patient to view the visual image generated by the imaging system 103 and a speaker to output the audio sounds. Exemplary representation of visual stimuli is shown in Figure 3c. The patient may be instructed to position the chin on a chin rest and the forehead on a forehead rest to view the visual image through the opening.
  • Movement of the face of the patient under diagnosis may be fixed with the help of the chin rest and the forehead rest.
  • the one or more coordinated stimulus aid in guiding the eye to align the optic disc with the region of interest.
  • First anterior position 303.1 of the eye is to look straight.
  • Corresponding first posterior position 304.1 of the eye shows the position of the optic disc 305.2 and the macula 305.1 of the eye. It may be noticed that the optic disc 305.2 aligns to the centre of the eye. However, in a case where the eyeball is displaced towards the right or left, the optic disc 305.2 may not be aligned to the centre.
  • second anterior position 303.2 of the eye is to look left.
  • Corresponding second posterior position 304.2 of the eye shows position of the optic disc 305.2.
  • the macula 305.1 of the eye may disappear from the field of view or imaging frame.
  • third anterior position 303.3 of the eye is to look right.
  • Corresponding third posterior position 304.3 of the eye shows position of the optic disc 305.2 and the macula 305.1 of the eye.
  • the generated one or more coordinated stimuli aid in varying the position of the eye to achieve the desirable position.
  • the visual stimuli may be first visual stimuli 307.1 including a reference pattern with a reference block.
  • the patient may be asked to view the reference block in the reference pattern.
  • Corresponding audio stimuli may be "LOOK STRAIGHT".
  • fourth posterior position 304.4 may be achieved.
  • second visual stimuli 307.2 may be generated.
  • Corresponding audio stimuli may be "LOOK TWO BLOCKS UP".
  • third visual stimuli 307.3 may be generated.
  • Corresponding audio stimuli may be "LOOK FOUR BLOCKS TOWARDS LEFT".
  • the position may vary to sixth posterior position 304.6 where the optic disc 305.2 is aligned with the region of interest 306.
  • the image of the optic disc 305.2 may be captured.
  • the generated stimuli may be mechanical stimuli which are configured to physically move the target object 102 to align with the region of interest 306.
  • the one or more coordinated stimulus generated by the stimulus generation module 204 may be stored as the stimulus data 209.
  • Implementation of the automated guiding system 101 may not be restricted to imaging of the target object 102.
  • the automated guiding system 101 may be implemented for various other applications as well. Such an application may have a need to align the target object 102 with the region of interest.
  • the other data 210 may store data, including temporary data and temporary files, generated by modules for performing the various functions of the automated guiding system 101.
  • the one or more modules 107 may also include other modules 205 to perform various miscellaneous functionalities of the automated guiding system 101. It will be appreciated that such modules may be represented as a single module or a combination of different modules.
  • Figure 4 shows a flow diagram illustrating a method for automatic guiding of the target object 102, in accordance with some embodiments of present disclosure.
  • the tracking module 201 of the automated guiding system 101 may be configured to track the target object 102.
  • the tracking of the target object may be performed in real-time.
  • the location information estimation module 202 of the automated guiding system 101 may be configured to estimate a current location information associated with the target object 102 to be guided to align with the region of interest.
  • the current location information associated with the target object 102 may be estimated by capturing an image of the field of view of the automated guiding system 101 and calculating the current location information which comprises coordinates of the target object 102 in the field of view of the captured image.
  • the comparison module 203 of the automated guiding system 101 may be configured to compare the current location information associated with the target object 102 with the predefined location information of the region of interest.
  • the predefined location information may include coordinates of the region of interest 306 in the field of view of the automated guiding system 101.
  • the comparison of the current location information with the predefined location information may be performed by calculating a real-time relative difference between the current location information of the target object 102 and the predefined location information of the region of interest. Further, value of the real-time relative difference is identified to be greater than the predefined threshold value.
  • the stimulus generation module 204 of the automated guiding system 101 may be configured to generate, based on the comparison, the one or more coordinated stimulus for guiding the target object 102 with respect to current location of the target object.
  • the generated one or more coordinated stimulus aids in alignment of the target object 102 within the region of interest.
  • the one or more coordinated stimulus are generated based on the real-time relative difference and are used for reducing the value of the real-time relative difference.
  • the one or more coordinated stimulus may be selected to be at least one of audio stimuli, visual stimuli, and mechanical stimuli.
  • the automated guiding of the target object 102 is performed using generated one or more coordinated stimulus until the target object 102 is aligned with the region of interest.
  • the steps of blocks 401, 402, 403 and 404 may be performed iteratively to achieve alignment of the target object 102 with the region of interest.
  • the imaging system 103 associated with the automated guiding system 101 may capture images of the target object 102.
  • Method illustrated in Figure 4 may include one or more blocks for executing processes in the automated guiding system 101.
  • the method illustrated in Figure 4 may be described in the general context of computer executable instructions.
  • computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types.
  • FIG. 5 illustrates a block diagram of an exemplary computer system 500 for implementing embodiments consistent with the present disclosure.
  • the computer system 500 is used to implement the automated guiding system 101.
  • the computer system 500 may include a central processing unit ("CPU" or "processor") 502.
  • the processor 502 may include at least one data processor for executing processes in Virtual Storage Area Network.
  • the processor 502 may include specialized processing units such as, integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
  • the processor 502 may be disposed in communication with one or more input/output (I/O) devices 509 and 510 via I/O interface 501.
  • the I/O interface 501 may employ communication protocols/methods such as, without limitation, audio, analog, digital, monaural, RCA, stereo, IEEE-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), radio frequency (RF) antennas, S-Video, VGA, IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like), etc.
  • the computer system 500 may communicate with one or more I/O devices 509 and 510.
  • the input devices 509 may be an antenna, keyboard, mouse, joystick, (infrared) remote control, camera, card reader, fax machine, dongle, biometric reader, microphone, touch screen, touchpad, trackball, stylus, scanner, storage device, transceiver, video device/source, etc.
  • the output devices 510 may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, Plasma Display Panel (PDP), Organic light-emitting diode display (OLED) or the like), audio speaker, etc.
  • the computer system 500 may consist of the automated guiding system 101.
  • the processor 502 may be disposed in communication with a communication network 511 via a network interface 503.
  • the network interface 503 may communicate with the communication network 511.
  • the network interface 503 may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc.
  • the communication network 511 may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc.
  • the computer system 500 may communicate with target object 512 and imaging system 513, for automatic guiding of the target object 512.
  • the network interface 503 may employ connection protocols including, but not limited to, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc.
  • the communication network 511 includes, but is not limited to, a direct interconnection, an e-commerce network, a peer to peer (P2P) network, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, Wi-Fi, and such.
  • the first network and the second network may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), etc., to communicate with each other.
  • the first network and the second network may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, etc.
  • the processor 502 may be disposed in communication with a memory 505 (e.g., RAM, ROM, etc. not shown in Figure 5) via a storage interface 504.
  • the storage interface 504 may connect to memory 505 including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as serial advanced technology attachment (SATA), Integrated Drive Electronics (IDE), IEEE-1394, Universal Serial Bus (USB), fibre channel, Small Computer Systems Interface (SCSI), etc.
  • the memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, Redundant Array of Independent Discs (RAID), solid-state memory devices, solid-state drives, etc.
  • the memory 505 may store a collection of program or database components, including, without limitation, user interface 506, an operating system 507, web browser 508 etc.
  • computer system 500 may store user/application data, such as, the data, variables, records, etc., as described in this disclosure.
  • databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle® or Sybase®.
  • the operating system 507 may facilitate resource management and operation of the computer system 500.
  • Examples of operating systems include, without limitation, APPLE MACINTOSH® OS X, UNIX®, UNIX-like system distributions (e.g., BERKELEY SOFTWARE DISTRIBUTION™ (BSD), FREEBSD™, NETBSD™, OPENBSD™, etc.), LINUX DISTRIBUTIONS™ (e.g., RED HAT™, UBUNTU™, KUBUNTU™, etc.), IBM™ OS/2, MICROSOFT™ WINDOWS™ (XP™, VISTA™/7/8/10, etc.), APPLE® IOS™, GOOGLE® ANDROID™, BLACKBERRY® OS, or the like.
  • the computer system 500 may implement a web browser 508 stored program component.
  • the web browser 508 may be a hypertext viewing application, such as Microsoft Internet Explorer, Google Chrome, Mozilla Firefox, Apple Safari, etc. Secure web browsing may be provided using Hypertext Transport Protocol Secure (HTTPS), Secure Sockets Layer (SSL), Transport Layer Security (TLS), etc. Web browser 508 may utilize facilities such as AJAX, DHTML, Adobe Flash, JavaScript, Java, Application Programming Interfaces (APIs), etc.
  • the computer system 500 may implement a mail server stored program component.
  • the mail server may be an Internet mail server such as Microsoft Exchange, or the like.
  • the mail server may utilize facilities such as ASP, ActiveX, ANSI C++/C#, Microsoft .NET, Common Gateway Interface (CGI) scripts, Java, JavaScript, PERL, PHP, Python, WebObjects, etc.
  • the mail server may utilize communication protocols such as Internet Message Access Protocol (IMAP), Messaging Application Programming Interface (MAPI), Microsoft Exchange, Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), or the like.
  • the computer system 500 may implement a mail client stored program component.
  • the mail client may be a mail viewing application, such as Apple Mail, Microsoft Entourage, Microsoft Outlook, Mozilla Thunderbird, etc.
  • a computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored.
  • a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein.
  • the term "computer-readable medium" should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, non-volatile memory, hard drives, Compact Disc (CD) ROMs, DVDs, flash drives, disks, and any other known physical storage media.
  • Embodiments of the present disclosure provide a system that is capable of automatically guiding a target object without a need for manual intervention or physical fixation.
  • Embodiments of the present disclosure provision error-free guiding of the target object by dynamically generating coordinated stimuli for the guidance.
  • the described operations may be implemented as a method, system or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof.
  • the described operations may be implemented as code maintained in a "non-transitory computer readable medium", where a processor may read and execute the code from the computer readable medium.
  • the processor is at least one of a microprocessor and a processor capable of processing and executing the queries.
  • a non-transitory computer readable medium may include media such as magnetic storage medium (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (CD-ROMs, DVDs, optical disks, etc.), volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, Flash Memory, firmware, programmable logic, etc.), etc.
  • non-transitory computer-readable media may include all computer-readable media except for a transitory signal.
  • the code implementing the described operations may further be implemented in hardware logic (e.g., an integrated circuit chip, Programmable Gate Array (PGA), Application Specific Integrated Circuit (ASIC), etc.).
  • An“article of manufacture” includes non-transitory computer readable medium, and /or hardware logic, in which code may be implemented.
  • a device in which the code implementing the described embodiments of operations is encoded may include a computer readable medium or hardware logic.
  • FIG. 4 shows certain events occurring in a certain order. In alternative embodiments, certain operations may be performed in a different order, modified, or removed. Moreover, steps may be added to the above described logic and still conform to the described embodiments. Further, operations described herein may occur sequentially or certain operations may be processed in parallel. Yet further, operations may be performed by a single processing unit or by distributed processing units.

Abstract

Embodiments of present disclosure relates to method and system for automated guiding of a target object. For the guiding, a target object to be aligned with a region of interest is tracked and a current location information associated with the target object is estimated. Further, the current location information associated with the target object is compared with predefined location information of the region of interest. Based on the comparison, one or more coordinated stimulus are generated for guiding the target object with respect to current location of the target object, to aid in alignment of the target object within the region of interest. The automated guiding of the target object is performed using generated one or more coordinated stimulus until the target object is aligned with the region of interest.

Description

AN AUTOMATED GUIDING SYSTEM AND METHOD THEREOF
TECHNICAL FIELD
[0001] The present disclosure generally relates to automation systems. Particularly, but not exclusively, the present disclosure relates to an automated guiding system and method for imaging a target object.
BACKGROUND
[0002] Imaging the eye using various devices and different methods, for the diagnosis and treatment of the eyes, according to the state of the art, is well known. Conventionally, imaging the posterior part of the eye is done manually by an ophthalmologist or a skilled lab technician. The ophthalmologist or a skilled lab technician uses various imaging devices for capturing the image of the retina of the patient's eye. The face of a patient is positioned on the device with the chin resting at a chin rest and the patient is asked to keep looking at a distinct object/light source, which acts as an eye fixation. By this, the ophthalmologist or the lab technician aims to get the retina of the eye in the imaging frame. The ophthalmologist or the lab technician manually adjusts the device and captures the image of the retina of the eye upon alignment with the imaging frame. Such approaches are implemented for imaging many other parts of the body as well.
[0003] The eye fixation and the definition of the region of interest are highly subjective in the existing method, which causes unintended retinal images and loss of important information. The image capturing body (the operator), the eye fixation source and the object (the retina) act without any direct link, which might result in erroneous output. Moreover, capturing the image of the retina with different poses is not possible because the patient's eye will be instructed to focus on the fixation light or object generated in the imaging device.
[0004] The information disclosed in this background of the disclosure section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
SUMMARY
[0005] In an embodiment, the present disclosure relates to a method for automated guiding of a target object. For the guiding, a target object to be aligned with a region of interest is tracked and a current location information associated with the target object is estimated. Further, the current location information associated with the target object is compared with predefined location information of the region of interest. Based on the comparison, one or more coordinated stimulus are generated for guiding the target object, to aid in alignment of the target object within the region of interest. The automated guiding of the target object is performed using generated one or more coordinated stimulus until the target object is aligned with the region of interest.
[0006] In an embodiment, the present disclosure relates to an automated guiding system for automated guiding of a target object. The automated guiding system comprises a processor and a memory communicatively coupled to the processor. The memory stores processor-executable instructions, which, on execution, cause the processor to perform the automated guiding. For the guiding, a target object to be aligned with a region of interest is tracked and a current location information associated with the target object is estimated. Further, the current location information associated with the target object is compared with predefined location information of the region of interest. Based on the comparison, one or more coordinated stimulus are generated for guiding the target object, to aid in alignment of the target object within the region of interest. The automated guiding of the target object is performed using generated one or more coordinated stimulus until the target object is aligned with the region of interest.
[0007] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of system and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:
[0009] Figure 1 shows an exemplary environment of an automated guiding system for automated guiding of target object, in accordance with some embodiments of the present disclosure;
[0010] Figure 2 shows a detailed block diagram of an automated guiding system for automated guiding of target object, in accordance with some embodiments of the present disclosure;
[0011] Figures 3a-3e show exemplary embodiments for automated guiding of target object, in accordance with some embodiments of the present disclosure;
[0012] Figure 4 shows a flow diagram illustrating method for automated guiding of target object, in accordance with some embodiments of present disclosure; and
[0013] Figure 5 illustrates a block diagram of an exemplary computer system for implementing embodiments consistent with the present disclosure.
[0014] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and executed by a computer or processor, whether such computer or processor is explicitly shown.
DESCRIPTION OF THE DISCLOSURE
[0015] In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
[0016] While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and the scope of the disclosure.
[0017] The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by "comprises... a" does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
[0018] The terms "includes", "including", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device or method that includes a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by "includes... a" does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
[0019] In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.
[0020] Present disclosure provides a system and a method for automatic guiding of a target object which may be imaged using an imaging system. The automation in guiding may be achieved by fixing a region of interest i.e., an imaging frame, and by automatically guiding the target object to be within the region of interest. Coordinates of the target object and the region of interest are determined to generate coordinated stimulus which may be audio stimuli, video stimuli or mechanical stimuli.
[0021] Figure 1 shows an exemplary environment 100 associated with an automated guiding system 101 for automated guiding of a target object 102. The automated guiding system 101 may be in communication with an imaging system 103 which is configured to image the target object 102. In an embodiment, the automated guiding system 101 may be implemented in a variety of computing systems, such as a system on module, single board computer, microcontroller based system, laptop computer, a desktop computer, a Personal Computer (PC), a notebook, a smartphone, a tablet, e-book readers, a server, a network server, and the like. The automated guiding system 101 may communicate with the imaging system 103 via a communication network (not shown in the figure). The communication network may include, but is not limited to, at least one of a direct interconnection, a Peer to Peer (P2P) network, Local Area Network (LAN), Wide Area Network (WAN), wireless network (e.g., using Wireless Application Protocol), Controller Area Network (CAN), the Internet, Wi-Fi, Bluetooth, Near-Field Communication (NFC) and such. The target object 102 may be an object which needs to be imaged. For the imaging, the target object 102 needs to be aligned with the region of interest. The region of interest may be within the imaging frame of the imaging system 103. In an embodiment, the region of interest may be part of the field of view of the automated guiding system 101. In an embodiment, the region of interest may be a central portion of the imaging frame of the imaging system 103 or a central portion of the field of view of the automated guiding system 101.
[0022] The automated guiding system 101 may be configured to guide the target object 102 within the region of interest. The automated guiding system 101 may include one or more processors 104, Input/Output (I/O) interface 105 and a memory 106. In some embodiments, the memory 106 may be communicatively coupled to the one or more processors 104. The memory 106 stores instructions, executable by the one or more processors 104, which on execution, may cause the automated guiding system 101 to guide the target object 102 as proposed in the present disclosure. In an embodiment, the memory 106 may include one or more modules 107 and data 108. The one or more modules 107 may be configured to perform the steps of the present disclosure using the data 108, to guide the target object 102. In an embodiment, each of the one or more modules 107 may be a hardware unit which may be outside the memory 106 and coupled with the automated guiding system 101.
[0023] For the guiding, the automated guiding system 101 may be configured to track the target object 102 in real-time. In an embodiment, the tracking of the target object may be performed using the imaging system 103 associated with the automated guiding system 101. In an embodiment, the tracking may be performed by capturing images or video of the target object 102. The automated guiding system 101 may include one or more techniques, known to a person skilled in the art, to track the target object 102, using the captured images or video. One or more other techniques, known to a person skilled in the art, may be implemented in the automated guiding system 101, to track the target object 102.
[0024] Further, the automated guiding system 101 may be configured to estimate a current location information associated with the target object 102 based on the tracking. In an embodiment, an image of the field of view of the automated guiding system 101 may be captured for determining the current location information associated with the target object 102. The captured image includes the target object 102 and the region of interest. Further, the current location information comprising coordinates of the target object 102 in the field of view of the captured image is calculated. In an embodiment, the image for receiving the current location information associated with the target object 102 may be captured using the imaging system 103 associated with the automated guiding system 101. Further, upon capturing the image, the automated guiding system 101 may be configured to calculate the coordinates of the target object 102. In an embodiment, a dedicated processing unit or server may be associated with the automated guiding system 101 and the imaging system 103, for calculating the coordinates of the target object 102. In such case, the dedicated processing unit or server may communicate the coordinates with the automated guiding system 101, for automatic guiding of the target object 102. One or more techniques, known to a person skilled in the art, may be implemented to calculate the coordinates using the image of the field of view of the automated guiding system 101.
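As a minimal illustration of the coordinate-estimation step described above, the sketch below locates the target object within a captured field-of-view frame. The disclosure leaves the actual technique to a person skilled in the art, so the use of OpenCV template matching, the helper name, and the centre-of-match convention are illustrative assumptions only.

```python
# Illustrative only: the disclosure does not mandate template matching; it is
# merely one technique known in the art for locating a target in an image.
import cv2
import numpy as np

def estimate_target_coordinates(frame: np.ndarray, target_template: np.ndarray):
    """Return the (x, y) centre of the best match for the target within the frame."""
    scores = cv2.matchTemplate(frame, target_template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best_corner = cv2.minMaxLoc(scores)   # top-left corner of the best match
    h, w = target_template.shape[:2]
    return best_corner[0] + w // 2, best_corner[1] + h // 2
```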
[0025] Further, the automated guiding system 101 may be configured to compare the current location information associated with the target object 102 with predefined location information of the region of interest. In an embodiment, the predefined location information of the region of interest may indicate the coordinates of the region of interest within the field of view of the automated guiding system 101. The predefined location information of the region of interest may be pre-stored data which is retrieved during the comparison. In an embodiment, the predefined location information may be stored in the memory 106 and retrieved when performing the comparison. In an embodiment, the automated guiding system 101 may be associated with a repository which stores the predefined location information. The predefined location information may be retrieved from such a repository when performing the comparison. For the comparison of the current location information associated with the target object 102 with the predefined location information of the region of interest, the automated guiding system 101 may be configured to calculate a real-time relative difference between the current location information of the target object 102 and the predefined location information of the region of interest. In an embodiment, the real-time relative difference may indicate the distance between the target object 102 and the region of interest. Further, the real-time relative difference is compared with a predefined threshold value to identify whether the value of the real-time relative difference is greater than the predefined threshold value. In an embodiment, the predefined threshold value may be equal to "0".
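A minimal sketch of this comparison step is given below, assuming the current location and the predefined location are both expressed as (x, y) pixel coordinates and that the real-time relative difference is the component-wise offset; the zero default threshold mirrors the "0" value mentioned above.

```python
import math

def relative_difference(current_xy, roi_xy):
    """Real-time relative difference (dx, dy) between the target object and the region of interest."""
    dx = roi_xy[0] - current_xy[0]
    dy = roi_xy[1] - current_xy[1]
    return dx, dy

def is_aligned(dx, dy, threshold_px: float = 0.0) -> bool:
    """The target is considered aligned when the offset magnitude does not exceed the threshold."""
    return math.hypot(dx, dy) <= threshold_px
```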
[0026] Based on the comparison, one or more coordinated stimulus are generated for guiding the target object 102 with respect to the current location of the target object. By said guiding, the target object 102 may be displaced from the current location towards the region of interest, to aid in alignment of the target object 102 within the region of interest. In an embodiment, the one or more coordinated stimulus are generated based on the real-time relative difference, for reducing the value of the real-time relative difference. In an embodiment, the one or more coordinated stimulus may be selected to be at least one of audio stimuli, visual stimuli, and mechanical stimuli. The audio stimuli may include audio signals which are provided via a speaker in the environment of the target object 102. In an embodiment, the audio signals may be in the form of instructions. The visual stimuli may include images or videos which are provided via a display interface to the target object 102. The visual stimuli may be provided to be viewed by the target object 102. The mechanical stimuli may be provided with physical contact with the target object 102. For example, the mechanical stimuli may be provided via an actuator associated with the target object 102. In an embodiment, at least one of said audio stimuli, visual stimuli and mechanical stimuli may be coordinated with each other for aiding the alignment between the target object 102 and the region of interest. For example, the audio stimuli may include instructions relating to viewing of the visual stimuli.
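The stimulus generation could, for example, be structured as sketched below; the modality names, the sign convention for the offset, and the instruction wording are illustrative assumptions, not the specific algorithm of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class CoordinatedStimulus:
    """One coordinated stimulus; the selected modalities are delivered together."""
    audio: Optional[str] = None                       # spoken instruction played via a speaker
    visual: Optional[str] = None                      # cue to present on the display interface
    mechanical: Optional[Tuple[float, float]] = None  # displacement command for an actuator

def generate_stimulus(dx: float, dy: float,
                      modalities=("audio", "visual")) -> CoordinatedStimulus:
    """Build cues that all point the target object toward the region of interest.

    Sign convention (an assumption of this sketch): positive dx means the region of
    interest lies to the right of the target in image coordinates, positive dy below it.
    """
    horiz = "RIGHT" if dx > 0 else "LEFT"
    vert = "DOWN" if dy > 0 else "UP"
    stim = CoordinatedStimulus()
    if "audio" in modalities:
        stim.audio = f"LOOK {horiz}" if abs(dx) >= abs(dy) else f"LOOK {vert}"
    if "visual" in modalities:
        stim.visual = f"highlight fixation mark towards {horiz.lower()}/{vert.lower()}"
    if "mechanical" in modalities:
        stim.mechanical = (dx, dy)
    return stim
```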
[0027] The automated guiding system 101 may perform the automatic guiding using the generated one or more coordinated stimulus until the target object 102 is aligned with the region of interest. Consider that the steps of automated guiding are performed for a target object 102 to aid in alignment with a region of interest. The steps of estimating the current location information and comparing the current location information with the predefined location information are performed. If the alignment has not yet occurred, the one or more coordinated stimulus may be generated again. The above steps may be repeated until the target object 102 is aligned with the region of interest. Thus, the automated guiding may be performed iteratively until the target object 102 is aligned with the region of interest.
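Putting the preceding steps together, the iterative guiding may be viewed as a simple control loop. The sketch below reuses the illustrative helpers from the earlier snippets (`estimate_current_location`, `relative_difference`, `is_aligned`, `generate_stimulus`) and assumes that a `capture_frame` callable and a `deliver` callable are provided by the surrounding equipment; none of these names are part of the disclosure.

```python
def guide_until_aligned(capture_frame, roi_xy, deliver, max_iterations: int = 50) -> bool:
    """Iterate track -> estimate -> compare -> stimulate until alignment (or give up).

    capture_frame: callable returning the latest grayscale frame from the imaging system
    roi_xy:        predefined (x, y) coordinates of the region of interest
    deliver:       callable that presents a CoordinatedStimulus to the subject
    """
    for _ in range(max_iterations):
        frame = capture_frame()
        location = estimate_current_location(frame)
        if location is None:
            continue                              # target not detected; try the next frame
        dx, dy = relative_difference(location, roi_xy)
        if is_aligned(dx, dy):
            return True                           # aligned: imaging can proceed
        deliver(generate_stimulus(dx, dy))        # coordinated stimulus reduces the difference
    return False                                  # alignment not achieved within the iteration budget
```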
[0028] Upon the alignment, imaging of the target object 102 may be performed using the imaging system 103. In an embodiment, the imaging system 103 may be a camera coupled with the automated guiding system 101. In an embodiment, the output of the imaging system 103 for the target object 102 may be used for examining the target object 102. In an embodiment, the imaging of the target object 102 may be performed for at least one of a plurality of wavelengths of light. In an embodiment, the wavelengths may range between 300 nm and 900 nm.
[0029] In an embodiment, the automated guiding system 101 may be associated with a plurality of target objects. In such a case, the automated guiding system 101 may be configured to automatically guide each of the plurality of target objects, sequentially or simultaneously. The I/O interface 105 of the automated guiding system 101 may be configured to enable transmission and reception of data. Received data may include, but is not limited to, an image of the field of view of the automated guiding system 101, the current location information, the predefined location information and so on. Transmitted data may include, but is not limited to, the generated one or more coordinated stimulus, and so on. One or more other data items, related to guiding the target object 102 to align with the region of interest, may be received and transmitted via the I/O interface 105.
[0030] Figure 2 shows a detailed block diagram of the automated guiding system 101 for automated guiding of the target object 102, in accordance with some embodiments of the present disclosure.
[0031] The data 108 and the one or more modules 107 in the memory 106 of the automated guiding system 101 are described herein in detail. [0032] In one implementation, the one or more modules 107 may include, but are not limited to, a tracking module 201, a location information estimation module 202, a comparison module 203, a stimulus generation module 204, and one or more other modules 205, associated with the automated guiding system 101.
[0033] In an embodiment, the data 108 in the memory 106 may include location information data 206, region of interest data 207, comparison data 208, stimulus data 209 and other data 210 associated with the automated guiding system 101.
[0034] In an embodiment, the data 108 in the memory 106 may be processed by the one or more modules 107 of the automated guiding system 101. In an embodiment, the one or more modules 107 may be implemented as dedicated units and, when implemented in such a manner, said modules may be configured with the functionality defined in the present disclosure to result in novel hardware. As used herein, the term module may refer to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a Field-Programmable Gate Array (FPGA), a Programmable System-on-Chip (PSoC), a combinational logic circuit, and/or other suitable components that provide the described functionality.
[0035] Imaging of an object may require the object to be placed within the focus of the imaging system 103. It may be required to guide the object toward the focus of the imaging system 103. The present disclosure discloses automatically guiding the target object 102 to the region of interest. The target object 102 and the region of interest may vary based on the application of the automated guiding system 101. For example, consider Figure 3a, where the automated guiding system 101 is implemented for examination of the eye. In such a case, the target object 102 may be an eye of a patient and the region of interest may be the imaging frame of the imaging system 103. As shown in Figure 3b, it may be required to align the Object of Interest (OOI), i.e., the optic disc of the eye in the first frame 301.1, within the area of the Region of Interest (ROI). Using the automated guiding system 101, the OOI may be aligned with the ROI as shown in the second frame 301.2. By aligning the optic disc with the region of interest, images of the optic disc may be captured, which may be further used for examination of the eye. In an embodiment, other applications of the automated guiding system 101 may include controlling the target object 102, displacing the target object 102, and so on.
[0036] For enabling the automatic guiding, the tracking module 201 may be configured to track the target object 102. In an embodiment, the tracking module 201 may be an integral part of the automated guiding system 101 or may be externally connected with the automated guiding system 101. In an embodiment, the tracking module 201 may aid in detecting the target object 102 in the field of view of the automated guiding system 101. Upon tracking of the target object 102, the location information estimation module 202 of the automated guiding system 101 may be configured to receive the current location information associated with the target object 102. The current location information may be estimated based on the tracking of the target object 102. In an embodiment, the current location information may indicate the location of the target object 102 within the field of view of the automated guiding system 101. In an embodiment, the current location information associated with the target object 102 may be received by capturing an image of the field of view of the automated guiding system 101 and calculating the current location information, which comprises coordinates of the target object 102 in the field of view of the captured image. In an embodiment, the received current location information, i.e., the coordinates of the target object 102, may be stored in the memory 106 as the location information data 206.
[0037] Further, the comparison module 203 of the automated guiding system 101 may be configured to compare the current location information associated with the target object 102 with the predefined location information of the region of interest. In an embodiment, the predefined location information may include coordinates of the region of interest in the field of view of the automated guiding system 101. Such predefined location information may be pre-stored as the region of interest data 207 in the memory 106 of the automated guiding system 101. In an embodiment, the comparison of the current location information with the predefined location information may be performed by calculating a real-time relative difference between the current location information of the target object 102 and the predefined location information of the region of interest. Results of the comparison performed by the comparison module 203 may be stored as the comparison data 208 and used for generating the one or more coordinated stimulus.
[0038] Consider the first frame 301.1, where the real-time relative difference between the optic disc and the region of interest is (Δx, Δy). Further, the value of the real-time relative difference is identified as being greater than the predefined threshold value. It may be desirable that the value of the real-time relative difference is zero for the alignment. In case the value of the real-time relative difference is identified as being lesser than the predefined threshold value or zero, the automated guiding system 101 may identify the target object 102 as being aligned with the region of interest. Consider the second frame 301.2, where the real-time relative difference may be said to be zero.
[0039] Based on the comparison, the stimulus generation module 204 of the automated guiding system 101 may be configured to generate the one or more coordinated stimulus for guiding the target object 102 with respect to the current location of the target object 102. By the guiding, the target object may be displaced from the current location towards the region of interest. The generated one or more coordinated stimulus aid in alignment of the target object 102 within the region of interest. In an embodiment, the one or more coordinated stimulus are generated based on the real-time relative difference. The one or more coordinated stimulus are generated in order to reduce the value of the real-time relative difference. For example, consider that the value of the real-time relative difference between the target object 102 and the region of interest is 1 unit and the target object is placed towards the right side of the region of interest. The one or more coordinated stimulus are generated such that the target object 102 moves towards the left, nearing the region of interest, to aid reduction of the value of the real-time relative difference. One or more algorithms, known to a person skilled in the art, may be implemented to generate the one or more coordinated stimulus. In an embodiment, the one or more coordinated stimulus may be selected to be at least one of audio stimuli, visual stimuli, and mechanical stimuli. The selection may be done automatically by the automated guiding system 101 or may be set manually by a user associated with the automated guiding system 101.
[0040] Consider the automated guiding system 101 implemented in eye image capturing equipment as shown in Figure 3a. The eye image capturing equipment may also comprise the imaging system 103 for capturing images. In an embodiment, the automated guiding system 101 may provide audio stimuli and visual stimuli. In some embodiments, the audio stimuli and the visual stimuli include audio sounds and visual images. In some embodiments, the eye image capturing equipment may comprise an opening on the body of the eye image capturing equipment which allows the patient to view the visual image generated by the imaging system 103, and a speaker to output the audio sounds. An exemplary representation of visual stimuli is shown in Figure 3c. The patient may be instructed to position the chin on a chin rest and the forehead on a forehead rest to view the visual image through the opening. Movement of the face of the patient under diagnosis may be fixed with the help of the chin rest and the forehead rest. [0041] For the given example, the one or more coordinated stimulus aid in guiding the eye to align the optic disc with the region of interest. Consider Figure 3d illustrating variation in positions of the optic disc of the eye based on movement of the eyeball. In the first anterior position 303.1, the eye looks straight. The corresponding first posterior position 304.1 of the eye shows the position of the optic disc 305.2 and the macula 305.1 of the eye. It may be noticed that the optic disc 305.2 aligns to the centre of the eye. However, in a case where the eyeball is displaced towards the right or left, the optic disc 305.2 may not be aligned to the centre. For example, consider the second anterior position 303.2, where the eye looks left. The corresponding second posterior position 304.2 of the eye shows the position of the optic disc 305.2. The macula 305.1 of the eye may disappear from the field of view or imaging frame. Consider the third anterior position 303.3, where the eye looks right. The corresponding third posterior position 304.3 of the eye shows the position of the optic disc 305.2 and the macula 305.1 of the eye. For positions similar to the second posterior position 304.2 and the third posterior position 304.3, it is desirable to bring the position to the first posterior position 304.1. The generated one or more coordinated stimuli aid in varying the position of the eye to achieve the desirable position.
[0042] Consider that visual stimuli and audio stimuli are generated as the coordinated stimulus for aligning the optic disc 305.2 with the region of interest 306, as shown in Figure 3e. Initially, as shown in the figure, the visual stimuli may be first visual stimuli 307.1 including a reference pattern with a reference block. The patient may be asked to view the reference block in the reference pattern. The corresponding audio stimuli may be "LOOK STRAIGHT". Using such first visual stimuli 307.1, the fourth posterior position 304.4 may be achieved. By calculating the real-time relative difference, it may be identified that the optic disc 305.2 is not aligned with the region of interest 306. To achieve the alignment, second visual stimuli 307.2 may be generated. The corresponding audio stimuli may be "LOOK TWO BLOCKS UP". Using the second visual stimuli 307.2, the position may vary to the fifth posterior position 304.5, wherein the optic disc 305.2 is still not aligned with the region of interest 306. Based on the real-time relative difference calculated for the fifth posterior position 304.5, third visual stimuli 307.3 may be generated. The corresponding audio stimuli may be "LOOK FOUR BLOCKS TOWARDS LEFT". Using such third visual stimuli 307.3, the position may vary to the sixth posterior position 304.6, where the optic disc 305.2 is aligned with the region of interest 306. Upon the alignment, the image of the optic disc 305.2 may be captured. In an embodiment, the generated stimuli may be mechanical stimuli which are configured to physically move the target object 102 to align with the region of interest 306. Various combinations of the one or more coordinated stimulus may be used for achieving the alignment. The one or more coordinated stimulus generated by the stimulus generation module 204 may be stored as the stimulus data 209.
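For block-based instructions of the kind used in this example (e.g., "LOOK TWO BLOCKS UP"), one possible mapping from the real-time relative difference to an audio instruction is sketched below; the block size in pixels and the correspondence between image offset and gaze direction are assumptions made for illustration only.

```python
def block_instruction(dx: float, dy: float, block_px: float = 25.0) -> str:
    """Translate a pixel offset into an audio instruction that refers to the reference grid."""
    parts = []
    v_blocks = round(abs(dy) / block_px)       # vertical offset quantised to grid blocks
    h_blocks = round(abs(dx) / block_px)       # horizontal offset quantised to grid blocks
    if v_blocks:
        parts.append(f"LOOK {v_blocks} BLOCKS {'UP' if dy < 0 else 'DOWN'}")
    if h_blocks:
        parts.append(f"LOOK {h_blocks} BLOCKS TOWARDS {'LEFT' if dx < 0 else 'RIGHT'}")
    return ", THEN ".join(parts) if parts else "LOOK STRAIGHT"
```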
[0043] Implementation of the automated guiding system 101 may not be restricted to imaging of the target object 102. The automated guiding system 101 may be implemented for various other applications as well, i.e., for any application that needs to align the target object 102 with a region of interest.
[0044] The other data 210 may store data, including temporary data and temporary files, generated by the modules for performing the various functions of the automated guiding system 101. The one or more modules 107 may also include other modules 205 to perform various miscellaneous functionalities of the automated guiding system 101. It will be appreciated that such modules may be represented as a single module or a combination of different modules.
[0045] Figure 4 shows a flow diagram illustrating a method for automatic guiding of the target object 102, in accordance with some embodiments of the present disclosure.
[0046] At block 401, the tracking module 201 of the automated guiding system 101 may be configured to track the target object 102. The tracking of the target object may be performed in real-time.
[0047] At block 402, the location information estimation module 202 of the automated guiding system 101 may be configured to estimate current location information associated with the target object 102 to be guided to align with the region of interest. In an embodiment, the current location information associated with the target object 102 may be estimated by capturing an image of the field of view of the automated guiding system 101 and calculating the current location information, which comprises coordinates of the target object 102 in the field of view of the captured image.
[0048] At block 403, the comparison module 203 of the automated guiding system 101 may be configured to compare the current location information associated with the target object 102 with the predefined location information of the region of interest. In an embodiment, the predefined location information may include coordinates of the region of interest 306 in the field of view of the automated guiding system 101. In an embodiment, the comparison of the current location information with the predefined location information may be performed by calculating a real-time relative difference between the current location information of the target object 102 and the predefined location information of the region of interest. Further, the value of the real-time relative difference is identified as being greater than the predefined threshold value.
[0049] At block 404, the stimulus generation module 204 of the automated guiding system 101 may be configured to generate, based on the comparison, the one or more coordinated stimulus for guiding the target object 102 with respect to the current location of the target object. The generated one or more coordinated stimulus aid in alignment of the target object 102 within the region of interest. In an embodiment, the one or more coordinated stimulus are generated based on the real-time relative difference and are used for reducing the value of the real-time relative difference. In an embodiment, the one or more coordinated stimulus may be selected to be at least one of audio stimuli, visual stimuli, and mechanical stimuli. The automated guiding of the target object 102 is performed using the generated one or more coordinated stimulus until the target object 102 is aligned with the region of interest.
[0050] In an embodiment, the steps of blocks 401, 402, 403 and 404 may be performed iteratively to achieve alignment of the target object 102 with the region of interest. Upon such alignment, the imaging system 103 associated with the automated guiding system 101 may capture images of the target object 102.
[0051] The method illustrated in Figure 4 may include one or more blocks for executing processes in the automated guiding system 101. The method illustrated in Figure 4 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types.
[0052] The order in which the method illustrated in Figure 4 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method. Additionally, individual blocks may be deleted from the method without departing from the scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof.
Computing System
[0053] Figure 5 illustrates a block diagram of an exemplary computer system 500 for implementing embodiments consistent with the present disclosure. In an embodiment, the computer system 500 is used to implement the automated guiding system 101. The computer system 500 may include a central processing unit ("CPU" or "processor") 502. The processor 502 may include at least one data processor for executing processes in a Virtual Storage Area Network. The processor 502 may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
[0054] The processor 502 may be disposed in communication with one or more input/output (I/O) devices 509 and 510 via an I/O interface 501. The I/O interface 501 may employ communication protocols/methods such as, without limitation, audio, analog, digital, monaural, RCA, stereo, IEEE-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), radio frequency (RF) antennas, S-Video, VGA, IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like), etc.
[0055] Using the I/O interface 501, the computer system 500 may communicate with one or more I/O devices 509 and 510. For example, the input devices 509 may be an antenna, keyboard, mouse, joystick, (infrared) remote control, camera, card reader, fax machine, dongle, biometric reader, microphone, touch screen, touchpad, trackball, stylus, scanner, storage device, transceiver, video device/source, etc. The output devices 510 may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, Plasma Display Panel (PDP), Organic light-emitting diode display (OLED) or the like), audio speaker, etc.
[0056] In some embodiments, the computer system 500 may consist of the automated guiding system 101. The processor 502 may be disposed in communication with a communication network 511 via a network interface 503. The network interface 503 may communicate with the communication network 511. The network interface 503 may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc. The communication network 511 may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc. Using the network interface 503 and the communication network 511, the computer system 500 may communicate with the target object 512 and the imaging system 513, for automatic guiding of the target object 512.
[0057] The communication network 511 includes, but is not limited to, a direct interconnection, an e-commerce network, a peer to peer (P2P) network, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, Wi-Fi, and such. The communication network 511 may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), etc., to communicate with each other. Further, the communication network 511 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, etc.
[0058] In some embodiments, the processor 502 may be disposed in communication with a memory 505 (e.g., RAM, ROM, etc., not shown in Figure 5) via a storage interface 504. The storage interface 504 may connect to the memory 505 including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as serial advanced technology attachment (SATA), Integrated Drive Electronics (IDE), IEEE-1394, Universal Serial Bus (USB), fibre channel, Small Computer Systems Interface (SCSI), etc. The memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, Redundant Array of Independent Discs (RAID), solid-state memory devices, solid-state drives, etc.
[0059] The memory 505 may store a collection of program or database components, including, without limitation, a user interface 506, an operating system 507, a web browser 508, etc. In some embodiments, the computer system 500 may store user/application data, such as the data, variables, records, etc., as described in this disclosure. Such databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle® or Sybase®.
[0060] The operating system 507 may facilitate resource management and operation of the computer system 500. Examples of operating systems include, without limitation, APPLE MACINTOSH® OS X, UNIX®, UNIX-like system distributions (e.g., BERKELEY SOFTWARE DISTRIBUTION™ (BSD), FREEBSD™, NETBSD™, OPENBSD™, etc.), LINUX DISTRIBUTIONS™ (e.g., RED HAT™, UBUNTU™, KUBUNTU™, etc.), IBM™ OS/2, MICROSOFT™ WINDOWS™ (XP™, VISTA™/7/8, 10, etc.), APPLE® IOS™, GOOGLE® ANDROID™, BLACKBERRY® OS, or the like.
[0061] In some embodiments, the computer system 500 may implement a web browser 508 stored program component. The web browser 508 may be a hypertext viewing application, such as Microsoft Internet Explorer, Google Chrome, Mozilla Firefox, Apple Safari, etc. Secure web browsing may be provided using Hypertext Transport Protocol Secure (HTTPS), Secure Sockets Layer (SSL), Transport Layer Security (TLS), etc. Web browser 508 may utilize facilities such as AJAX, DHTML, Adobe Flash, JavaScript, Java, Application Programming Interfaces (APIs), etc. In some embodiments, the computer system 500 may implement a mail server stored program component. The mail server may be an Internet mail server such as Microsoft Exchange, or the like. The mail server may utilize facilities such as ASP, ActiveX, ANSI C++/C#, Microsoft .NET, Common Gateway Interface (CGI) scripts, Java, JavaScript, PERL, PHP, Python, WebObjects, etc. The mail server may utilize communication protocols such as Internet Message Access Protocol (IMAP), Messaging Application Programming Interface (MAPI), Microsoft Exchange, Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), or the like. In some embodiments, the computer system 500 may implement a mail client stored program component. The mail client may be a mail viewing application, such as Apple Mail, Microsoft Entourage, Microsoft Outlook, Mozilla Thunderbird, etc.
[0062] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term "computer-readable medium" should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, non-volatile memory, hard drives, Compact Disc (CD) ROMs, DVDs, flash drives, disks, and any other known physical storage media.
Advantages
[0063] Embodiments of the present disclosure provide a system that is capable of automatically guiding a target object without the need for manual intervention or physical fixation.
[0064] Embodiments of the present disclosure provide error-free guiding of the target object by dynamically generating coordinated stimuli for the guidance.
[0065] The described operations may be implemented as a method, system or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The described operations may be implemented as code maintained in a "non-transitory computer readable medium", where a processor may read and execute the code from the computer readable medium. The processor is at least one of a microprocessor and a processor capable of processing and executing the queries. A non-transitory computer readable medium may include media such as magnetic storage media (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (CD-ROMs, DVDs, optical disks, etc.), volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, Flash Memory, firmware, programmable logic, etc.), etc. Further, non-transitory computer-readable media may include all computer-readable media except for transitory media. The code implementing the described operations may further be implemented in hardware logic (e.g., an integrated circuit chip, Programmable Gate Array (PGA), Application Specific Integrated Circuit (ASIC), etc.).
[0066] An "article of manufacture" includes a non-transitory computer readable medium and/or hardware logic in which code may be implemented. A device in which the code implementing the described embodiments of operations is encoded may include a computer readable medium or hardware logic. Of course, those skilled in the art will recognize that many modifications may be made to this configuration without departing from the scope of the invention, and that the article of manufacture may include a suitable information bearing medium known in the art.
[0067] The terms "an embodiment", "embodiment", "embodiments", "the embodiment", "the embodiments", "one or more embodiments", "some embodiments", and "one embodiment" mean "one or more (but not all) embodiments of the invention(s)" unless expressly specified otherwise.
[0068] The terms "including", "comprising", "having" and variations thereof mean "including but not limited to", unless expressly specified otherwise.
[0069] The enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise.
[0070] The terms “a”, “an” and “the” mean “one or more”, unless expressly specified otherwise.
[0071] A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention.
[0072] When a single device or article is described herein, it will be readily apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be readily apparent that a single device/article may be used in place of the more than one device or article or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.
[0073] The illustrated operations of Figure 4 show certain events occurring in a certain order. In alternative embodiments, certain operations may be performed in a different order, modified, or removed. Moreover, steps may be added to the above described logic and still conform to the described embodiments. Further, operations described herein may occur sequentially or certain operations may be processed in parallel. Yet further, operations may be performed by a single processing unit or by distributed processing units.
[0074] Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
[0075] While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
Referral numerals: (the table of reference numerals is reproduced as figures in the original publication)

Claims

We Claim:
1. A method for automated guiding of a target object, comprising:
tracking, by an automated guiding system, in real-time, a target object to be guided to align with a region of interest;
estimating, by the automated guiding system, a current location information associated with the target object;
comparing, by the automated guiding system, the current location information associated with the target object with predefined location information of the region of interest; and
generating, by the automated guiding system, based on the comparison, one or more coordinated stimulus for guiding the target object with respect to current location of the target object, to aid in alignment of the target object within the region of interest, wherein the automated guiding of the target object is performed using generated one or more coordinated stimulus until the target object is aligned with the region of interest.
2. The method as claimed in claim 1, further comprises imaging the target object using an imaging system associated with the automated guiding system, upon alignment of the target object with the region of interest.
3. The method as claimed in claim 1, wherein imaging of the target object is performed for at least one of plurality of wavelengths of light.
4. The method as claimed in claim 1, wherein estimating the current location information associated with the target object comprises:
capturing an image of field of view of the automated guiding system, said field of view comprises the target object and the region of interest; and
calculating the current location information comprising coordinates of the target object in the field of view of the captured image.
5. The method as claimed in claim 1 , wherein the region of interest is part of field of view of the automated guiding system.
6. The method as claimed in claim 1, wherein the comparison of the current location information associated with the target object with the predefined location information of the region of interest comprises: calculating a real-time relative difference between the current location information of the target object and the predefined location information of the region of interest; and
identifying value of the real-time relative difference to be greater than a predefined threshold value, wherein the one or more coordinated stimulus are generated based on the real-time relative difference, for reducing the value of the real-time relative difference.
7. The method as claimed in claim 1, wherein the one or more coordinated stimulus is selected to be at least one of audio stimuli, visual stimuli, and mechanical stimuli.
8. The method as claimed in claim 1, wherein generating the one or more coordinated stimulus until the target object is aligned with the region of interest comprises iteratively performing the automated guiding.
9. The method as claimed in claim 1, wherein the target object is an optic disc of an eye.
10. The method as claimed in claim 1, wherein the target object is selected to be an object that is to be examined via imaging.
11. An automated guiding system for automated guiding of a target object, said automated guiding system comprises:
a processor; and
a memory communicatively coupled to the processor, wherein the memory stores processor-executable instructions, which, on execution, cause the processor to:
track, in real-time, a target object to be guided to align with a region of interest;
estimate, a current location information associated with the target object;
compare the current location information associated with the target object with predefined location information of the region of interest; and
generate based on the comparison, one or more coordinated stimulus for guiding the target object with respect to current location of the target object, to aid in alignment of the target object within the region of interest, wherein the automated guiding of the target object is performed using generated one or more coordinated stimulus until the target object is aligned with the region of interest.
12. The automated guiding system as claimed in claim 11, further comprises the processor configured to image the target object using an imaging system associated with the automated guiding system, upon alignment of the target object with the region of interest.
13. The automated guiding system as claimed in claim 12, wherein imaging of the target object is performed for at least one of plurality of wavelengths of light.
14. The automated guiding system as claimed in claim 11, wherein the current location information associated with the target object is estimated by:
capturing an image of field of view of the automated guiding system, said field of view comprises the target object and the region of interest; and
calculating the current location information comprising coordinates of the target object in the field of view of the captured image.
15. The automated guiding system as claimed in claim 11, wherein the region of interest is part of field of view of the automated guiding system.
16. The automated guiding system as claimed in claim 11, wherein the current location information associated with the target object is compared with the predefined location information of the region of interest by:
calculating a real-time relative difference between the current location information of the target object and the predefined location information of the region of interest; and
identifying value of the real-time relative difference to be greater than a predefined threshold value, wherein the one or more coordinated stimulus is generated based on the real-time relative difference, for reducing the value of the real-time relative difference.
17. The automated guiding system as claimed in claim 11, wherein the one or more coordinated stimulus is selected to be at least one of audio stimuli, visual stimuli, and mechanical stimuli.
18. The automated guiding system as claimed in claim 11, wherein the one or more coordinated stimulus is generated until the target object is aligned with the region of interest by iteratively performing the automated guiding.
19. The automated guiding system as claimed in claim 11, wherein the target object is an optic disc of an eye.
20. The automated guiding system as claimed in claim 11, wherein the target object is selected to be an object that is to be examined via imaging.
PCT/IB2020/056222 2019-07-01 2020-07-01 An automated guiding system and method thereof WO2021001766A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201941000053 2019-07-01
IN201941000053 2019-07-01

Publications (1)

Publication Number Publication Date
WO2021001766A1 true WO2021001766A1 (en) 2021-01-07

Family

ID=74100857

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2020/056222 WO2021001766A1 (en) 2019-07-01 2020-07-01 An automated guiding system and method thereof

Country Status (1)

Country Link
WO (1) WO2021001766A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170007447A1 (en) * 2009-11-16 2017-01-12 Alcon Lensx, Inc. Imaging surgical target tissue by nonlinear scanning
US20170119247A1 (en) * 2008-03-27 2017-05-04 Doheny Eye Institute Optical coherence tomography-based ophthalmic testing methods, devices and systems
US20180008141A1 (en) * 2014-07-08 2018-01-11 Krueger Wesley W O Systems and methods for using virtual reality, augmented reality, and/or a synthetic 3-dimensional information for the measurement of human ocular performance

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170119247A1 (en) * 2008-03-27 2017-05-04 Doheny Eye Institute Optical coherence tomography-based ophthalmic testing methods, devices and systems
US20170007447A1 (en) * 2009-11-16 2017-01-12 Alcon Lensx, Inc. Imaging surgical target tissue by nonlinear scanning
US20180008141A1 (en) * 2014-07-08 2018-01-11 Krueger Wesley W O Systems and methods for using virtual reality, augmented reality, and/or a synthetic 3-dimensional information for the measurement of human ocular performance

Similar Documents

Publication Publication Date Title
US20200211191A1 (en) Method and system for detecting disorders in retinal images
US11386712B2 (en) Method and system for multimodal analysis based emotion recognition
US20170103558A1 (en) Method and system for generating panoramic images with real-time annotations
US20220114719A1 (en) System and Methods for Qualifying Medical Images
WO2020079704A1 (en) Method and system for performing semantic segmentation of plurality of entities in an image
US10417484B2 (en) Method and system for determining an intent of a subject using behavioural pattern
US10178358B2 (en) Method for surveillance of an area of interest and a surveillance device thereof
US20160374605A1 (en) Method and system for determining emotions of a user using a camera
US20170147764A1 (en) Method and system for predicting consultation duration
EP3182330B1 (en) Method and system for remotely annotating an object in real-time
US10380747B2 (en) Method and system for recommending optimal ergonomic position for a user of a computing device
US10691650B2 (en) Method and server for vendor-independent acquisition of medical data
US20190236790A1 (en) Method and system for tracking objects within a video
US11340439B2 (en) Method and system for auto focusing a microscopic imaging system
WO2021001766A1 (en) An automated guiding system and method thereof
US11120916B2 (en) Method and a system for managing time-critical events associated with critical devices
US10929992B2 (en) Method and system for rendering augmented reality (AR) content for textureless objects
US10699417B2 (en) Method and system for acquisition of optimal images of object in multi-layer sample
EP3054669A1 (en) Method and device for assisting a user to capture images
EP3486023B1 (en) Method and system for performing laser marking
US20210165204A1 (en) Method and system for reconstructing a field of view
US10318799B2 (en) Method of predicting an interest of a user and a system thereof
US20230252768A1 (en) Method and system for dynamically generating annotated content for ar-vr applications
US20170148291A1 (en) Method and a system for dynamic display of surveillance feeds
US11144985B2 (en) Method and system for assisting user with product handling in retail shopping

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20835555

Country of ref document: EP

Kind code of ref document: A1

DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20835555

Country of ref document: EP

Kind code of ref document: A1