WO2016161905A1 - Method and apparatus for magnifying and/or highlighting objects on screens - Google Patents


Info

Publication number
WO2016161905A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
screen
focus area
eye focus
sensor
Prior art date
Application number
PCT/CN2016/077401
Other languages
French (fr)
Inventor
Annu BOTHRA
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Priority to CN201680019576.XA priority Critical patent/CN107430441A/en
Publication of WO2016161905A1 publication Critical patent/WO2016161905A1/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 - Eye tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 - Selection of displayed objects or displayed text elements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 - Indexing scheme relating to G06F3/048
    • G06F2203/04806 - Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Definitions

  • the present subject matter described herein in general, relates to a data processing system or computing system comprising a display screen, and more particularly, the invention relates to a graphical user interface for displaying objects on the display screen, and systems, apparatus and methods for automatically changing the display magnification of the objects displayed on the display screen to a desired display magnification.
  • Data processing systems or computing systems, such as mobile phones, including but not limited to smartphones, personal digital assistants (PDAs), etc., are widely used by almost every individual today.
  • These data processing or computing systems provide a graphical screen for displaying objects of a graphical user interface, such as text strings, hyperlinks, graphical buttons, etc.
  • These data processing or computing systems are highly advanced and include various in-built sensors, such as camera sensors, accelerometer sensors, touch sensors, and the like.
  • a user may read his/her e-mails, documents, and e-books, view an image/picture, view presentations/figures like pie charts, etc.
  • the prior art provides various solutions wherein, when a user is reading/viewing any object and wants to see it with more clarity, the user has to manually zoom in or zoom out on the content.
  • a manual intervention is therefore always required to magnify the contents, to make them more visible/clearer and avoid strain on the user's eyes.
  • if the surrounding light is not proper, it may result in straining of the eyesight; to avoid this, the user has to manually adjust the screen/display light and/or further magnify the object.
  • One aspect of the present invention is to provide a mechanism to magnify and/or highlight the objects on screen.
  • Another aspect of the present invention is to provide a mechanism to magnify and highlight the objects on screen based on the eye focus.
  • Another aspect of the present invention is to provide an automated mechanism to magnify and highlight the objects on screen based on the eye focus.
  • Another aspect of the present invention is to provide an automated mechanism to magnify and highlight the objects on screen based on the eye focus, thereby reducing the feeling of strain while reading/viewing the objects.
  • Another aspect of the present invention is to provide an automated mechanism to magnify and highlight the objects on screen based on an eye focus of a user and/or based on the surrounding light, thereby reducing the feeling of strain while reading/viewing the objects.
  • the present invention provides an apparatus, having at least one sensor, for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus.
  • the apparatus comprises detecting, based on at least one sensor event from said sensor, an eye focus area of at least one user concentrating on said screen and an intensity of light nearby said apparatus, and thereby automatically magnifying and/or highlighting said eye focus area, wherein said eye focus area is the current focused area of said user on said object displayed on said screen, and said highlighting is based on said intensity of light nearby said apparatus.
  • the present invention provides an apparatus, having at least one sensor, for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus.
  • the apparatus comprises a detection module, upon receipt of at least one sensor event from said sensor, configured to detect an eye focus area of at least one user concentrating on said screen, and detect an intensity of light nearby said apparatus.
  • the apparatus also comprises a magnify and highlight module configured to magnify said eye focus area based on eye focus area detected; and/or highlight said eye focus area based on intensity of light detected; wherein said eye focus area is current focused area of said user on said object displayed on said screen.
  • the present invention provides a method performed by an apparatus for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus.
  • the method comprises detecting, based on at least one sensor event from said sensor, an eye focus area of at least one user concentrating on said screen and an intensity of light nearby said apparatus, and thereby, automatically magnifying and/or highlighting said eye focus area, wherein said eye focus area is current focused area of said user on said object displayed on said screen, and said highlighting is based on said intensity of light nearby said apparatus.
  • the present invention provides a method performed by an apparatus for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus.
  • the method comprises:
  • ◦ detecting, using a detection module and upon receipt of at least one sensor event from said sensor, an eye focus area of at least one user concentrating on said screen, and an intensity of light nearby said apparatus;
  • said eye focus area is current focused area of said user on said object displayed on said screen.
  • the present invention provides a technical solution that solves a technical problem for a certain group of people who deal with documents and e-books in their daily business.
  • the solution reduces pain for a lot of other people as well, such as aged people who read newspapers every morning, or people who feel a lot of strain while reading any document on a device.
  • the present invention makes the user's life simpler by providing a feature, usable in the device, that can capture the eye focus of the user and highlight and magnify the surrounding area where the user is currently focusing while reading.
  • the present invention improves the user experience of reading documents on a smartphone, which is a very common operation for many different day-to-day activities like reading news articles, blogs, e-books, and the like.
  • a user needs to bring the device closer to his/her eyes so that the exact focus area of the object on which he/she is concentrating can be detected, to get the following effects:
  • Figure 1 illustrates an apparatus, having at least one sensor, for automatically magnifying and/or highlighting an object displayed on a screen, in accordance with an embodiment of the present subject matter.
  • Figure 2 illustrates a method performed by an apparatus for automatically magnifying and/or highlighting an object displayed on a screen, in accordance with an embodiment of the present subject matter.
  • Figures 3 and 4 illustrate the general overview of the present invention, in accordance with an embodiment of the present subject matter.
  • the invention can be implemented in numerous ways, including as a process, an apparatus, a system, a composition of matter, a computer readable medium such as a computer readable storage medium or a computer network wherein program instructions are sent over optical or electronic communication links.
  • these implementations, or any other form that the invention may take, may be referred to as techniques.
  • the order of the steps of disclosed processes may be altered within the scope of the invention.
  • an aim of the present invention is to provide an automated mechanism to magnify and highlight the objects on screen based on an eye focus of a user and/or based on the surrounding light, thereby reducing the feeling of strain while reading/viewing the objects.
  • Stella is very fond of writing blogs and reading them as well, but her daily routine is very hectic, so she can do that work only late at night. Doing so puts a lot of strain on her eyes, since she has to focus on the text and manually zoom in and out of the text, which is very small.
  • with the present invention, the text she is going through gets automatically highlighted and the screen zooms around that text, which makes her reading experience much better. Even if the surrounding light is poor, reading the text remains very comfortable as it does not strain her eyes.
  • This solution can be helpful to aged people or senior citizens who often have a habit of reading newspapers, where the printed text is usually very small for them.
  • the present invention will be helpful for such people, who find it very difficult to read small characters and have to exert a lot of effort to understand them.
  • a user needs to bring the device closer to his/her eyes so that the exact focus area of the object on which he/she is concentrating can be detected, to get the following effects:
  • the present invention detects an eye focus of the user to understand the current focused area on the screen and also detects the intensity of the surrounding light so as to magnify and highlight the text for the user accordingly.
  • the present invention enables the device to get the sensor events to detect the current eye focus and the surrounding light intensity.
  • the Android sensor events may be received by the device.
  • when the user brings the device closer to the eye, the device needs to continue receiving the sensor events in an Android service.
  • the options for magnifying/zooming and/or highlighting may be provided as configurable items to the user in a device settings page.
  • when the device (operating system), for example the Android system, receives the particular sensor events indicating that the device is very close to the user, the below-mentioned actions can be provided to the user:
  • the device should highlight the focused text in a manner compatible with the surrounding light.
  • a camera sensor captures the eye focus and sends that event, with the focused-area coordinates, to the device.
  • the camera sensors (3D sensors) may be used to recognize the light intensity of the surrounding area, which tells the highlighter how to highlight the text so that it becomes clearer to the user. So, after getting the focused area and the light intensity, the present invention zooms and/or highlights (if either of these options is enabled) the text which is currently in the focus area of the user.
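The decision step just described can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the function and parameter names, and the 50-lux darkness cutoff, are assumptions.

```python
def on_sensor_event(focus_coords, light_intensity_lux,
                    zoom_enabled=True, highlight_enabled=True):
    """Given the focused-area coordinates and the surrounding light
    intensity from sensor events, decide which display actions to apply
    (only options the user has enabled are acted upon)."""
    actions = {}
    if zoom_enabled:
        # Zoom the region currently in the user's eye focus area.
        actions["zoom_region"] = focus_coords
    if highlight_enabled:
        # Darker surroundings call for a stronger highlight.
        actions["highlight_level"] = "strong" if light_intensity_lux < 50 else "mild"
    return actions

# A dim room (20 lux) with focus on the rectangle (120, 340)-(480, 400):
print(on_sensor_event((120, 340, 480, 400), light_intensity_lux=20))
```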
  • the present invention is based on data-based magnification and highlighting, which is in contrast with the eye-tracking techniques already available in the prior art.
  • the present invention, while tracking the eye movement, marks a first frame position as the start point and then captures the position of the next frame. Each frame point is joined to the next to create a cord-like structure, which gives the direction in which the user is viewing. If the next pixel position calculated crosses a threshold value, the cord ends there and a new cord starts from the new pixel point.
  • the cord values are then given to the system of the device, so as to highlight that section and zoom it.
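The cord-building procedure above can be sketched as follows, assuming gaze samples arrive as (x, y) pixel positions; the Euclidean distance test and the 50-pixel threshold are illustrative assumptions, since the patent does not fix them.

```python
def build_cords(frame_points, threshold=50.0):
    """Join successive gaze frame positions into cord-like structures.

    A cord grows while each next point stays within `threshold` pixels of
    the previous one; when the jump crosses the threshold, the current
    cord ends and a new cord starts at the new pixel point."""
    cords, current = [], []
    for point in frame_points:
        if current:
            prev = current[-1]
            dist = ((point[0] - prev[0]) ** 2 + (point[1] - prev[1]) ** 2) ** 0.5
            if dist > threshold:      # jump too large: end the cord here
                cords.append(current)
                current = []
        current.append(point)         # start point of a cord, or extend it
    if current:
        cords.append(current)
    return cords

# Two reading sweeps separated by a large saccade back to a new line;
# each cord's first and last points give its start and end:
print(build_cords([(0, 0), (20, 0), (40, 2), (300, 40), (320, 40)]))
# → [[(0, 0), (20, 0), (40, 2)], [(300, 40), (320, 40)]]
```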
  • the present invention's feature can be added as a configuration item and may be enabled whenever required.
  • the amount of highlighting given to the text/image or any MIME type may be dependent on the light intensity of the surroundings.
  • Light intensity may be captured from the camera sensors.
  • a range may be provided to the user, within which the user may set the highlight value.
  • depending on the light intensity, the scale of highlighting will differ.
  • a setting in the device settings may be provided with which the user may set the scale by which to magnify the object he/she is focusing on, depending on the user's comfort.
  • the magnification may also differ based on the size of the device; for devices with a larger screen size, the magnification scale may be larger so that the content appears more clearly.
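These configurable settings might look roughly like the following sketch; the value ranges, the 7-inch cutoff, and the 1.5x boost for large screens are assumed example numbers, not values from the patent.

```python
HIGHLIGHT_RANGE = (1, 10)    # user-selectable highlight values (assumed)
MAGNIFY_RANGE = (1.0, 4.0)   # user-selectable zoom factors (assumed)

def clamp(value, lo, hi):
    """Keep a user-chosen value inside its provided range."""
    return max(lo, min(hi, value))

def effective_settings(user_highlight, user_magnify, screen_inches):
    """Resolve the highlight and magnification scales actually applied."""
    highlight = clamp(user_highlight, *HIGHLIGHT_RANGE)
    magnify = clamp(user_magnify, *MAGNIFY_RANGE)
    # Devices with larger screens can afford a larger magnification scale.
    if screen_inches >= 7.0:
        magnify = clamp(magnify * 1.5, *MAGNIFY_RANGE)
    return highlight, magnify

print(effective_settings(12, 2.0, screen_inches=8.0))  # → (10, 3.0)
```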
  • the present invention provides a mime-type based magnification which any application can customize depending on their usage.
  • Referring to Figure 1, an apparatus, having at least one sensor, for automatically magnifying and/or highlighting an object displayed on a screen, is illustrated in accordance with an embodiment of the present subject matter.
  • the apparatus 100 may also be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, and the like. It will be understood that the apparatus 100 may be accessed by multiple users through one or more user/electronic devices (not shown), referred to as users hereinafter, or applications residing on the user devices. Examples of the apparatus 100 and the user devices may include, but are not limited to, a portable computer, a personal digital assistant, a handheld device, and a workstation. The user devices are communicatively coupled to the apparatus 100 through a network.
  • the network may be a wireless network, a wired network or a combination thereof.
  • the network can be implemented as one of the different types of networks, such as intranet, local area network (LAN) , wide area network (WAN) , the internet, and the like.
  • the network may either be a dedicated network or a shared network.
  • the shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP) , Transmission Control Protocol/Internet Protocol (TCP/IP) , Wireless Application Protocol (WAP) , and the like, to communicate with one another.
  • the network may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
  • the apparatus 100 may include at least one processor 102, an input/output (I/O) interface 104, at least one sensor 106, and a memory 108.
  • the at least one processor 102 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions.
  • the at least one processor 102 is configured to fetch and execute computer-readable instructions that may be stored in the form of modules in the memory 108.
  • the I/O interface 104 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like.
  • the I/O interface 104 may allow the apparatus 100 to interact with a user directly or through the user/client devices. Further, the I/O interface 104 may enable the apparatus 100 to communicate with other computing devices, such as web servers and external data servers (not shown).
  • the I/O interface 104 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite.
  • the I/O interface 104 may include one or more ports for connecting a number of devices to one another or to another server.
  • the memory 108 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
  • the modules 110 include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types.
  • the modules 110 may include a detection module 112 and a magnify and highlight module 114.
  • the other modules may include programs or coded instructions that supplement applications and functions of the apparatus 100.
  • the present invention provides an apparatus, having at least one sensor, for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus.
  • the apparatus comprises detecting, based on at least one sensor event from said sensor, an eye focus area of at least one user concentrating on said screen and an intensity of light nearby said apparatus, and thereby automatically magnifying and/or highlighting said eye focus area, wherein said eye focus area is the current focused area of said user on said object displayed on said screen, and said highlighting is based on said intensity of light nearby said apparatus.
  • the present invention provides an apparatus, having at least one sensor, for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus.
  • the apparatus comprises a detection module, upon receipt of at least one sensor event from said sensor, configured to detect an eye focus area of at least one user concentrating on said screen, and detect an intensity of light nearby said apparatus.
  • the apparatus also comprises a magnify and highlight module configured to magnify said eye focus area based on eye focus area detected; and/or highlight said eye focus area based on intensity of light detected; wherein said eye focus area is current focused area of said user on said object displayed on said screen.
  • the present invention automatically magnifies and/or highlights said eye focus area of said user on said object displayed on said screen when said screen is at a predetermined distance from said user, wherein said predetermined distance is detected by at least one sensor, specifically by camera sensor(s), attached to said apparatus.
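The distance-based trigger might be sketched as follows; the 30 cm threshold is an assumed example of the "predetermined distance", which the patent leaves unspecified.

```python
PREDETERMINED_DISTANCE_CM = 30.0  # assumed example threshold

def should_auto_magnify(measured_distance_cm):
    """True when the user-to-screen distance reported by the camera or
    proximity sensors is at or below the predetermined distance."""
    return measured_distance_cm <= PREDETERMINED_DISTANCE_CM

print(should_auto_magnify(25.0))  # → True  (screen close to the eyes)
print(should_auto_magnify(45.0))  # → False (normal viewing distance)
```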
  • said eye focus area is detected, preferably by means of at least one 3D sensor, in the form of focused-area coordinates.
  • said intensity of light nearby said apparatus is detected, preferably by means of at least one proximity sensor attached to said apparatus, and specifically by means of a camera sensor(s).
  • the present invention receives said focused-area coordinates and said intensity-of-light data in the form of at least one sensor event, thereby magnifying and/or highlighting said eye focus area, which is the current focused area of said user on said object displayed on said screen.
  • the present invention is configured to receive sensor events when said screen is in proximity of said user.
  • the present invention provides at least one configurable item in a settings option, wherein said configurable item is at least one of a magnify (zoom) option to magnify said eye focus area, or a highlight option to highlight said eye focus area, or any combination thereof.
  • said highlight option comprises a range of highlight values selectable by said user.
  • said magnify (zoom) option comprises a range of magnify (zoom) values selectable by said user.
  • said eye focus area is detected based on at least two cord structures, each having a start point and an end point, obtained by:
  • ◦ marking, based on an eye movement of said user concentrating on said screen, a first frame position as a start point, and capturing a next succeeding frame position, wherein said first frame position and said next succeeding frame position comprise a plurality of frame points/pixels;
  • the present invention provides a MIME-type magnification and highlighting.
  • Figure 2 illustrates a method for automatically magnifying and/or highlighting an object displayed on a screen, in accordance with an embodiment of the present subject matter.
  • the method may be described in the general context of computer executable instructions.
  • computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types.
  • the method may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network.
  • computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
  • the present invention provides a method performed by an apparatus for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus.
  • the method comprises detecting, based on at least one sensor event from said sensor, an eye focus area of at least one user concentrating on said screen and an intensity of light nearby said apparatus, and thereby automatically magnifying and/or highlighting said eye focus area, wherein said eye focus area is the current focused area of said user on said object displayed on said screen, and said highlighting is based on said intensity of light nearby said apparatus.
  • the present invention provides a method performed by an apparatus for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus.
  • the method comprises:
  • ◦ detecting, using a detection module and upon receipt of at least one sensor event from said sensor, an eye focus area of at least one user concentrating on said screen, and an intensity of light nearby said apparatus;
  • said eye focus area is current focused area of said user on said object displayed on said screen.
  • an eye focus area of at least one user concentrating on said screen is detected using a detection module, and upon receipt of at least one sensor event from said sensor present in said apparatus 100.
  • an intensity of light nearby said apparatus is detected using a detection module, and upon receipt of at least one sensor event from said sensor present in said apparatus.
  • said eye focus area is magnified, based on the eye focus area detected, using the magnify and highlight module of said apparatus.
  • said eye focus area is highlighted, based on the intensity of light detected, using the magnify and highlight module of said apparatus.
  • said eye focus area is current focused area of said user on said object displayed on said screen.
  • said method may further include detection of a distance between said screen and said user, and if said distance detected is equal to or less than a predetermined distance, automatically magnifying and/or highlighting said eye focus area of said user on said object displayed on said screen.
  • said method may further include detection of said eye focus area, preferably by means of at least one 3D sensor and/or camera sensor, in the form of focused-area coordinates.
  • said method may further receive said focused-area coordinates and said intensity-of-light data in the form of at least one sensor event, thereby magnifying and/or highlighting said eye focus area, which is the current focused area of said user on said object displayed on said screen.
  • said method may further receive sensor events when said screen is in proximity of said user.
  • said method may further provide at least one configurable item in a settings option of said apparatus, wherein said configurable item is at least one of a magnify (zoom) option to magnify said eye focus area, or a highlight option to highlight said eye focus area, or any combination thereof.
  • said method may further include a highlight option comprising a range of highlight values selectable by said user.
  • said method may further include a magnify (zoom) option comprising a range of magnify (zoom) values selectable by said user.
  • said method may further include:
  • ◦ marking, based on an eye movement of said user concentrating on said screen, a first frame position as a start point, and detecting a next succeeding frame position, wherein said first frame position and said next succeeding frame position comprise a plurality of frame points/pixels;
  • Figures 3 and 4 illustrate the general overview of the present invention, in accordance with an embodiment of the present subject matter.
  • an example of the present invention implemented in a smartphone is provided.
  • the user is trying to read the content displayed on the screen of the mobile phone; however, he is having some difficulty reading the displayed content.
  • he tries to bring the smartphone closer to his eyes.
  • when this activity is detected by the smartphone, it tries to find the exact focus of the user on the screen and automatically zooms/magnifies the contents present in this detected exact focus.
  • the smartphone also detects the nearby light intensity and accordingly adjusts the light intensity of the smartphone.
  • the option/feature of magnifying/zooming and/or highlighting is a configurable and selectable option/feature, which may be provided in the settings page of the smartphone.
  • the present invention may be used in different scenarios and for different objects. A few of them are provided below for understanding purposes; however, it is to be noted and understood that these examples/scenarios shall not limit the scope of the present invention.
  • if the application of the present invention is more of a text-based application, then, using this feature, the text (a whole line or a set of lines) which the user is reading gets highlighted and magnified to the scale set by the user. For example, if a user is reading a mail, then the line read by the user, and the consecutive line, will be completely highlighted and magnified.
  • an image can be zoomed and highlighted. For example, if the user is viewing any image from the gallery, then the image can be zoomed to give more detail.
  • if any application is used for displaying graphs, then, depending on the user settings, a particular area in a graph or a particular segment can be shown more clearly. For example, if the user is viewing a pie chart, then only the majority and the minority sections can be highlighted and zoomed.
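The MIME-type based customization described above can be pictured as a simple dispatch table, where each content type chooses how its focus area is treated. All handler names and MIME keys here are illustrative assumptions, not part of the patent.

```python
def handle_text(area):
    return f"highlight and magnify the lines in {area}"

def handle_image(area):
    return f"zoom the image region {area} for more detail"

def handle_chart(area):
    return f"highlight and zoom the majority/minority segments in {area}"

# Each application can register its own handler for its MIME type.
MIME_HANDLERS = {
    "text/plain": handle_text,
    "text/html": handle_text,
    "image/jpeg": handle_image,
    "application/x-chart": handle_chart,  # assumed custom type
}

def apply_focus_effect(mime_type, focus_area):
    handler = MIME_HANDLERS.get(mime_type, handle_text)  # text as default
    return handler(focus_area)

print(apply_focus_effect("image/jpeg", (10, 10, 200, 200)))
# → zoom the image region (10, 10, 200, 200) for more detail
```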
  • the present invention provides a good solution for people who feel a lot of strain while keeping the device very close to the eyes.
  • the present invention is useful for short-sighted people and elderly people.

Abstract

A method and an apparatus for magnifying and highlighting objects on screens are disclosed. The apparatus has at least one sensor for automatically magnifying and/or highlighting an object displayed on a screen of the apparatus. The method comprises detecting, based on at least one sensor event from said sensor, an eye focus area of at least one user concentrating on said screen and an intensity of light nearby said apparatus, and thereby automatically magnifying and/or highlighting said eye focus area, wherein said eye focus area is the current focused area of said user on said object displayed on said screen, and said highlighting is based on said intensity of light nearby said apparatus.

Description

METHOD AND APPARATUS FOR MAGNIFYING AND/OR HIGHLIGHTING OBJECTS ON SCREENS
TECHNICAL FIELD
The present subject matter described herein, in general, relates to a data processing system or computing system comprising a display screen, and more particularly, the invention relates to a graphical user interface for displaying objects on the display screen, and systems, apparatus and methods for automatically changing the display magnification of the objects displayed on the display screen to a desired display magnification.
BACKGROUND
Data processing systems or computing systems, such as mobile phones, including but not limited to smartphones, personal digital assistants (PDAs), etc., are widely used by almost every individual today. These data processing or computing systems provide a graphical screen for displaying objects of a graphical user interface, such as text strings, hyperlinks, graphical buttons, etc. These data processing or computing systems are highly advanced and include various in-built sensors, such as camera sensors, accelerometer sensors, touch sensors, and the like.
Apart from browsing contents using these data processing or computing systems, they are also widely used for reading and watching videos/animations/digital contents. A user may read his/her e-mails, documents, and e-books, view an image/picture, view presentations/figures like pie charts, etc.
However, while reading/viewing, if the object/content displayed on screen is too small or not clearly visible to the user's eyesight, or if the surrounding light is not proper (sufficient enough to read/view), the user may need to strain his/her eyes for reading/viewing, ultimately affecting his/her eyesight.
In view of the above limitation, the prior art provides various solutions wherein, when a user is reading/viewing any object and wants to see it with more clarity, the user has to manually zoom in or zoom out on the content. Hence, a manual intervention is always required to magnify the contents, to make them more visible/clearer and avoid strain on the user's eyes. Also, if the surrounding light is not proper, it may result in straining of the eyesight; to avoid this, the user has to manually adjust the screen/display light and/or further magnify the object.
Hence, the major problem in the existing techniques available/provided in the prior-art is that a manual intervention is required for the user's reading/viewing convenience.
Thus, there is a need to provide an automated solution to the above-mentioned technical problem, for automatically magnifying and highlighting the objects which the user is currently viewing/reading, and thereby reducing the amount of strain on the eyes of the user.
SUMMARY
This summary is provided to introduce concepts related to a system, method and apparatus for magnifying and highlighting objects on screens, which are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining or limiting the scope of the claimed subject matter.
One aspect of the present invention is to provide a mechanism to magnify and/or highlight the objects on screen.
Another aspect of the present invention is to provide a mechanism to magnify and highlight the objects on screen based on the eye focus.
Another aspect of the present invention is to provide an automated mechanism to magnify and highlight the objects on screen based on the eye focus.
Another aspect of the present invention is to provide an automated mechanism to magnify and highlight the objects on screen based on the eye focus, thereby reducing the feel of strain while reading/viewing the objects.
Another aspect of the present invention is to provide an automated mechanism to magnify and highlight the objects on screen based on an eye focus of a user and/or based on the surrounding light, thereby reducing the feel of strain while reading/viewing the objects.
Accordingly, in one implementation, the present invention provides an apparatus, having at least one sensor, for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus. The apparatus comprises detecting, based on at least one sensor event from said sensor, an eye focus area of at least one user concentrating on said screen and an intensity of light nearby said apparatus, and thereby, automatically magnifying and/or highlighting said eye focus area, wherein said eye focus area is the current focused area of said user on said object displayed on said screen, and said highlighting is based on said intensity of light nearby said apparatus.
In one implementation, the present invention provides an apparatus, having at least one sensor, for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus. The apparatus comprises a detection module, upon receipt of at least one sensor event from said sensor, configured to detect an eye focus area of at least one user concentrating on said screen, and detect an intensity of light nearby said apparatus. The apparatus also comprises a magnify and highlight module configured to magnify said eye focus area based on eye focus area detected; and/or highlight said eye focus area based on intensity of light detected; wherein said eye focus area is current focused area of said user on said object displayed on said screen.
In one implementation, the present invention provides a method performed by an apparatus for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus. The method comprises detecting, based on at least one sensor event from said sensor, an eye focus area of at least one user concentrating on said screen and an intensity of light nearby said apparatus, and thereby, automatically magnifying and/or highlighting said eye focus area, wherein said eye focus area is current focused area of said user on said object displayed on said screen, and said highlighting is based on said intensity of light nearby said apparatus.
In one implementation, the present invention provides a method performed by an apparatus for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus. The method comprises:
●   detecting, using a detection module, and upon receipt of at least one sensor event from said sensor, an eye focus area of at least one user concentrating on said screen; and an intensity of light nearby said apparatus;
●   magnifying, using a magnify and highlight module, said eye focus area based on eye focus area detected; and
●   highlighting, using a magnify and highlight module, said eye focus area based on intensity of light detected;
●   wherein, said eye focus area is current focused area of said user on said object displayed on said screen.
The present invention provides a technical solution which solves a technical problem for a certain group of people who have their daily business with documents and e-books. The solution reduces pain for a lot of other people as well, like the aged people who read newspapers every morning, or the people who feel a lot of strain while reading any document on the device.
In one implementation, the present invention makes the user's life simpler, by providing a feature which can be used in the device to capture the eye focus of the user and highlight and magnify the surrounding area where the user is currently focusing while reading.
In one implementation, the present invention improves the user experience of reading documents on a smartphone, which is a very common operation done for many different activities of day-to-day life like reading news articles, blogs, e-books, and the like.
In one implementation, a user needs to get the device closer to his eyes to get the exact focus area of the object on which he/she is concentrating, to get the following effects:
a) The exact focus area of the object which comes under the focus area of the user will be zoomed to give more clarity.
b) The same exact focus area of the object will be highlighted depending upon the intensity of the surrounding light.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit (s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to like features and components.
Figure 1 illustrates an apparatus, having at least one sensor, for automatically magnifying and/or highlighting an object displayed on a screen, in accordance with an embodiment of the present subject matter.
Figure 2 illustrates a method performed by an apparatus for automatically magnifying and/or highlighting an object displayed on a screen, in accordance with an embodiment of the present subject matter.
Figure 3 and Figure 4 illustrate the general overview of the present invention, in accordance with an embodiment of the present subject matter.
It is to be understood that the attached drawings are for purposes of illustrating the concepts of the invention and may not be to scale.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
The following clearly describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely a part rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
The invention can be implemented in numerous ways, including as a process,  an apparatus, a system, a composition of matter, a computer readable medium such as a computer readable storage medium or a computer network wherein program instructions are sent over optical or electronic communication links. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention.
A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
Methods and an apparatus to magnify and highlight objects on a screen are disclosed.
While the aspects described for magnifying and highlighting objects on a screen may be implemented in any number of different computing systems, environments, and/or configurations, the embodiments are described in the context of the following exemplary systems, apparatus, and methods.
In one implementation, the present invention provides an automated mechanism to magnify and highlight the objects on screen based on an eye focus of a user and/or based on the surrounding light, thereby reducing the feel of strain while reading/viewing the objects.
The need for the present invention may be best explained with some real time user scenarios, and the invention may be very useful to the regular user. Some of the use cases are documented below for illustration. However, it is to be noted that the real time user scenarios are provided just for understanding purposes and shall not be considered as a limitation on the scope of the present invention.
USE CASE 1:
Stella is very fond of writing blogs and reading them as well, but her daily routine is very hectic, so she is able to do that work only late at night. Doing so puts a lot of strain on her eyes when focusing over the text, and she has to manually zoom in and zoom out of the text, which is very small.
According to the present invention, if Stella gets her phone close to her eyes, whatever text she is going through gets automatically highlighted and the screen zooms around that text, which makes her reading experience much better. Even if there is no light around, reading the text remains very comfortable, as it does not strain her eyes.
USE CASE 2:
The solution provided in the present invention is very helpful for students or people who are doing research work, for which they need to read a lot of articles daily.
For example, Smith is doing a research project for which he has to read many articles daily, day and night. Reading on the screen every time causes a lot of stress and reduces his efficiency as well. During night time, going through the documents becomes even more hectic if there is not enough light in the surrounding area. And when he has to manually zoom the screen to see the content more properly, he also has to manually keep dragging through the screen to see the full content.
So with the present invention, he can continue his reading in a normal way, and the present invention helps him go through the content in a much better and more efficient way, without putting much stress on the person's mind.
USE CASE 3:
This solution can be helpful to aged people or senior citizens who often have a habit of reading newspapers, where the written text is usually very small for them.
The present invention will be helpful for such people, who find it very difficult to read small characters and put a lot of effort into understanding them.
The above use cases are some instances where this solution will find its use. There are many other scenarios where this method can be applied for making the user's life easier.
In one implementation, a user needs to get the device closer to his eyes to get the exact focus area of the object on which he/she is concentrating, to get the following effects:
a) The exact focus area of the object which comes under the focus area of the user will be zoomed to give more clarity.
b) The same exact focus area of the object will be highlighted depending upon the intensity of the surrounding light.
In one implementation, the present invention detects an eye focus of the user to understand the current focused area on the screen and also detects the intensity of the surrounding light so as to magnify and highlight the text for the user accordingly.
In one implementation, the present invention enables the device to get the sensor events to detect the current eye focus and the surrounding light intensity. In one example, if the device is based on the Android operating system, Android sensor events may be received by the device. When the user gets the device closer to the eye, the device needs to continue to get the sensor events in an Android service.
In one implementation, the options for magnifying/zooming and/or highlighting may be provided as configurable items to the user in a device settings page.
In one implementation, when the device (operating system), for example the Android system, receives the particular sensor events indicating that the device is very close to the user, the below mentioned actions can be provided to the user:
a) If the user has enabled the highlight feature, then the device should highlight the focused text in a manner compatible with the surrounding light.
b) Also, it zooms the focused text to make it more clearly visible to the user.
In one implementation, when the device screen is very close to the user, a camera sensor captures the eye focus and sends that event along with the focused area coordinates. The camera sensors (3D sensors) may be used to recognize the light intensity of the surrounding area, which tells the highlighter how to highlight the text so that it becomes clearer to the user. So, after getting the focused area and the light intensity, the present invention zooms and/or highlights (if either of these options is enabled) the text which is currently in the focus area of the user.
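The detect-then-act flow described above may be sketched, for illustration only, as follows. All names here (the `handle_sensor_event` function, the settings keys, the lux threshold of 50) are hypothetical assumptions for the sketch, not part of any real device API:

```python
# Hypothetical sketch of the pipeline: given the focused-area coordinates
# and the ambient light intensity from the sensor event, decide which of
# the user-enabled actions (zoom, highlight) to apply.

def handle_sensor_event(focus_coords, light_intensity, settings):
    """Return the actions to apply for the current eye-focus area.

    focus_coords    -- (x, y, width, height) of the user's focus area
    light_intensity -- assumed ambient light reading, e.g. in lux
    settings        -- user-configurable options from the settings page
    """
    actions = []
    if settings.get("zoom_enabled"):
        actions.append(("zoom", focus_coords, settings.get("zoom_scale", 2.0)))
    if settings.get("highlight_enabled"):
        # Darker surroundings call for stronger highlighting (threshold assumed).
        strength = "strong" if light_intensity < 50 else "mild"
        actions.append(("highlight", focus_coords, strength))
    return actions

# Example: both options enabled, dim surroundings (30 lux).
actions = handle_sensor_event((10, 20, 100, 40), 30,
                              {"zoom_enabled": True, "highlight_enabled": True})
```

In this sketch the focused text would be both zoomed at the default scale and strongly highlighted, since the assumed ambient reading falls below the dim-light threshold.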
In one implementation, the present invention is based on data based magnification and highlighting, which is in contrast with the eye-tracking techniques already available in the prior-art.
In one implementation, while tracking the eye movement, the present invention marks a first frame position as the start point and then captures a position of the next frame. Each frame point is joined to the next to create a cord-like structure, which gives the direction in which the user is viewing. If the next pixel position crosses a threshold value, the cord ends there and a new cord starts with the new pixel point.
In one implementation, these cord values are given to the system of the device, so as to highlight that section and zoom it. This feature of the present invention can be added as a configuration item and may be enabled whenever required.
In one implementation, the amount of highlighting given to the text/image or any mime-type may depend on the light intensity of the surroundings. The light intensity may be captured from the camera sensors. Depending upon the surrounding light, a range may be provided to the user within which the user may set the highlight value. Depending on the surrounding light, the scale of highlighting will differ.
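One plausible mapping from surrounding light to a highlight value within the user-selected range is sketched below. The function name, the linear mapping, and the `max_lux` "bright enough" cutoff are all assumptions for illustration; the document does not specify the exact formula:

```python
def highlight_value(light_intensity, user_min, user_max, max_lux=300.0):
    """Map ambient light intensity to a highlight strength within the
    user-selected range [user_min, user_max]: the darker the
    surroundings, the stronger the highlight.

    max_lux is an assumed upper bound above which no extra
    highlighting is needed.
    """
    # Fraction of darkness: 1.0 in total darkness, 0.0 at or above max_lux.
    darkness = max(0.0, min(1.0, 1.0 - light_intensity / max_lux))
    return user_min + darkness * (user_max - user_min)
```

With a user range of 0.2 to 1.0, total darkness yields the strongest highlight (1.0), bright surroundings the weakest (0.2), and intermediate readings fall linearly in between.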
In one implementation, a setting may be provided in the device settings with which the user may set a scale by which to magnify the object he is focusing on, depending on the user's comfort.
In one implementation, the magnification may differ based on the size of the device. For devices with a larger screen size, the magnification scale may be greater so that the content appears more clearly.
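The screen-size adjustment described above might be realized as a simple scaling rule; the reference size and the linear factor below are illustrative assumptions, not specified by the document:

```python
def magnification_scale(base_scale, screen_diagonal_inches,
                        reference_inches=5.0):
    """Adjust the user-chosen magnification by the screen size: larger
    screens can afford a larger zoom, smaller screens keep the user's
    base scale. reference_inches is an assumed baseline device size."""
    factor = screen_diagonal_inches / reference_inches
    # Never shrink below the user's chosen base scale.
    return base_scale * max(1.0, factor)
```

For example, a 10-inch tablet would double a base scale of 2.0, while a 4-inch phone would keep it unchanged.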
In one implementation, the present invention provides a mime-type based magnification which any application can customize depending on their usage.
Referring now to figure 1, an apparatus, having at least one sensor, for automatically magnifying and/or highlighting an object displayed on a screen, is illustrated in accordance with an embodiment of the present subject matter.
Although the present invention is explained considering implementation as an apparatus 100, it may be understood that the apparatus 100 may also be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, and the like. It will be understood that the apparatus 100 may be accessed by multiple users through one or more user/electronic devices (not shown), referred to as user hereinafter, or applications residing on the user devices. Examples of the apparatus 100 and the user devices may include, but are not limited to, a portable computer, a personal digital assistant, a handheld device, and a workstation. The user devices are communicatively coupled to the apparatus 100 through a network.
In one implementation, the network may be a wireless network, a wired network or a combination thereof. The network can be implemented as one of the different types of networks, such as intranet, local area network (LAN) , wide area network (WAN) , the internet, and the like. The network may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP) , Transmission Control Protocol/Internet Protocol (TCP/IP) , Wireless Application Protocol (WAP) , and the like, to communicate with one another. Further the network may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
In one embodiment, the apparatus 100 may include at least one processor 102, an input/output (I/O) interface 104, at least one sensor 106, and a memory 108. The at least one processor 102 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the at least one processor 102 is configured to fetch and execute computer-readable instructions that may be stored in the form of module/s in the memory 108.
The I/O interface 104 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface 104 may allow the apparatus 100 to interact with a user directly or through the user/client devices. Further, the I/O interface 104 may enable the apparatus 100 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface 104 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface 104 may include one or more ports for connecting a number of devices to one another or to another server.
The memory 108 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory 108 may include modules 110 and a database (not shown).
The modules 110 include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types. In one implementation, the modules 110 may include a detection module 112 and a magnify and highlight module 114. The other modules (not shown) may include programs or coded instructions that supplement applications and functions of the apparatus 100.
In one implementation, the present invention provides an apparatus, having at least one sensor, for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus. The apparatus comprises detecting, based on at least one sensor event from said sensor, an eye focus area of at least one user concentrating on said screen and an intensity of light nearby said apparatus, and thereby, automatically magnifying and/or highlighting said eye focus area, wherein said eye focus area is current focused area of said user on said object displayed on said screen, and said highlighting is based on said intensity of light nearby said apparatus.
In one implementation, the present invention provides an apparatus, having at least one sensor, for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus. The apparatus comprises a detection module, upon receipt of at least one sensor event from said sensor, configured to detect an eye focus area of at least one user concentrating on said screen, and detect an intensity of light nearby said apparatus. The apparatus also comprises a magnify and highlight module configured to magnify said eye focus area based on eye focus area detected; and/or highlight said eye focus area based on intensity of light detected; wherein said eye focus area is current focused area of said user on said object displayed on said screen.
In one implementation, the present invention automatically magnifies and/or highlights said eye focus area of said user on said object displayed on said screen, when said screen is at a predetermined distance from said user, wherein said predetermined distance is detected by at least one sensor, specifically by camera sensor/s, attached to said apparatus. In one implementation, it may be understood by the person skilled in the art that in order to find the predetermined distance between the screen and the user, any of the existing sensors already available in the apparatus may be used.
In one implementation, said eye focus area is detected, preferably by means of at least one 3D sensor, in the form of focused area coordinates.
In one implementation, said intensity of light nearby said apparatus is detected, preferably by means of at least one proximity sensor/s attached to said apparatus, and specifically by means of a camera sensor/s. In one implementation, it may be understood by the person skilled in the art that in order to find the intensity of light, any of the existing sensors already available in the apparatus may be used.
In one implementation, the present invention receives said focused area coordinates and said intensity of light data in the form of at least one sensor event, thereby magnifying and/or highlighting said eye focus area, which is the current focused area of said user on said object displayed on said screen.
In one implementation, said eye focus area, which is the current focused area of said user on said object displayed on said screen, is magnified and/or highlighted.
In one implementation, the present invention is configured to receive sensor events when said screen is in proximity of said user.
In one implementation, the present invention provides at least one configurable item in a settings option, wherein said configurable item is at least one of a magnify (zoom) option to magnify said eye focus area, or a highlight option to highlight said eye focus area, or any combination thereof.
In one implementation, said highlight option comprises a range of highlight values selectable by said user.
In one implementation, said magnify (zoom) option comprises a range of magnify (zoom) values selectable by said user.
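The configurable items above (magnify/highlight toggles, each with a user-selectable range of values) could be modeled as in the following sketch. The class name, default values, and range bounds are illustrative assumptions, not part of the claims:

```python
from dataclasses import dataclass

# Assumed device-defined bounds for the selectable ranges.
ZOOM_RANGE = (1.0, 4.0)
HIGHLIGHT_RANGE = (0.0, 1.0)

@dataclass
class FocusAssistSettings:
    """Hypothetical model of the settings-page configurable items
    described above: enable flags plus values within offered ranges."""
    zoom_enabled: bool = False
    zoom_scale: float = 2.0
    highlight_enabled: bool = False
    highlight_value: float = 0.5

    def __post_init__(self):
        # Clamp user selections into the ranges offered on the settings page.
        self.zoom_scale = min(max(self.zoom_scale, ZOOM_RANGE[0]),
                              ZOOM_RANGE[1])
        self.highlight_value = min(max(self.highlight_value,
                                       HIGHLIGHT_RANGE[0]),
                                   HIGHLIGHT_RANGE[1])
```

Clamping at construction time keeps any stored setting within the range the settings page offers, even if a stale or corrupted value is loaded.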
In one implementation, said eye focus area is detected based on at least two cord structures, each having a start point and an end point, obtained by:
●   marking, based on an eye movement of said user concentrating on said screen, a first frame position, as a start point, and capturing a next succeeding frame position, wherein said first frame position and said next succeeding frame position comprise a plurality of frame points/pixels;
●   joining said plurality of frame points/pixels to create a first cord structure with said start point, wherein said first cord structure provides a direction in which said user is concentrating on said screen;
●   checking if a frame point/pixel crosses a threshold value, and if crossed, marking said frame point/pixel as the end point of said first cord structure; and
●   providing said first cord structure as sensor event to said apparatus so as to magnify and/or highlight said eye focus area; and
●   repeating the above steps to obtain a new cord structure.
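The cord-building steps above can be sketched as follows. This is a minimal illustration under stated assumptions: the function name is hypothetical, gaze positions are taken as (x, y) pairs, and the threshold is interpreted as a Euclidean distance between successive frame points (the document does not fix the distance metric):

```python
def build_cords(frame_points, threshold):
    """Group successive gaze frame points into 'cord' structures: a cord
    runs from its start point until the distance to the next point
    exceeds the threshold, at which point the cord ends and a new one
    begins with that point.

    frame_points -- list of (x, y) gaze positions, one per frame
    threshold    -- assumed Euclidean-distance cutoff between frames
    """
    cords = []
    if not frame_points:
        return cords
    current = [frame_points[0]]  # start point of the first cord
    for point in frame_points[1:]:
        px, py = current[-1]
        dist = ((point[0] - px) ** 2 + (point[1] - py) ** 2) ** 0.5
        if dist > threshold:
            cords.append(current)  # end the current cord here...
            current = [point]      # ...and start a new cord
        else:
            current.append(point)
    cords.append(current)
    return cords
```

Each resulting cord gives the direction of a continuous stretch of reading; a large jump between frames (e.g. the eye moving to a new line) starts a new cord.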
In one implementation, the present invention provides MIME-type based magnification and highlighting.
Referring now to figure 2, a method performed by an apparatus for automatically magnifying and/or highlighting an object displayed on a screen is illustrated, in accordance with an embodiment of the present subject matter. The method may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
The order in which the method is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method or alternate methods. Additionally, individual blocks may be deleted from the method without departing from the protection scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method may be considered to be implemented in the above described apparatus 100.
In one implementation, the present invention provides a method performed by an apparatus for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus. The method comprises detecting, based on at least one sensor event from said sensor, an eye focus area of at least one user concentrating on said screen and an intensity of light nearby said apparatus, and thereby, automatically magnifying and/or highlighting said eye focus area, wherein said eye focus area is current focused area of said user on said object displayed on said screen, and said highlighting is based on said intensity of light nearby said apparatus.
In one implementation, the present invention provides a method performed by an apparatus for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus. The method comprises:
●   detecting, using a detection module, and upon receipt of at least one sensor event from said sensor, an eye focus area of at least one user concentrating on said screen; and an intensity of light nearby said apparatus;
●   magnifying, using a magnify and highlight module, said eye focus area based on eye focus area detected; and
●   highlighting, using a magnify and highlight module, said eye focus area based on intensity of light detected;
●   wherein said eye focus area is current focused area of said user on said object displayed on said screen.
At block 202, an eye focus area of at least one user concentrating on said screen is detected using a detection module, and upon receipt of at least one sensor event from said sensor present in said apparatus 100.
At block 204, an intensity of light nearby said apparatus is detected using a detection module, and upon receipt of at least one sensor event from said sensor present in said apparatus.
At block 206, said eye focus area is magnified, based on the eye focus area detected, using the magnify and highlight module of said apparatus.
At block 208, said eye focus area is highlighted, based on the intensity of light detected, using the magnify and highlight module of said apparatus.
In one implementation, said eye focus area is current focused area of said user on said object displayed on said screen.
In one implementation, said method may further include detection of a distance between said screen and said user, and if said distance detected is equal to or less than a predetermined distance, automatically magnifying and/or highlighting said eye focus area of said user on said object displayed on said screen.
In one implementation, said method may further include detection of said eye focus area, preferably by means of at least one 3D sensor and/or camera sensor, in the form of focused area coordinates.
In one implementation, said method may further receive said focused area coordinates and said intensity of light data in the form of at least one sensor event, thereby magnifying and/or highlighting said eye focus area, which is the current focused area of said user on said object displayed on said screen.
In one implementation, said method may further receive sensor events when said screen is in proximity of said user.
In one implementation, said method may further provide at least one configurable item in a settings option of said apparatus, wherein said configurable item is at least one of magnify (zoom) option to magnify said eye focus area, or a highlight option to highlight said eye focus area, or any combination thereof.
In one implementation, in said method, said highlight option may comprise a range of highlight values selectable by said user.
In one implementation, in said method, said magnify (zoom) option may comprise a range of magnify (zoom) values selectable by said user.
In one implementation, said method may further include:
●   detecting said eye focus area based on at least two cord structures, each having a start point and an end point, obtained by:
●   marking, based on an eye movement of said user concentrating on said screen, a first frame position, as a start point, and detecting a next succeeding frame position,  wherein said first frame position and said next succeeding frame position comprises a plurality of frame points/pixels;
●   joining said plurality of frame points/pixels to create a first cord structure with said start point, wherein said first cord structure provides a direction in which said user is concentrating on said screen;
●   checking, if a frame points/pixels crosses a threshold value, and if crossed, marking said frame points/pixels as end point of said first cord structure; and
●   providing said first cord structure as sensor event to said apparatus so as to magnify and/or highlight said eye focus area; and
●   repeating above steps to obtain a new cord structure.
Figure 3 and Figure 4 illustrate the general overview of the present invention, in accordance with an embodiment of the present subject matter.
As shown in figure 3, a user is reading some content having the text "ALICE", "BOB", and "MARK" on the display screen of a device/apparatus. The user is reading a specific content / concentrating on "BOB"; however, the user is facing some difficulty in reading this particular content. So, as per the present invention, when the user brings the device/apparatus closer to his eyes, the device (having the present invention embedded in it) will automatically zoom this particular content. Further, the device/apparatus may also highlight this content. The highlighting and the zooming are features which may be optionally selected by the user from the settings option.
As shown in Figure 4, the present invention implemented in a smartphone is provided. Here, the user is trying to read the content displayed on the screen of the mobile phone; however, he is having some difficulty in reading it. To avoid straining his eyes, he brings the smartphone closer to his eyes. As soon as this activity is detected by the smartphone, it finds the exact focus of the user on the screen and automatically zooms/magnifies the contents present in this detected exact focus. If the user has enabled a feature/option of highlighting the content in the exact focus of his eye, the smartphone detects the nearby light intensity and accordingly adjusts the light intensity of the smartphone. The option/feature of magnifying/zooming and/or highlighting is a configurable and selectable option/feature which may be provided in the settings page of the smartphone.
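The Figure 4 flow described above can be illustrated by the following sketch; the trigger distance, the sensor-event fields, and the linear mapping from ambient light to screen brightness are assumptions made for this example, not values taken from the disclosure:

```python
# Minimal sketch of the smartphone flow of Figure 4: when the device comes
# within a preset distance of the user's eyes, magnify the detected eye
# focus area and, if highlighting is enabled, adjust brightness based on
# the nearby light intensity.

PROXIMITY_CM = 20.0  # assumed trigger distance

def on_sensor_event(distance_cm, focus_area, ambient_lux, highlight_on):
    """Return the actions the device would take for one sensor event."""
    if distance_cm > PROXIMITY_CM:
        return {}  # device not close enough: do nothing
    actions = {"zoom": focus_area}  # magnify the focused area coordinates
    if highlight_on:
        # brighter surroundings -> brighter highlight (simple linear map)
        actions["brightness"] = min(1.0, ambient_lux / 1000.0)
    return actions

result = on_sensor_event(15.0, (120, 80, 200, 120), ambient_lux=400.0,
                         highlight_on=True)
print(result)  # {'zoom': (120, 80, 200, 120), 'brightness': 0.4}
```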
In one implementation, the present invention may be used in different scenarios and for different objects. A few of them are provided below for illustrative purposes; however, it is to be noted and understood that these examples/scenarios shall not limit the scope of the present invention.
If the application of the present invention is more of a text based application, then using this feature, the text (whole line or set of lines) which the user is reading will be highlighted and magnified to the scale set by the user. For example, if a user is reading a mail, then the line read by the user, and the consecutive line, will be completely highlighted and magnified.
If any application makes greater use of the image MIME type, then depending on the user settings, the image can be zoomed and highlighted. For example, if the user is viewing any image from the gallery, then the image can be zoomed to give more detail.
If any application is used for displaying graphs, then depending on the user settings, a particular area in a graph or a particular segregation can be defined more clearly. For example, if the user is viewing any pie chart, then only the majority and the minority sections can be highlighted and zoomed.
If any Excel based application is used, then this feature can highlight a particular set/combination of cells and magnify them.
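The MIME-type dependent behaviour in the scenarios above might, as one hypothetical sketch, dispatch on the content type as follows; the handler names and the specific MIME strings chosen are illustrative assumptions:

```python
# Hypothetical dispatch of the same eye-focus event to different
# magnify/highlight behaviours depending on the content MIME type.

def handle_focus(mime_type, focus_area, zoom):
    """Pick a magnify/highlight action based on the content MIME type."""
    if mime_type.startswith("text/"):
        return ("highlight_lines", focus_area, zoom)   # whole line(s)
    if mime_type.startswith("image/"):
        return ("zoom_image", focus_area, zoom)        # more detail
    if mime_type == "application/vnd.ms-excel":
        return ("highlight_cells", focus_area, zoom)   # cell range
    return ("zoom_region", focus_area, zoom)           # generic fallback

print(handle_focus("text/plain", (0, 0, 100, 20), 2.0)[0])  # highlight_lines
```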
Exemplary embodiments discussed above may provide certain advantages. Though not required to practice aspects of the disclosure, these advantages may include:
1. The present invention provides a good solution for people who feel a lot of strain while keeping the device very close to the eyes.
2. The present invention is useful for short-sighted people and elderly people.
Although implementations for a system, method and apparatus for magnifying and highlighting objects on screens have been described in language specific to structural  features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of implementations of a system, method and apparatus for magnifying and highlighting objects on screens.

Claims (25)

  1. An apparatus, having at least one sensor, for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus, configured to:
    detect, based on at least one sensor event from said sensor, an eye focus area of at least one user concentrating on said screen and an intensity of light nearby said apparatus, and thereby, automatically magnify and/or highlight said eye focus area, wherein said eye focus area is the current focused area of said user on said object displayed on said screen, and said highlighting is based on said intensity of light nearby said apparatus.
  2. An apparatus, having at least one sensor, for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus, comprising:
    a detection module, upon receipt of at least one sensor event from said sensor, configured to:
    detect an eye focus area of at least one user concentrating on said screen;
    detect an intensity of light nearby said apparatus; and
    a magnify and highlight module configured to:
    magnify said eye focus area based on eye focus area detected; and/or
    highlight said eye focus area based on the intensity of light detected; wherein said eye focus area is the current focused area of said user on said object displayed on said screen.
  3. The apparatus as claimed in claims 1 and 2 automatically magnifies and/or highlights said eye focus area of said user on said object displayed on said screen when said screen is at a predetermined distance from said user, wherein said predetermined distance is detected by at least one sensor, preferably by a camera sensor/s, attached to said apparatus.
  4. The apparatus as claimed in claims 1 and 2, wherein said eye focus area is detected, preferably by means of at least one 3D sensor, in the form of focused area coordinates.
  5. The apparatus as claimed in claims 1 and 2, wherein said intensity of light nearby said apparatus is detected, preferably by means of at least one proximity sensor/s attached to said apparatus, and specifically by means of a camera sensor/s.
  6. The apparatus as claimed in claims 4 and 5 receives said focused area coordinates and said intensity of light data in the form of at least one sensor event, thereby magnifying and/or highlighting said eye focus area, which is the current focused area of said user on said object displayed on said screen.
  7. The apparatus as claimed in claims 1 and 2, wherein said eye focus area, which is said current focused area of said user on said object displayed on said screen, is magnified and/or highlighted.
  8. The apparatus as claimed in claims 1 and 2 is configured to receive sensor events when said screen is in proximity of said user.
  9. The apparatus as claimed in claims 1 and 2 comprises at least one configurable item in a settings option, wherein said configurable item is at least one of magnify (zoom) option to magnify said eye focus area, or a highlight option to highlight said eye focus area, or any combination thereof.
  10. The apparatus as claimed in claim 9, wherein said highlight option comprises a range of highlight values selectable by said user.
  11. The apparatus as claimed in claim 9, wherein said magnify (zoom) option comprises a range of magnify (zoom) values selectable by said user.
  12. The apparatus as claimed in claims 1 and 2, wherein said eye focus area is detected based on at least two cord structures, each having a start point and an end point, and to detect said cord structures said apparatus is configured to:
    mark, based on an eye movement of said user concentrating on said screen, a first frame position, as a start point, and capture a next succeeding frame position, wherein said first frame position and said next succeeding frame position comprise a plurality of frame points/pixels;
    join said plurality of frame points/pixels to create a first cord structure with said start point, wherein said first cord structure provides a direction in which said user is concentrating on said screen;
    check if a frame point/pixel crosses a threshold value, and if crossed, mark said frame point/pixel as the end point of said first cord structure;
    provide said first cord structure as a sensor event to said apparatus so as to magnify and/or highlight said eye focus area; and
    repeat the above steps to obtain a new cord structure.
  13. The apparatus as claimed in claims 1 and 2 provides a MIME-type magnification and highlighting.
  14. A method performed by an apparatus for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus, comprising:
    detecting, based on at least one sensor event from said sensor, an eye focus area of at least one user concentrating on said screen and an intensity of light nearby said apparatus, and thereby, automatically magnifying and/or highlighting said eye focus area, wherein said eye focus area is the current focused area of said user on said object displayed on said screen, and said highlighting is based on said intensity of light nearby said apparatus.
  15. A method performed by an apparatus for automatically magnifying and/or highlighting an object displayed on a screen of said apparatus, comprising:
    detecting, using a detection module, and upon receipt of at least one sensor event from said sensor,
    an eye focus area of at least one user concentrating on said screen; and
    an intensity of light nearby said apparatus;
    magnifying, using a magnify and highlight module, said eye focus area based on eye focus area detected; and/or
    highlighting, using a magnify and highlight module, said eye focus area based on intensity of light detected;
    wherein said eye focus area is the current focused area of said user on said object displayed on said screen.
  16. The method as claimed in claims 14 and 15 comprises detecting a distance between said screen and said user and, if said distance detected is equal to or less than a predetermined distance, automatically magnifying and/or highlighting said eye focus area of said user on said object displayed on said screen, wherein said predetermined distance is detected by at least one sensor, specifically by camera sensor/s, attached to said apparatus.
  17. The method as claimed in claims 14 and 15 comprises detecting said eye focus area, preferably by means of at least one 3D sensor or a camera sensor, in the form of focused area coordinates.
  18. The method as claimed in claims 14 and 15 comprises detecting said intensity of light nearby said apparatus, preferably by means of at least one proximity sensor/s attached to said apparatus, and specifically by means of a camera sensor/s.
  19. The method as claimed in claims 14 and 15 comprises receiving said focused area coordinates and said intensity of light data in the form of at least one sensor event, thereby magnifying and/or highlighting said eye focus area, which is the current focused area of said user on said object displayed on said screen.
  20. The method as claimed in claims 14 and 15 comprises, receiving sensor events when said screen is in proximity of said user.
  21. The method as claimed in claims 14 and 15 comprises, providing at least one configurable item in a settings option of said apparatus, wherein said configurable item is at least one of magnify (zoom) option to magnify said eye focus area, or a highlight option to highlight said eye focus area, or any combination thereof.
  22. The method as claimed in claim 21, wherein said highlight option comprises a range of highlight values selectable by said user.
  23. The method as claimed in claim 21, wherein said magnify (zoom) option comprises a range of magnify (zoom) values selectable by said user.
  24. The method as claimed in claims 14 and 15, comprises detecting said eye focus area based on at least two cord structures, each having a start point and an end point, obtained by:
    marking, based on an eye movement of said user concentrating on said screen, a first frame position, as a start point, and detecting a next succeeding frame position, wherein said first frame position and said next succeeding frame position comprise a plurality of frame points/pixels;
    joining said plurality of frame points/pixels to create a first cord structure with said start point, wherein said first cord structure provides a direction in which said user is concentrating on said screen;
    checking if a frame point/pixel crosses a threshold value, and if crossed, marking said frame point/pixel as the end point of said first cord structure;
    providing said first cord structure as a sensor event to said apparatus so as to magnify and/or highlight said eye focus area; and
    repeating the above steps to obtain a new cord structure.
  25. The method as claimed in claims 14 and 15 comprises providing a MIME-type magnification and highlighting.