WO2007000743A2 - In-zoom gesture control for display mirror - Google Patents

In-zoom gesture control for display mirror

Info

Publication number
WO2007000743A2
Authority
WO
WIPO (PCT)
Prior art keywords
gesture
mirror
region
interest
display screen
Prior art date
Application number
PCT/IB2006/052142
Other languages
French (fr)
Other versions
WO2007000743A3 (en)
Inventor
Tatiana A. Lashina
Talitha C.B. Boom
Original Assignee
Koninklijke Philips Electronics, N.V.
U.S. Philips Corporation
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics, N.V. and U.S. Philips Corporation
Publication of WO2007000743A2 (en)
Publication of WO2007000743A3 (en)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour


Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)

Abstract

The inventive apparatus is operable to modify an image captured by a camera and displayed on a display screen located adjacent to a non-viewing side of a polarizing mirror. The apparatus has a touch sensing or touchless detecting mechanism operative to identify a user gesture locating a region of interest, and a processor operative to determine parameters of the gesture and to enlarge the region of interest based on the determined parameters.

Description

IN-ZOOM GESTURE CONTROL FOR DISPLAY MIRROR
This application claims priority to US Provisional Application Ser. No. 60/694,568, filed on June 28, 2005 and fully incorporated herein by reference.
The invention relates to a device and method for modifying an image on a mirror display having a camera. In particular, the invention relates to a method and apparatus for detecting a gesture made by a user at the location of a region of interest of the image on the mirror display, by means of various touch sensing and/or touchless systems, and for interpreting this gesture as a command to enlarge (zoom in) or reduce (zoom out) the region of interest.
Systems including a mirror and at least one display have been known and successfully used in the area of personal and health care. Typically, these systems comprise a hardware architecture including a mirror, at least one display device, one or more cameras, and a microcomputer such as a PC that runs dedicated system software. Such a system allows a user to magnify or reduce the captured image displayed on the mirror either optically or digitally; in a magnifying mode, the system thus operates similarly to a magnifying mirror.
The mirror of the systems referred to above is configured either as a semitransparent mirror or as a polarized (cholesteric) reflective foil. In use, a user selects a region of a video image that is then zoomed in or out by manipulating the camera input and showing the result on the display.
The above-discussed systems typically require that a user locate a region of interest by physically delimiting this region on the mirror. However, for a user who is physically impaired, for example by poor sight, delimiting the region of interest in this way may be inconvenient.
A need, therefore, exists for a display mirror system that selects a region of interest in a video image shown on a display mirror in response to a gesture made by a user. A further need exists for a display mirror system operable to detect and locate a gesture made by a user at the location of a region of interest of the image shown on the display mirror.
These needs are met by the apparatus and method configured in accordance with the present invention. The apparatus has, among other things, a camera operative to capture an image and display it on a display screen visible to a user through a mirror. In addition, the apparatus has a detecting mechanism, enabled by a touch sensing mechanism, a touchless mechanism, or a combination of these, that allows for detection of a user gesture at the location of a region of interest of the captured video image. Software executed by a central processing unit is operative to determine the parameters of the gesture, which include the gesture's vertical and/or horizontal dimensions, and to locate the region of interest to be zoomed in or out on the display screen. A gesture made by the user of the inventive apparatus may be substantially L-shaped, rectangularly shaped, or circularly shaped. When an L-shaped or rectangularly shaped gesture is used, one of the upper and lower corners of the gesture corresponds to the 0,0 point of the image, while the vertical and horizontal dimensions of the gesture define a rectangle whose area corresponds to the desired region. If the user's gesture is circular, the inventive apparatus is operable to configure a virtual rectangle with the circular gesture concentrically inscribed in it.
In accordance with one aspect of the invention, the vertical/horizontal aspect ratio of a zoomed-in region of interest of the image is fixed. One advantage of this feature is the ability of the inventive apparatus to zoom in on the selected region so that the region fills up the entire screen.
A further aspect of the invention includes providing a thumbnail of the originally captured image along the periphery of the screen, which is otherwise fully occupied by the zoomed-in region of interest. The inventive apparatus is configured to allow the user to re-display the original image and select a new region of interest, if needed, upon activating the thumbnail.
The inventive method provides for capturing an image by a camera, sending the captured image via a central processing unit to a display device, and displaying the captured image on the display device, wherein the displayed image is visible through a mirror. Subsequently, the inventive method includes detecting and interpreting a user gesture at the location of a region of interest of the displayed image, determining the region of interest based on the detected gesture, and zooming in on or out of the region of interest.
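For orientation only, the following is a minimal sketch of that control flow in Python. The helper names (capture_frame, poll_gesture, roi_from_gesture, show) and the 30 fps refresh are illustrative assumptions, not part of the disclosure; the steps they stand in for are detailed below with reference to FIG. 9.

```python
import time
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

# Hypothetical stand-ins for camera 14, detecting mechanism 30 and display 16.
def capture_frame():
    return "frame"                      # placeholder frame object

def poll_gesture():
    return None                         # None means no user input yet

def roi_from_gesture(gesture) -> Rect:
    return Rect(0.0, 0.0, 1.0, 1.0)     # placeholder region of interest

def show(frame, roi=None, thumbnail=False):
    pass                                # placeholder rendering call

def main_loop():
    while True:
        frame = capture_frame()         # capture the image via the camera
        gesture = poll_gesture()
        if gesture is None:             # keep waiting for a user gesture
            show(frame)
        else:                           # locate the ROI and zoom, with thumbnail
            show(frame, roi=roi_from_gesture(gesture), thumbnail=True)
        time.sleep(1 / 30)              # assumed 30 fps refresh
```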
The above and other features and advantages of the inventive method and apparatus will become more readily apparent from the following description taken in conjunction with the drawings, in which:
FIG. 1 is a diagrammatic view of an apparatus for displaying a region of interest of a captured image configured in accordance with the invention;
FIG. 2 is a front diagrammatic view of an IR touch sensing mechanism;
FIG. 3 is a front diagrammatic view of a capacitive touch sensing mechanism;
FIG. 4 is a front diagrammatic view of an acoustic touch sensing mechanism;
FIG. 5 is a front diagrammatic view of a resistive touch sensing mechanism;
FIG. 6 is a front diagrammatic view of an inductive touch sensing mechanism;
FIG. 7 is a front diagrammatic view of a touchless detection mechanism;
FIG. 8 is a schematic representation of gestures included in an interactive technique for communication between a user and the inventive apparatus of FIG. 1;
FIG. 9 is a flow chart of instructions for automatically adjusting the size of the selected region of the captured image; and
FIGS. 10A and 10B are views of an originally captured image and an enlarged region of interest, respectively, wherein the enlarged region is determined by using the inventive apparatus of FIG. 1.
Reference will now be made in detail to several embodiments of the invention that are illustrated in the accompanying drawings. Wherever possible, same or similar reference numerals are used in the drawings and the description to refer to the same or like parts or steps. The drawings are in simplified form and are not to precise scale. The words "connect," "couple," and similar terms do not necessarily denote direct and immediate connections, but also include connections through intermediate elements or devices.
Referring to FIG. 1, an apparatus 10 is configured in accordance with the invention and includes one or more cameras 14 operable to capture an image of a user 18, one or more display devices 16, and a polarizing mirror 12, which is either a semitransparent mirror or a polarized reflective foil. The apparatus 10 is operable to display an image captured by camera 14, such as a CCD or CMOS camera, on a screen of display 16, which is juxtaposed with the non-viewing side of mirror 12. In response to the user's gesture indicating a region of interest 120 (FIGS. 10A and 10B) of the displayed image, detected by a detecting mechanism 30, software executed by a central processing unit (CPU) 20 is operable to zoom the selected region in or out on the mirror. The CPU 20 is a conventional computer provided with a storage device.
The mirror 12 can be fixed or hand-held, and its rear, non-viewing side is positioned so that at least a part of it is located adjacent to display device 16. The display device 16 is, for example, a liquid crystal display device having a liquid crystal material sandwiched between two substrates (glass, plastic, or any other suitable material). Alternatively, display device 16 can be structured as an (O)LED, E-ink, plasma screen, or other similar display. The polarizing mirror 12 is configured to reflect light of a first kind of polarization to the viewing side and to pass light of a second kind of polarization, while the single or multiple display devices 16 provide light of the second kind of polarization.
The detecting mechanism 30, shown highly diagrammatically in FIG. 1 and operable to detect and interpret the user's gestures, may include various touch sensing and touchless systems incorporated into the inventive apparatus.
The touch sensing systems are generally divided into two broad categories: passive and active, as discussed immediately below.
FIG. 2 illustrates one of the inventive embodiments of apparatus 10, which is provided with an infrared passive touch sensing mechanism 45. The mirror 12 is surrounded by a frame 40 that has multiple infrared LED transmitters 44 lining two borders of the surface of frame 40, and photoreceptor receivers 46 substantially aligned with respective transmitters 44 along the opposite borders of the frame. When the user's finger comes between the transmitters and receivers while touching a region of interest of the displayed image, it breaks the infrared path and a contact point is registered. Registering a contact corresponds to an input signal processed by CPU 20, and software executed on CPU 20 is operable to delimit and either zoom in or zoom out the region of interest by relaying the processed information to display 16, as will be explained below. The frame 40 may adjoin the periphery of mirror 12, or, alternatively, it may be spaced from the periphery (not shown).

FIG. 3 illustrates another embodiment of apparatus 10 including a capacitive touch sensing mechanism 55. The viewing side of mirror 12 has a layer (not shown) that stores an electrical charge. When the user touches the layer with his or her finger, a capacitive coupling occurs between the user's finger and the layer, so that the charge on the capacitive layer decreases. This decrease is measured in circuits 50 located at each corner of mirror 12 and, since it is proportional to the distance to the finger, the finger's two-dimensional coordinates on the screen surface are determined and sent to CPU 20. Upon processing, the user gesture is interpreted so that the region of interest of the displayed image is initially determined and then enlarged or reduced in accordance with the user's command.

FIG. 4 illustrates still another embodiment of the inventive apparatus utilizing the passive touch technology, which includes an acoustic touch sensing mechanism 65 configured with multiple piezoelectric transducers 52, 54 (one receiving and one sending) placed along the X and Y axes of mirror 12, respectively. The CPU 20 is operable to send an electrical signal to the transmitting transducer 54, which converts the signal into surface waves. These mechanical waves, generated along the screen sides, are then reflected to propagate across the screen surface along equidistant lines so that the sound waves can be measured by receiving transducer 52, which reconverts them into an electrical signal. When the viewing side of mirror 12 is touched, a portion of the mechanical wave is absorbed, thus changing the received signal. The signal is then compared to a stored reference signal by software executed by CPU 20, the change is recognized, and a coordinate is calculated. This process happens independently for both the X and Y axes. Accordingly, a touch location is determined by translating the acoustic wave amplitude into touch coordinates, providing reliable and accurate "no-drift" operation and resulting in graphically determining a region of interest, as will be explained below.
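Returning to the infrared mechanism of FIG. 2, the following is a hedged sketch of how a beam-break grid could yield a contact point. The beam counts, the boolean encoding, and the averaging of adjacent broken beams are assumptions for illustration; the patent does not specify the decoding logic.

```python
from typing import Optional, Sequence, Tuple

def contact_point(x_beams: Sequence[bool], y_beams: Sequence[bool],
                  width_mm: float, height_mm: float) -> Optional[Tuple[float, float]]:
    """Return the (x, y) of a touch; True means 'this beam is interrupted'."""
    broken_x = [i for i, b in enumerate(x_beams) if b]
    broken_y = [i for i, b in enumerate(y_beams) if b]
    if not broken_x or not broken_y:
        return None      # nothing between transmitters 44 and receivers 46
    # Centre of the interrupted beams, scaled to mirror coordinates.
    cx = (sum(broken_x) / len(broken_x) + 0.5) * width_mm / len(x_beams)
    cy = (sum(broken_y) / len(broken_y) + 0.5) * height_mm / len(y_beams)
    return cx, cy

# Example: on an 8 x 8 beam grid over a 320 mm x 240 mm mirror, a finger
# breaking horizontal beam 3 and vertical beam 5 registers near (140, 165).
print(contact_point([i == 3 for i in range(8)], [i == 5 for i in range(8)], 320, 240))
```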
Still another embodiment of inventive apparatus 10 utilizing a passive touch sensing technology may be configured with a resistive touch sensing system 75. Shown highly diagrammatically in FIG. 5, mirror 12 may be covered with an overlay composed of two layers 60 and 62, usually a PET film and a glass or acrylic panel, such that the inside surface of each layer is coated with a transparent conductive material such as ITO. These two layers are held apart by spacers (not shown). An electrical current runs through the two layers 60, 62. When user 18 touches mirror 12, the two layers make contact in that exact spot. The change in the electrical field is noted by CPU 20, and the coordinates of the point of contact are calculated by respective software executed by CPU 20. Once the coordinates are known, a special driver translates the touch into data understood by the operating system of CPU 20.
As shown in FIG. 6, the active touch screen mechanisms 95 include ultrasonic, infrared, electromagnetic, and laser/bar code systems. The ultrasonic and infrared system includes a plurality of infrared sensors and ultrasonic technology tracking a battery-operated pen that touches mirror 12. The laser/bar code system typically includes a laser detecting a reflective, bar-coded sleeve on a pen that touches the mirror. The electromagnetic system, also known as pen sensing, is typically configured with a battery-powered pen or a pen provided with an LC resonance coil 63 and a pressure sensor 67.

FIG. 7 illustrates an example of touchless technologies 85, which are based on digital cameras, such as cameras 64 located between the user (not shown) and mirror 12 and spaced from one another at a fixed distance so as to constantly scan the surface of mirror 12. The cameras 64 may be pivotally mounted to the corners of mirror 12 or to any other support in a position allowing the cameras to have a clear field of scanning covering the entire surface of the mirror that needs to be interactive. When cameras 64 each detect a touch point, CPU 20 calculates the angle at which the detection occurs and processes the received information so as to triangulate the location of the contact point based on the signals received from each camera. Upon detecting a user gesture by the touchless mechanism, software executed by CPU 20 is operable to graphically delimit a region of interest and further either enlarge or reduce the determined region.
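The triangulation described for FIG. 7 can be illustrated with elementary geometry. The sketch below assumes the two cameras sit at the ends of a known baseline along the top edge of the mirror and that each reports the angle between that edge and its line of sight to the fingertip; these coordinate conventions are illustrative, not taken from the disclosure.

```python
import math

def triangulate(angle_left_deg: float, angle_right_deg: float,
                baseline_mm: float) -> tuple:
    """Intersect the two lines of sight: tan(a) = y/x and tan(b) = y/(baseline - x)."""
    a = math.radians(angle_left_deg)     # angle measured at the left corner camera
    b = math.radians(angle_right_deg)    # angle measured at the right corner camera
    x = baseline_mm * math.tan(b) / (math.tan(a) + math.tan(b))
    y = x * math.tan(a)
    return x, y

# A touch seen at 45 degrees by both cameras lies at mid-baseline:
print(triangulate(45.0, 45.0, 500.0))    # approximately (250.0, 250.0)
```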
An alternative method and mechanism for touchless detection of the user's input can be realized by cross-capacitive sensing. A cross-capacitive sensor is composed of a matrix of transmitter-receiver pairs. Each pair forms a capacitor, with a capacitive current in between. If a hand is placed near the electrodes, the capacitive current decreases.
Different configurations are possible. For example, electrodes can be positioned around the screen so as to form an n² matrix of electrode pairs at many separations. Alternatively, an active matrix of transmitters/receivers made of transparent electronics can be integrated into the glass itself. Such a cross-capacitive sensor can be used to detect not only the x, y position but also finger proximity to the screen; as such, it does not require physically touching the screen and enables gestures executed in proximity to the screen surface.
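As a rough illustration of cross-capacitive sensing, the sketch below flags transmitter-receiver pairs whose current drops relative to a baseline and estimates the finger position as a drop-weighted centroid of the flagged pairs. The electrode layout, the threshold, and the centroid heuristic are assumptions, not details from the disclosure.

```python
from typing import Dict, Optional, Tuple

XY = Tuple[float, float]
Pair = Tuple[XY, XY]                     # (transmitter position, receiver position)

def locate_finger(baseline: Dict[Pair, float], measured: Dict[Pair, float],
                  drop_threshold: float = 0.10) -> Optional[Tuple[XY, float]]:
    """Return ((x, y), strength) or None; strength grows as the finger nears."""
    hits = []
    for pair, i0 in baseline.items():
        drop = (i0 - measured[pair]) / i0          # relative current decrease
        if drop > drop_threshold:                  # a hand lowers the capacitive current
            (tx, ty), (rx, ry) = pair
            hits.append(((tx + rx) / 2, (ty + ry) / 2, drop))
    if not hits:
        return None
    total = sum(d for _, _, d in hits)
    x = sum(mx * d for mx, _, d in hits) / total   # drop-weighted centroid
    y = sum(my * d for _, my, d in hits) / total
    return (x, y), max(d for _, _, d in hits)
```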
In sum, all of the above methods allow for easy, intuitive interaction between the user and inventive apparatus 10, which is operable to process a user's input and to provide visual feedback of a region of interest of the displayed image. The user's input is generated by a gesture selecting a region of interest of the displayed image, including L-shaped, rectangularly shaped, and/or circularly shaped gestures. Of course, these gestures do not have to be geometrically perfect; any gesture resembling one of the above-disclosed perfect shapes is detected and interpreted by the inventive apparatus. For example, FIG. 8 illustrates an L-shaped gesture 34 made by the fingers of a user who either performs the gesture while touching the region of interest on the viewing surface of mirror 12 or positions and holds the fingers in the proximity of the mirror. The parameters of L-shaped gesture 34, including its vertical and horizontal dimensions, are registered by any of the above-discussed touch sensing or touchless mechanisms providing input data to CPU 20. The CPU 20 processes the input data and determines the location of the region of interest, which is then, for example, enlarged so as to occupy substantially the entire display 16. The gesture shown in phantom lines is far from having a perfect L-shape, but because of its resemblance to the L shape, it can still be detected and correctly interpreted. FIG. 8 further illustrates a rectangularly shaped gesture 36 generated, for example, by contouring the region of interest on or in the proximity of mirror 12. FIG. 8 also illustrates a circular gesture 38 and a gesture having only a substantially circular cross-section.
To determine the parameters of an L-shaped or rectangular gesture, an upper left corner 76 (FIG. 8), for example, corresponds to a starting point 0,0 along the X and Y axes of the region of interest, and the vertical (Y) and horizontal (X) dimensions of the gesture define a rectangle delimiting this region on the mirror. Understandably, the lower right corner or any other corner can be selected as the starting point. If the gesture is circular, CPU 20 processes it as a virtual rectangle 78 (FIG. 8) defining the region of interest, calculated similarly to the L-shaped and rectangular gestures. Turning to FIG. 9 and FIGS. 10A and 10B, the inventive method 100 starts with a step 82 (FIG. 9) resulting in capturing and displaying an image 80 (FIG. 10A). If the user does not make any gesture, software executed on CPU 20 continues to wait for a user input, as indicated by a step 84. If, however, a gesture indicating a region of interest is made in the view area of camera 14, it is detected by any of the above-disclosed detecting mechanisms in a step 86, and the maximum horizontal (Xmax) and vertical (Ymax) dimensions are determined by software executed by CPU 20. Preferably, the aspect ratio for an enlarged or zoomed-in image is fixed. One advantage of the fixed ratio is that it allows a selected image to occupy the full screen of display 16, which is viewed by the user on mirror 12.
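Before turning to the aspect-ratio handling of FIG. 9, the following sketch shows how a traced gesture might be reduced to the delimiting rectangle. Treating Xmax and Ymax as the extents of the trace, and fitting the circular gesture with a centroid-and-mean-radius heuristic, are illustrative assumptions rather than details from the disclosure.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def roi_rectangle(points: List[Point], circular: bool = False):
    """Return (x0, y0, xmax, ymax): the starting corner plus the horizontal
    and vertical extents of the gesture."""
    if circular:
        # Fit a circle (centre = centroid, radius = mean distance) and
        # circumscribe it with the virtual rectangle 78 of FIG. 8.
        cx = sum(x for x, _ in points) / len(points)
        cy = sum(y for _, y in points) / len(points)
        r = sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)
        return cx - r, cy - r, 2 * r, 2 * r
    # For L-shaped and rectangular gestures, the extents of the trace
    # directly give the Xmax and Ymax dimensions.
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)
```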
For example, assume that the Xmax/Ymax relationship is determined to correspond to a fixed ratio, which can, for example, coincide with the full-screen aspect ratio and vary for different displays, as shown in a step 88 of FIG. 9. Preferably, this ratio is selected as either 4/3 or 16/9. Software executed by CPU 20 is operable to increase the determined linear dimensions of the gesture by a gain factor of Xd/Xmax = Yd/Ymax, where Xd and Yd are the respective horizontal and vertical dimensions of the entire display screen, as indicated by a step 94. Having completed the necessary calculations, CPU 20 is operable to either electronically or optically zoom in on region of interest 120, which is then displayed on the entire display screen, as shown by FIG. 10B. At the same time, the original image 80 captured by camera 14 is also displayed as a thumbnail in one of the screen corners, as shown in FIG. 10B and executed by a step 104 of FIG. 9. If the Xmax/Ymax relationship is determined to be less than 4/3 in a step 90, the software executed by CPU 20 will increase the determined linear dimensions of the gesture by a gain factor of Xd/((4/3)Ymax) = Yd/Ymax, as indicated by a step 96. As a consequence, while the selected region is displayed on the full screen, a camera-view thumbnail of the original image is shown in a corner of the screen in step 104.
If the Xmax/Ymax relationship is determined to be greater than 4/3 in a step 92 of FIG. 9, the software executed by CPU 20 will increase the determined linear dimensions of the gesture by a gain factor of Yd/((3/4)Xmax) = Xd/Xmax, as indicated by a step 98. The thumbnail operates as a "Back" button, allowing the user to switch back to the entire original image if and when the user either touches the thumbnail on mirror 12 or places his or her finger in the proximity of the mirror, depending, of course, on the selected detecting mechanism.
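A compact way to express the branches of steps 88 through 98 is as a single gain-factor function. The sketch below assumes the fixed ratio equals the display's own Xd/Yd (for example 4/3, as in the text); the branch conditions mirror steps 88, 90, and 92.

```python
def zoom_gain(xmax: float, ymax: float, xd: float, yd: float) -> float:
    """Gain applied to the gesture's dimensions so the ROI fills the screen."""
    ratio = xmax / ymax
    target = xd / yd                 # fixed full-screen aspect ratio, e.g. 4/3
    if ratio < target:               # steps 90/96: gesture narrower than the screen
        return yd / ymax             # height-limited; width is padded out
    return xd / xmax                 # steps 88/94 and 92/98: width-limited

# A square 100 x 100 gesture on an 800 x 600 display is height-limited:
print(zoom_gain(100, 100, 800, 600))     # -> 6.0
```

With this square gesture, the gesture ratio (1) is below 4/3, so the gain is Yd/Ymax = 6 and the zoomed region is padded horizontally to fill the screen, consistent with the step 96 relationship Xd/((4/3)Ymax) = Yd/Ymax.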
The apparatus 10 may further be provided with a plurality of controlling features allowing the user to control numerous optical and electronic parameters. For example, as shown in FIGS. 10A and 10B, a ratio may be varied by virtually sliding along the bottom strip 106. Other features 110 include a back view showing the back side of the person standing in front of the mirror, and a light controller that allows modifying the lighting conditions in the room where the mirror is positioned. The specific features described herein may be used in some embodiments, but not in others, without departure from the spirit and scope of the invention as set forth. Many additional modifications are intended in the foregoing disclosure, and it will be appreciated by those of ordinary skill in the art that in some instances some features of the invention will be employed in the absence of a corresponding use of other features. For example, CPU 20 shown in FIG. 1 has a local storage 21. In addition, inventive apparatus 10 may be coupled to the external world by a communication system 23, also shown in FIG. 1 and configured as a wireless or wired system. Local connectivity between CPU 20 and a variety of sensors and appliances can be achieved by wired connections, but preferably by wireless analogue or digital connections. The captured and displayed image may be of something other than the user, while the user remains capable of manipulating the inventive system by a gesture defining a region of interest. The illustrative examples therefore do not define the metes and bounds of the invention, the scope of which is defined by the following claims.

Claims

What is claimed is:
1. A method for interpreting a gesture modifying a video image of an object, the method comprising: capturing the video image by a camera (14); displaying the captured video image on a display screen (16) so that the displayed video image is viewed by a user (18) on a mirror (12) located between the display screen and the object; gesturing in front of the display screen (16) so as to select a region of interest of the video image on the mirror (12); and modifying the located region of interest on the display screen (16).
2. The method of claim 1, wherein the gesturing comprises making a gesture while directly touching a viewing side of the mirror (12).
3. The method of claim 2, wherein the touching of the mirror (12) comprises passively touching the region of interest on the viewing side of the mirror (12) or actively touching the region of interest on the viewing side of the mirror.
4. The method of claim 3, wherein the passive touching of the region of interest displayed on the mirror (12) comprises using a touch-sensing system selected from the group consisting of an infra-red system (45), capacitive system (55), acoustic system (65) and resistive system (75) and a combination of these.
5. The method of claim 3, wherein the active touching of the region of interest displayed on the mirror comprises using an ultrasonic and infrared system (95) operative to track a battery-operated pen (63), a laser/bar tracking system or an electromagnetic system.
6. The method of claim 1, wherein the gesturing comprises making a gesture without touching a viewing side of the mirror (12) by using a digital vision touch system (85), computer-vision based system or a cross-capacitive sensing system operable to detect the gesture.
7. The method of claim 1, wherein the gesturing further comprises making a gesture selected from the group consisting of a substantially L-shaped (34), substantially rectangularly shaped (36), substantially circularly-shaped (38) gesture and a gesture resembling these.
8. The method of claim 7, further comprising determining vertical and horizontal dimensions of the L-shaped gesture (34) or the rectangularly shaped gesture (36), thereby determining a location of the region of interest on the display screen (16), and displaying a rectangle delimiting the region of interest.
9. The method of claim 7, further comprising configuring a virtual rectangle (78) surrounding the circularly-shaped gesture (38), determining vertical and horizontal dimensions of the virtual rectangle, and displaying the rectangle on the display screen (16), thereby delimiting the region of interest.
10. The method of claim 1, wherein the modifying the located region of interest further comprises displaying the region of interest on a full screen of the display screen (16), displaying a thumbnail (102) of the captured image along a periphery of the display screen (16), and selectively operating the thumbnail (102) so as to replace the original region of interest displayed on the full display screen (16) with the image.
11. An apparatus for interpreting a gesture to modify a video image of an object, comprising: a camera (14) operable to capture an image of the object; at least one display screen (16); a mirror (12) located in front of the display screen; a processor (20) coupling the camera to the display screen so that the captured image is displayed on the display screen and visible to a user through the mirror; and a detecting mechanism coupled to the processor and operable to detect the gesture made by a user (18) in front of the mirror (12) at a location of a region of interest of the displayed image, the processor (20) being operable to modify the region of interest on the display screen (16) in response to a signal generated by the detecting mechanism upon detecting the gesture.
12. The apparatus of claim 11, wherein the detecting mechanism comprises a touch-sensing system selected from the group consisting of an infra-red system (45), capacitive system (55), acoustic system (65), resistive system (75), ultrasonic and infrared system (95), laser/bar tracking system and electromagnetic system and a combination of these.
13. The apparatus of claim 11, wherein the detecting mechanism comprises a touchless sensing system including a digital vision touch system (85) or a cross- capacitive sensing system.
14. The apparatus of claim 12, wherein the infra-red system comprises a frame (40) peripherally surrounding the mirror (12), a plurality of aligned LED transmitters (44) on the frame, and a plurality of aligned photoreceptor receivers (46) spaced from the transmitters (44) on the frame (40) and operable to input a signal to the processor upon breaking an infrared path between a respective pair of transmitter and receiver by the gesture.
15. The apparatus of claim 11, wherein the mirror (12) is configured as a semitransparent mirror or as a polarized reflective foil, the display screen being a liquid crystal display device or an (O)LED, the camera (14) being a CCD or a CMOS and facing a non-viewing side of the mirror (12).
16. The apparatus of claim 11, further comprising a software executed by the processor (20) for determining a location and parameters of the gesture detected by the detecting mechanism, wherein the shape of the gesture is a substantially L-shape (34), a substantially rectangular shape (36) or a substantially circular shape (38).
17. The apparatus of claim 16, wherein the parameters of the circularly- shaped gesture (38) include horizontal and vertical dimensions of a virtual rectangle (78) surrounding the gesture.
18. The apparatus of claim 16, wherein the parameters include vertical and horizontal dimensions of the gesture, the software executed by the processor (20) for determining the parameters being operative to calculate one of the horizontal and vertical dimensions and to determine the other dimension based on a fixed aspect ratio between the horizontal and vertical dimensions.
19. The apparatus of claim 16, wherein the software executed by the processor (20) for determining the parameters is operable to enlarge the detected gesture so that the region of interest is displayed on an entire area of the display screen (16).
20. The apparatus of claim 11, further comprising a software executed by the processor for generating a thumbnail (102) of the captured image upon modifying the region of interest, and a software for re-displaying the captured image on the display screen (16) upon activating the thumbnail (102).
PCT/IB2006/052142 2005-06-28 2006-06-27 In-zoom gesture control for display mirror WO2007000743A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US69456805P 2005-06-28 2005-06-28
US60/694,568 2005-06-28
US77221406P 2006-02-10 2006-02-10
US60/772,214 2006-02-10

Publications (2)

Publication Number Publication Date
WO2007000743A2 true WO2007000743A2 (en) 2007-01-04
WO2007000743A3 WO2007000743A3 (en) 2007-03-29

Family

ID: 37440595

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/052142 WO2007000743A2 (en) 2005-06-28 2006-06-27 In-zoom gesture control for display mirror

Country Status (1)

Country Link
WO (1) WO2007000743A2 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008099301A1 (en) * 2007-02-14 2008-08-21 Koninklijke Philips Electronics N.V. Feedback device for guiding and supervising physical exercises
EP2042976A1 (en) * 2007-09-29 2009-04-01 HTC Corporation Image processing method
EP2110738A2 (en) * 2008-04-15 2009-10-21 Sony Corporation Method and apparatus for performing touch-based adjustments wthin imaging devices
EP2189886A3 (en) * 2008-11-10 2010-07-28 AVerMedia Information, Inc. A method and apparatus to define drafting position
WO2010130278A1 (en) * 2009-05-14 2010-11-18 Sony Ericsson Mobile Communications Ab Camera arrangement with image modification
CN101399892B (en) * 2007-09-30 2011-02-09 宏达国际电子股份有限公司 Image processing method
US8238662B2 (en) * 2007-07-17 2012-08-07 Smart Technologies Ulc Method for manipulating regions of a digital image
WO2013169259A1 (en) * 2012-05-10 2013-11-14 Intel Corporation Gesture responsive image capture control and/or operation on image
CN109791437A (en) * 2016-09-29 2019-05-21 三星电子株式会社 Display device and its control method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6088018A (en) * 1998-06-11 2000-07-11 Intel Corporation Method of using video reflection in providing input data to a computer system
DE10212200A1 (en) * 2002-03-19 2003-10-16 Klaus Mueller Video camera and screen acting as substitute for mirror enables user to see himself from different angles and zoom picture for close-up view
WO2005031552A2 (en) * 2003-09-30 2005-04-07 Koninklijke Philips Electronics, N.V. Gesture to define location, size, and/or content of content window on a display

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6088018A (en) * 1998-06-11 2000-07-11 Intel Corporation Method of using video reflection in providing input data to a computer system
DE10212200A1 (en) * 2002-03-19 2003-10-16 Klaus Mueller Video camera and screen acting as substitute for mirror enables user to see himself from different angles and zoom picture for close-up view
WO2005031552A2 (en) * 2003-09-30 2005-04-07 Koninklijke Philips Electronics, N.V. Gesture to define location, size, and/or content of content window on a display

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008099301A1 (en) * 2007-02-14 2008-08-21 Koninklijke Philips Electronics N.V. Feedback device for guiding and supervising physical exercises
US8328691B2 (en) 2007-02-14 2012-12-11 Koninklijke Philips Electronics N.V. Feedback device for guiding and supervising physical excercises
US8238662B2 (en) * 2007-07-17 2012-08-07 Smart Technologies Ulc Method for manipulating regions of a digital image
US8432415B2 (en) 2007-09-29 2013-04-30 Htc Corporation Image processing method
EP2042976A1 (en) * 2007-09-29 2009-04-01 HTC Corporation Image processing method
CN101399892B (en) * 2007-09-30 2011-02-09 宏达国际电子股份有限公司 Image processing method
EP2110738A2 (en) * 2008-04-15 2009-10-21 Sony Corporation Method and apparatus for performing touch-based adjustments wthin imaging devices
EP2110738A3 (en) * 2008-04-15 2014-05-07 Sony Corporation Method and apparatus for performing touch-based adjustments wthin imaging devices
EP2189886A3 (en) * 2008-11-10 2010-07-28 AVerMedia Information, Inc. A method and apparatus to define drafting position
WO2010130278A1 (en) * 2009-05-14 2010-11-18 Sony Ericsson Mobile Communications Ab Camera arrangement with image modification
CN102422320A (en) * 2009-05-14 2012-04-18 索尼爱立信移动通讯有限公司 Camera arrangement with image modification
WO2013169259A1 (en) * 2012-05-10 2013-11-14 Intel Corporation Gesture responsive image capture control and/or operation on image
CN104220975A (en) * 2012-05-10 2014-12-17 英特尔公司 Gesture responsive image capture control and/or operation on image
US9088728B2 (en) 2012-05-10 2015-07-21 Intel Corporation Gesture responsive image capture control and/or operation on image
CN109791437A (en) * 2016-09-29 2019-05-21 三星电子株式会社 Display device and its control method
EP3465393A4 (en) * 2016-09-29 2019-08-14 Samsung Electronics Co., Ltd. Display apparatus and controlling method thereof
US10440319B2 (en) 2016-09-29 2019-10-08 Samsung Electronics Co., Ltd. Display apparatus and controlling method thereof
CN109791437B (en) * 2016-09-29 2022-09-27 三星电子株式会社 Display device and control method thereof

Also Published As

Publication number Publication date
WO2007000743A3 (en) 2007-03-29

Similar Documents

Publication Publication Date Title
JP7248640B2 (en) Electronic equipment with complex human interface
WO2007000743A2 (en) In-zoom gesture control for display mirror
US8004501B2 (en) Hand-held device with touchscreen and digital tactile pixels
JP4450657B2 (en) Display device
US9064772B2 (en) Touch screen system having dual touch sensing function
JP5331887B2 (en) Touch screen display with multiple cameras
US20130038564A1 (en) Touch Sensitive Device Having Dynamic User Interface
US20130044080A1 (en) Dual-view display device operating method
US20110291961A1 (en) Touch-sensing display panel
WO2012090805A1 (en) Display device
KR102052752B1 (en) Multi human interface devide having text input unit and pointer location information input unit
KR20170124068A (en) Electrical device having multi-functional human interface
JP2002287873A (en) System for display allowing navigation
TW201220162A (en) Touch screen system
KR101268209B1 (en) Multi human interface devide having poiner location information input device and pointer excution device
KR20140075651A (en) Multi human interface devide having display unit
KR20150032950A (en) Digital device having multi human interface devide
KR20150050546A (en) Multi functional human interface apparatus
JP2008225541A (en) Input apparatus and information equipment
CN110456940A (en) Electronic equipment and its control method
US20110157059A1 (en) Touch Display Panel and Touch Sensing Method Thereof
JPH10149253A (en) Pen input device
CN110286843A (en) The display methods of electronic equipment and two dimensional code
JP2023025681A (en) Ar/vr navigation with authentication using integrated scroll wheel and fingerprint sensor user input apparatus
WO2019090504A1 (en) Intelligent terminal having dual display screen

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase in: DE
WWW Wipo information: withdrawn in national office (Country of ref document: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 06780007; Country of ref document: EP; Kind code of ref document: A2)