US20110285669A1 - Electronic Devices Including Interactive Displays Implemented Using Cameras and Related Methods and Computer Program Products


Info

Publication number
US20110285669A1
Authority
US
United States
Prior art keywords
interactive display
images
angles
camera
electronic device
Prior art date
2010-05-21
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/825,545
Inventor
Kristian LASSESSON
Jari Sassi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Mobile Communications AB
Original Assignee
Sony Ericsson Mobile Communications AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2010-05-21
Application filed by Sony Ericsson Mobile Communications AB filed Critical Sony Ericsson Mobile Communications AB
Priority to US12/825,545 priority Critical patent/US20110285669A1/en
Assigned to SONY ERICSSON MOBILE COMMUNICATIONS AB reassignment SONY ERICSSON MOBILE COMMUNICATIONS AB ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LASSESSON, KRISTIAN, SASSI, JARI
Priority to EP11166660.8A priority patent/EP2402844B1/en
Publication of US20110285669A1 publication Critical patent/US20110285669A1/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0428 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means by sensing at the edges of the touch surface the interruption of optical paths, e.g. an illumination plane, parallel to the touch surface which may be virtual

Definitions

  • the present invention relates generally to portable electronic devices and, more particularly, to interactive displays for electronic devices.
  • a touchscreen is an electronic display device that can detect the presence and location of a touch within the display area. The term generally refers to touching the display of the device with a finger or hand.
  • a touchscreen has two main attributes. First, it may enable one to interact directly with what is displayed, rather than indirectly with a cursor controlled by a mouse or touchpad. Secondly, the direct interaction is performed without requiring any intermediate device that would need to be held in the hand, such as a stylus or pen.
  • Such displays can be used in combination with desktop computers, laptops, portable devices, networks, personal digital assistants (PDAs), satellite navigation, video games and the like.
  • Conventional interactive displays are typically implemented using a layer of sensitive material above a display for detection of the finger or stylus.
  • an electronic device including a housing; an interactive display connected to the housing; a frame associated with the interactive display; at least one camera coupled to the interactive display and frame; and a position determination circuit coupled to the camera and the interactive display.
  • the position determination circuit is configured to determine a position of an object in proximity to the interactive display based on images captured by the at least one camera.
  • the at least one camera may include a single camera.
  • the electronic device may further include at least two mirrors attached to the frame.
  • the position determination circuit may be further configured to determine a position of the object with respect to the interactive display based on images obtained from the single camera and the at least two mirrors.
  • the position determination circuit may be further configured to capture and store a background image of the interactive display using the single camera before a user interacts with the interactive display; obtain a plurality of images using the single camera and the at least two mirrors; subtract the stored background image from each of the obtained plurality of images to provide a plurality of subtracted images; and calculate the position of the object on the interactive display based on the plurality of subtracted images.
  • the position determination circuit may be configured to calculate the position of the object by calculating first and second angles for each of the plurality of subtracted images, the first angle corresponding to a start position of the object and the second angle corresponding to a stop position of the object; and calculating coordinates of the object with respect to the interactive display based on the calculated first and second angles for each of the plurality of subtracted images.
  • the at least one camera may be two cameras attached to the frame.
  • the position determination circuit may be further configured to determine a position of the object with respect to the interactive display based on images obtained from the at least two cameras.
  • the position determination circuit may be further configured to capture and store a background image of the interactive display using the two cameras before a user interacts with the interactive display; obtain a plurality of images with the two cameras; subtract the stored background image from each of the obtained plurality of images to provide a plurality of subtracted images; and calculate the position of the object with respect to the interactive display based on the plurality of subtracted images.
  • the position determination circuit may be further configured to obtain a first image using a first of the two cameras and calculate first and second angles based on the obtained first image and the position of the object with respect to the interactive display; obtain a second image using a second of the two cameras and calculate third and fourth angles based on the obtained second image and the position of the object with respect to the interactive display; compare the first and second calculated angles of the first obtained image to the third and fourth angles of the second obtained image to determine an intersection point; and determine if the intersection point is located on or above the interactive display.
  • the position determination circuit may be further configured to detect contact of the object on the interactive display; and calculate coordinates of the object on the interactive display based on the obtained first and second images, the calculated first through fourth angles and the determined intersection point.
  • the at least one camera may be a single camera and the interactive display may have a reflective surface.
  • the position determination circuit may be further configured to determine a position of the object with respect to the interactive display based on images obtained from the single camera and a reflection of the object in the reflective surface of the interactive display as viewed by the single camera.
  • the at least one camera may be a single camera positioned inside the housing of the electronic device.
  • the position determination circuit may be further configured to determine a position of the object on the interactive display based on images obtained from the single camera positioned inside the housing of the electronic device.
  • the position determination circuit may be configured to obtain an image of the object using the single camera positioned inside the housing of the electronic device; calculate a start angle and a stop angle of the image based on the position of the object with respect to the interactive display; calculate frame angles between two known edges of the frame and the object with respect to the interactive display; calculate a distance between the object on the interactive display and the camera using the calculated start and stop angles and frame angles; and calculate the position and size of the object on the interactive display based on the calculated distance, start and stop angles and frame angles.
  • Still further embodiments provide methods of controlling an interactive display of an electronic device, the electronic device including a housing; an interactive display connected to the housing; a frame associated with the interactive display; and at least one camera coupled to the interactive display and frame.
  • the method includes determining a position of an object in proximity to the interactive display based on images captured by the at least one camera.
  • the at least one camera includes a single camera and the electronic device further includes at least two mirrors attached to the frame.
  • the method further includes determining a position of the object with respect to the interactive display based on images obtained from the single camera and the at least two mirrors.
  • the method further includes capturing and storing a background image of the interactive display using the single camera before a user interacts with the interactive display; obtaining a plurality of images using the single camera and the at least two mirrors; subtracting the stored background image from each of the obtained plurality of images to provide a plurality of subtracted images; and calculating the position of the object on the interactive display based on the plurality of subtracted images.
  • Calculating the position of the object may include calculating first and second angles for each of the plurality of subtracted images, the first angle corresponding to a start position of the object and the second angle corresponding to a stop position of the object; and calculating coordinates of the object with respect to the interactive display based on the calculated first and second angles for each of the plurality of subtracted images.
  • the at least one camera may be two cameras attached to the frame.
  • the method may further include determining a position of the object with respect to the interactive display based on images obtained from the at least two cameras.
  • the method further includes capturing and storing a background image of the interactive display using the two cameras before a user interacts with the interactive display; obtaining a plurality of images with the two cameras; subtracting the stored background image from each of the obtained plurality of images to provide a plurality of subtracted images; and calculating the position of the object with respect to the interactive display based on the plurality of subtracted images.
  • the method may further include obtaining a first image using a first of the two cameras and calculating first and second angles based on the obtained first image and the position of the object with respect to the interactive display; obtaining a second image using a second of the two cameras and calculating third and fourth angles based on the obtained second image and the position of the object with respect to the interactive display; comparing the first and second calculated angles of the first obtained image to the third and fourth angles of the second obtained image to determine an intersection point; determining if the intersection point is located on or above the interactive display; detecting contact of the object on the interactive display; and calculating coordinates of the object on the interactive display based on the obtained first and second images, the calculated first through fourth angles and the determined intersection point.
  • the at least one camera may include a single camera and the interactive display may have a reflective surface.
  • the method may further include determining a position of the object with respect to the interactive display based on images obtained from the single camera and a reflection of the object in the reflective surface of the interactive display as viewed by the single camera.
  • the at least one camera may include a single camera positioned inside the housing of the electronic device.
  • the method may further include determining a position of the object on the interactive display based on images obtained from the single camera positioned inside the housing of the electronic device. Determining a position may include obtaining an image of the object using the single camera positioned inside the housing of the electronic device; calculating a start angle and a stop angle of the image based on the position of the object with respect to the interactive display; calculating frame angles between two known edges of the frame and the object with respect to the interactive display; calculating a distance between the object on the interactive display and the camera using the calculated start and stop angles and frame angles; and calculating the position and size of the object on the interactive display based on the calculated distance, start and stop angles and frame angles.
  • FIG. 1 is a schematic block diagram for a portable electronic device and a cellular communication system that operate according to some embodiments of the present invention.
  • FIGS. 2A through 2C are diagrams illustrating interactive displays in accordance with some embodiments.
  • FIGS. 3A and 3B are diagrams illustrating interactive displays in accordance with some embodiments.
  • FIGS. 4A and 4B are diagrams illustrating interactive displays in accordance with some embodiments.
  • FIGS. 5A through 5C are diagrams illustrating interactive displays in accordance with some embodiments.
  • FIGS. 6A through 6D are diagrams illustrating interactive displays in accordance with some embodiments.
  • FIGS. 7 through 12 are flowcharts illustrating various methods of controlling an interactive display of an electronic device according to some embodiments discussed herein.
  • "signal" may take the form of a continuous waveform and/or discrete value(s), such as digital value(s) in a memory or register.
  • various embodiments may take the form of a computer program product comprising a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system.
  • "circuit" and "controller" may take the form of digital circuitry, such as computer-readable program code executed by an instruction processing device(s) (e.g., general purpose microprocessor and/or digital signal processor), and/or analog circuitry.
  • "interactive display" refers to any type of display, such as a touchscreen, that is activated responsive to an object in proximity thereto.
  • the object can be a finger, stylus, pencil, pen or the like without departing from the scope of embodiments discussed herein.
  • Interactive displays may be used in combination with desktop computers, laptops, portable devices, networks, personal digital assistants (PDAs), satellite navigation, video games and the like.
  • Conventional interactive displays are typically implemented using a layer of sensitive material above a display for detection of the finger or stylus.
  • Conventional interactive displays are typically activated using a single type of object, for example, a pen, a finger or a stylus.
  • Some embodiments discussed herein provide interactive displays that are configured to determine a position of an object, such as a finger or stylus, in proximity of the interactive display based on images captured by one or more cameras.
  • embodiments discussed herein may provide interactive displays that are responsive to more than one type of object, such as a finger, stylus, pen or pencil.
  • some embodiments may also enable additional features of the touch interface, for example, sensing of an object in proximity to the interactive display before the object actually makes contact with the interactive display as will be discussed further herein with respect to FIGS. 1 through 12 .
  • the portable electronic device 190 includes at least one antenna 105 .
  • the portable electronic device 190 may communicate with a cellular base station transceiver 160 connected to a mobile switching center (“MSC”) 170 , and/or it may communicate through a short range network directly with another wireless communication device (not shown).
  • the portable electronic device 190 can therefore include a transceiver 112 and a wireless communication protocol controller (“communication controller”) 114 that are configured to communicate through a wireless air interface with the base station transceiver 160 and/or with the other wireless communication devices.
  • the transceiver 112 typically includes a transmitter circuit and a receiver circuit which cooperate to transmit and receive radio frequency signals.
  • the communication controller 114 can be configured to encode/decode and control communications according to one or more cellular protocols, which may include, but are not limited to, Global System for Mobile (GSM) communication, General Packet Radio Service (GPRS), enhanced data rates for GSM evolution (EDGE), code division multiple access (CDMA), wideband-CDMA, CDMA2000, and/or Universal Mobile Telecommunications System (UMTS).
  • the communication controller 114 may alternatively or additionally encode/decode and control communications according to one or more short range communication protocols, which may include, but are not limited to Bluetooth and/or WiFi such as IEEE 802.11 (e.g., IEEE 802.11b-g).
  • the portable electronic device 190 can include an interactive display 189 in the housing 100 in accordance with some embodiments, a man machine interface 116 (e.g., virtual keypad of the interactive display), a speaker/microphone 117 , and/or a web browser 118 that communicate with the controller 114 . It will be understood that other circuits/modules found in portable electronic devices may be included in portable electronic device 190 without departing from the scope of embodiments discussed herein.
  • the portable electronic device 190 may further include a position determination circuit 192 , one or more cameras 138 / 139 , optionally (as indicated by dotted lines in FIG. 1 ) one or more mirrors ( 128 / 129 ) and a memory 180 that all communicate with the controller 114 .
  • the one or more cameras 138 / 139 and the one or more optional mirrors 128 / 129 may be attached to a frame (not shown) of the housing for the interactive display 189 as will be discussed further below with respect to FIGS. 2A through 6D .
  • the position determination circuit 192 coupled to the one or more cameras 138/139 and the interactive display 189 may be configured to determine a position of an object in proximity to the interactive display based on images captured by the at least one camera in accordance with some embodiments as will be discussed further below with respect to FIGS. 2A through 6D.
  • the memory 180 may include the obtained, calculated and stored data used in accordance with some embodiments discussed herein, for example, captured images 181, calculated angles 183 and/or calculated object positions 184. It will be understood that although the memory 180 is illustrated as including three separate data folders, embodiments of the present invention are not limited to this configuration. For example, the folders in memory 180 may be combined to provide two or fewer folders, or four or more folders may be provided, without departing from the scope of embodiments discussed herein.
  • although the portable electronic device 190 has been shown in FIG. 1 within separate blocks, it is to be understood that two or more of these functions may be combined in a single physical integrated circuit package and/or the functionality described for one of the blocks may be spread across two or more integrated circuit packages.
  • the functionality described herein for the position determination circuit 192 may be split into separate execution circuitry or combined with a general purpose processor and/or a digital signal processor that executes instructions within the memory 180.
  • the memory 180 can include data 181 , 183 , 184 , general control instructions and the like that are executed by the instruction execution circuitry to carry out one or more of the embodiments described herein.
  • FIGS. 2A through 2C are diagrams illustrating an interactive display in accordance with some embodiments discussed herein.
  • FIG. 2A is a top view of an interactive display 189 ( FIG. 1 ) in accordance with some embodiments;
  • FIG. 2B is an enlarged view as seen from the camera in accordance with some embodiments;
  • FIG. 2C is a cross section of the interactive display along the line A-A′ of FIG. 2A in accordance with some embodiments. Details with respect to some embodiments will now be discussed with respect to FIGS. 1 through 2C .
  • a single camera 238 and two mirrors 228 , 229 are attached to a frame 248 .
  • the position determination circuit 192 may be configured to determine a position of an object 208 , for example, a finger or stylus, with respect to the interactive display based on images obtained from the single camera 238 and the at least two mirrors 228 , 229 .
  • the camera 238 may have a field of view of about 90 degrees horizontally and from about 10 to about 15 degrees vertically.
  • the mirrors 228 and 229 may have a cylindrical or spherical shape.
  • the mirrors 228 and 229 may be shaped to increase their field of view, thus a cylindrical or spherical shape may provide increased area.
  • Using a single camera 238 and two mirrors 228 , 229 may be more cost effective than providing three cameras.
  • the presence of the two mirrors 228 , 229 allows the position of the object 208 to be triangulated.
  • the three images may be triangulated to calculate the position of the object 208 with respect to the interactive display 189. If one of the two mirrors 228, 229 is obscured by, for example, the object 208, the position of the object 208 can be determined based on the two remaining images from the other mirror 228 or 229 and the camera 238.
  • Use of two images may allow calculation of the position and size of the object 208 .
  • Use of three images may allow further calculation of additional objects 208 or a more accurate size of the object 208 .
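One way to picture this geometry: each mirror acts as a virtual camera whose position is the real camera reflected across the mirror plane, so any two of the three bearings (direct view plus two mirror views) can be intersected to locate the object. The sketch below illustrates that idea under simplifying assumptions not taken from the patent itself: flat mirrors described by a point and unit normal, and bearings already expressed as 2-D direction vectors in display coordinates.

```python
import numpy as np

def reflect_point(p, mirror_pt, mirror_normal):
    """Reflect point p across a mirror plane (given by a point on it and its unit normal)."""
    p, mirror_pt, n = map(np.asarray, (p, mirror_pt, mirror_normal))
    return p - 2.0 * np.dot(p - mirror_pt, n) * n

def intersect_rays(p1, d1, p2, d2):
    """Intersect two 2-D rays p + t*d by solving the resulting 2x2 linear system."""
    p1, d1, p2, d2 = map(np.asarray, (p1, d1, p2, d2))
    t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t[0] * d1

# Hypothetical layout: camera in one corner, one mirror along the top edge (y = 60).
camera = np.array([0.0, 0.0])
virtual_camera = reflect_point(camera, mirror_pt=[0.0, 60.0], mirror_normal=[0.0, -1.0])
# One bearing seen directly and one seen via the mirror locate the object at (30, 30).
print(intersect_rays(camera, [1.0, 1.0], virtual_camera, [1.0, -3.0]))  # [30. 30.]
```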
  • the position determination circuit 192 is configured to capture and store ( 181 ) a background image of the interactive display 189 using the single camera 238 and the two mirrors 228 , 229 before a user interacts with the interactive display 189 .
  • the stored image can be subtracted to obtain information related to the object 208 .
  • capturing and storing the background image before the user interacts with the interactive display 189 may be adaptable to compensate for situations, such as a dirty display, i.e. the images of the dirt on the screen will not be considered indicative of where the object 208 is relative to the interactive display 189 .
  • the image background calculation inside the frame may involve capturing the image inside the frame and storing the same. This can be adaptive and may be used to filter out anomalies, such as dirt on the frame. Outside the frame, the image may be captured and saved.
  • the position determination module 192 may be configured to continuously learn new backgrounds by not using foreground objects in the background image. Examples of this can be found in, for example, the Open Computer Vision Library. Background calculations may be performed in a similar manner for the embodiments discussed below with respect to FIGS. 3A through 6D .
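OpenCV (the Open Computer Vision Library referenced above) ships adaptive background models along these lines. A minimal sketch using its MOG2 subtractor, which keeps updating the learned background while excluding foreground objects; the history length is a hypothetical tuning choice.

```python
import cv2

# Adaptive background model; 'history' controls how quickly old frames are forgotten.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def foreground_mask(frame_bgr):
    """Fold this frame into the learned background and return the foreground mask."""
    # learningRate=-1 lets OpenCV derive an automatic rate from the history length.
    return subtractor.apply(frame_bgr, learningRate=-1)
```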
  • the position determination module 192 may then be configured to obtain a plurality of images using the single camera 238 and the two mirrors 228 , 229 .
  • the camera 238 may be sampled for images at about 100 frames per second. If power is an issue, the sampling rate may be reduced to save power.
  • the stored background image may be subtracted from each of the obtained plurality of images to provide a plurality of subtracted images. Once the plurality of subtracted images are obtained, a position of the object 208 on the interactive display 189 may be calculated based on the plurality of subtracted images.
  • the difference between the obtained image and the background may be determined by subtracting the background image from the obtained image.
  • a typical grayscale value for intensity may be used.
  • a high difference value likely indicates a foreground object.
  • where the background is unchanged, the difference value will typically be near zero.
  • Some noise may be present due to, for example, reflections caused by sunlight.
  • a low pass filter may be used to remove noise, such as sunlight.
  • if ambient light causes a linear offset on the values, it may be possible to align the difference and calculate an offset from the difference. Differences between images may be calculated similarly in the embodiments discussed below with respect to FIGS. 3A through 6D.
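A minimal sketch of the difference calculation described in the preceding bullets, assuming grayscale frames held as NumPy arrays; the Gaussian kernel size, the median-based offset correction and the threshold are all hypothetical tuning choices.

```python
import cv2
import numpy as np

DIFF_THRESHOLD = 25  # hypothetical: high difference values suggest a foreground object

def subtract_background(frame_gray: np.ndarray, background_gray: np.ndarray) -> np.ndarray:
    """Return a binary foreground mask from one frame and the stored background."""
    diff = cv2.absdiff(frame_gray, background_gray)   # per-pixel grayscale difference
    diff = cv2.GaussianBlur(diff, (5, 5), 0)          # low-pass filter against noise
    # Ambient light can add a roughly constant offset; estimate it with the median.
    diff = cv2.subtract(diff, np.full_like(diff, int(np.median(diff))))
    _, mask = cv2.threshold(diff, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
    return mask  # near-zero differences (unchanged background) are suppressed
```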
  • the position determination module 192 may be further configured to calculate the position of the object 208 by calculating first and second angles for each of the plurality of subtracted images.
  • the first and second angles may correspond to a start position and a stop position of the object 208 .
  • coordinates of the object 208 with respect to the interactive display 189 may be calculated based on the calculated first and second angles for each of the plurality of subtracted images.
  • an absolute value of intensity versus X value will be obtained in one dimension. Then, the same will be obtained in the second dimension to calculate a distance that the object 208 is from the display 289.
  • the left angle/position will typically be where the intensity value changes significantly from near zero to a positive value.
  • a positive derivative may be obtained on the left angle.
  • a decision may be based on a predetermined threshold.
  • the right angle/position will typically change from a positive value to near zero.
  • a negative derivative may be obtained and a decision may be determined based on the result.
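A sketch of the start/stop angle decision described above, assuming the subtracted image has been collapsed to a one-dimensional intensity profile spanning the camera's roughly 90-degree horizontal field of view; the linear pixel-to-angle mapping and the edge threshold are assumptions.

```python
import numpy as np

FOV_DEG = 90.0        # horizontal field of view, per the description above
EDGE_THRESHOLD = 25   # hypothetical intensity threshold for the edge decision

def start_stop_angles(profile: np.ndarray):
    """Find the object's start (left) and stop (right) angles in an intensity profile.

    The left edge is where intensity rises from near zero (positive derivative);
    the right edge is where it falls back toward zero (negative derivative).
    """
    above = profile > EDGE_THRESHOLD
    if not above.any():
        return None                              # no foreground object in view
    columns = np.flatnonzero(above)
    px_to_deg = FOV_DEG / (len(profile) - 1)     # linear pixel-to-angle mapping
    return columns[0] * px_to_deg, columns[-1] * px_to_deg
```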
  • FIG. 2C further illustrates a display glass 258 and a cross section of the frame 248 along the line A-A′ of FIG. 2A .
  • objects 208 situated above the frame 248 may be detected.
  • the camera 238 may have a wider vertical viewing angle and may have spherical mirrors.
  • an infrared (IR) light may be used to enhance detection of human skin to provide more sensitive recognition.
  • FIG. 3A is a diagram illustrating a view from a second of two cameras in accordance with some embodiments.
  • FIG. 3B is a top view of an interactive display in accordance with some embodiments. Details with respect to some embodiments will now be discussed with respect to FIGS. 1, 3A and 3B.
  • the position determination circuit 192 is configured to determine a position of the object 308 with respect to the interactive display 389 based on images obtained from the at least two cameras 338 and 339. Use of two images may allow calculation of the position and size of the object 308.
  • the cameras 338 and 339 may have a field of view of about 90 degrees horizontally and from about 10 to about 15 degrees vertically; see, for example, viewing angles 378 and 379 illustrated in FIG. 3B.
  • FIG. 3A illustrates the view from camera 339 of FIG. 3B with respect to a second frame edge 348 ′′ and a first frame edge 348 ′.
  • the position determination circuit 192 is configured to capture and store ( 181 ) a background image of the interactive display 189 using the two cameras 338 , 339 before a user interacts with the interactive display 189 .
  • the stored image can be subtracted to obtain information related to the object 308 .
  • capturing and storing the background image before the user interacts with the interactive display 189 may be adaptable to compensate for situations, such as a dirty display, i.e. the images of the dirt on the screen will not be considered indicative of where the object 308 is relative to the interactive display 189 .
  • the position determination module 192 may then be configured to obtain a plurality of images using the cameras 338 and 339 .
  • the cameras 338 and 339 may be sampled for images at about 100 frames per second. If power is an issue, the sampling rate may be reduced to save power.
  • the stored background image may be subtracted from each of the obtained plurality of images to provide a plurality of subtracted images. Once the plurality of subtracted images are obtained, a position of the object 308 on the interactive display 189 may be calculated based on the plurality of subtracted images.
  • the position determination module 192 may be further configured to calculate the position of the object 308 by calculating first and second angles for each of the plurality of subtracted images.
  • the first and second angles may correspond to a start position and a stop position of the object 308, for example, angles α1 and α2 corresponding to camera 339 of FIG. 3B and angles β1 and β2 corresponding to camera 338 of FIG. 3B.
  • angles α1 and α2 corresponding to camera 339 of FIG. 3B are calculated with respect to a first frame edge 348′ and angles β1 and β2 corresponding to camera 338 of FIG. 3B are calculated with respect to a third frame edge 348′′′.
  • coordinates of the object 308 with respect to the interactive display 389 may be calculated based on the calculated first and second angles for each of the plurality of subtracted images.
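The coordinate calculation reduces to intersecting two viewing rays. A sketch under assumed geometry not spelled out in the text: the two cameras sit at the top corners of the display, the baseline equals the display width, and each bearing (for example, the midpoint of the start and stop angles) is measured from the top frame edge.

```python
import math

def triangulate(width_mm: float, alpha_deg: float, beta_deg: float):
    """Triangulate (x, y) display coordinates from two corner-mounted cameras.

    alpha_deg: bearing from the camera at (0, 0), measured from the top frame edge.
    beta_deg:  bearing from the camera at (width_mm, 0), measured from the same edge.
    """
    ta = math.tan(math.radians(alpha_deg))
    tb = math.tan(math.radians(beta_deg))
    x = width_mm * tb / (ta + tb)   # where the two viewing rays cross
    y = x * ta
    return x, y

# An object seen at 45 degrees from both corners of a 100 mm wide display
# lands centered, 50 mm below the top edge.
print(triangulate(100.0, 45.0, 45.0))  # (50.0, 50.0)
```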
  • objects 308 situated above the frame 348 may be detected.
  • the cameras 338 and 339 may have a wider vertical viewing angle and may have spherical mirrors.
  • Embodiments illustrated in FIGS. 3A and 3B may provide a cheaper alternative to capacitive and resistive touch displays as there may not be a film or additional layer on top of the display glass.
  • FIG. 4A is a cross section of an interactive display illustrating detection of the object 408 above the interactive display 489 in accordance with some embodiments.
  • FIG. 4B is a top view of the interactive display illustrating an object 408 outside of the display and in proximity to the display in accordance with some embodiments discussed herein. Details with respect to embodiments illustrated in FIGS. 4A and 4B will now be discussed with respect to FIGS. 1, 4A and 4B.
  • the position determination circuit 192 is configured to determine a position of the object 408 with respect to the interactive display 489 based on images obtained from the at least two cameras 438 and 439 .
  • the cameras 438 and 439 are positioned in two of the four corners of the display 489 .
  • the cameras 438 and 439 may have a field of view of about 90 degrees horizontally and more than zero degrees vertically.
  • viewing angles 478 and 479 are illustrated in FIG. 4B .
  • FIG. 4A illustrates a cross section illustrating cameras 438 , 439 , viewing angles 478 , 479 and the object 408 .
  • the position determination circuit 192 is configured to capture and store (181) a background image of the interactive display 489 using the two cameras 438, 439 before a user interacts with the interactive display 489.
  • the stored image can be subtracted to obtain information related to the object 408 .
  • capturing and storing the background image before the user interacts with the interactive display 489 may be adaptable to compensate for situations, such as a dirty display, i.e. the images of the dirt on the screen will not be considered indicative of where the object 408 is relative to the interactive display 489 .
  • the position determination module 192 may then be configured to obtain a plurality of images using the cameras 438 and 439 .
  • the cameras 438 and 439 may be sampled for images at about 100 frames per second. If power is an issue, the sampling rate may be reduced to save power.
  • the stored background image may be subtracted from each of the obtained plurality of images to provide a plurality of subtracted images. Once the plurality of subtracted images are obtained, a position of the object 408 on the interactive display 189 may be calculated based on the plurality of subtracted images.
  • the position determination module 192 may be further configured to calculate the position of the object 408 by calculating first and second angles for each of the plurality of subtracted images.
  • the first and second angles may correspond to a start position and a stop position of the object 408, for example, angles α1 and α2 corresponding to camera 439 of FIG. 4B and angles β1 and β2 corresponding to camera 438 of FIG. 4B.
  • the position determination module 192 is then configured to determine an intersection point of the camera views as illustrated in FIG. 4B based on the comparison of the angles. If the intersection point is located on or above the display 489 , the intersection point is considered a pointer 408 for use with the interactive display 489 .
  • the object 408 may be detected even if it is outside the display surface.
  • coordinates of the object 408 with respect to the interactive display 489 may be calculated based on the calculated first and second angles for each of the plurality of subtracted images.
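Putting the FIG. 4 steps together: intersect the two rays, then decide whether the intersection is a usable pointer. A sketch with the same assumed corner geometry as the triangulation above; treating a positive vertical bearing as "above the glass" is an assumption, as are the parameter names.

```python
import math

def classify_pointer(width_mm: float, height_mm: float,
                     alpha_deg: float, beta_deg: float,
                     vertical_deg: float = 0.0) -> str:
    """Classify the intersection of two camera rays relative to the display."""
    ta = math.tan(math.radians(alpha_deg))
    tb = math.tan(math.radians(beta_deg))
    x = width_mm * tb / (ta + tb)        # ray intersection, as in the triangulation above
    y = x * ta
    if not (0.0 <= x <= width_mm and 0.0 <= y <= height_mm):
        return "outside display"         # still detected, but beyond the display surface
    # A positive vertical bearing implies the intersection sits above the glass.
    return "hovering above display" if vertical_deg > 0.0 else "touching display"
```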
  • Embodiments illustrated in FIGS. 4A and 4B may provide a cheaper alternative to capacitive and resistive touch displays as there may not be a film or additional layer on top of the display glass.
  • FIG. 5A is a top view of a display surface of an interactive display having a reflective surface in accordance with some embodiments.
  • FIG. 5B is a photograph of a user's finger contacting the reflective display in accordance with some embodiments discussed herein.
  • FIG. 5C is a cross section of the portable electronic device along the line A-A′ in accordance with some embodiments. Details with respect to embodiments illustrated in FIGS. 5A through 5C will now be discussed with respect to FIGS. 1 and 5A-5C.
  • the position determination circuit 192 is configured to determine a position of the object 508 with respect to the interactive display 589 based on images obtained from the camera 538 and a reflection of the object 508 in the reflective surface 558 of the interactive display 589 as viewed by the single camera 538.
  • the camera 538 is positioned in one of the four corners of the display 589 .
  • the camera 538 may have a field of view of about 90 degrees horizontally and less than about 10 to about 15 degrees vertically. In some embodiments, multiple cameras may be provided to allow for multi-touch implementation.
  • the position determination circuit 192 is configured to capture and store (181) a background image of the interactive display 589 using the camera 538 and the reflection as viewed from the camera 538 before a user interacts with the interactive display 589.
  • the stored image can be subtracted to obtain information related to the object 508 .
  • capturing and storing the background image before the user interacts with the interactive display 589 may be adaptable to compensate for situations, such as a dirty display, i.e. the images of the dirt on the screen will not be considered indicative of where the object 508 is relative to the interactive display 589 .
  • the position determination module 192 may then be configured to obtain a plurality of images using the camera 538 and the reflective surface of the display 558 .
  • the camera 538 may be sampled for images at about 100 frames per second. If power is an issue, the sampling rate may be reduced to save power.
  • the position determination module 192 is configured to perform a computer vision calculation to separate the object of interest 508 from the stored background image. Then, the object of interest 508 may be correlated with the mirror image of the same object of interest 508 in the reflective display 558 to identify the corresponding object. This may be useful if there is more than one object.
  • the position determination module 192 can detect a “touch” by the object of interest 508 when the closest distance D1 (FIG. 5B) between the object and the mirror image is about zero.
  • the image illustrated in FIG. 5B is the view from the camera 538 .
  • This image is used to calculate a distance that the object 508 is from the left of the display, for example, in pixels.
  • the resulting distance may be used to calculate a horizontal angle.
  • the number of pixels in the vertical direction can be used to calculate the distance the object 508 is from the camera 538 .
  • These calculated parameters can be used to calculate the position of the object of interest 508 .
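A sketch of the touch test from the preceding bullets, assuming the camera's foreground mask has already been split into the object blob and its mirror-image blob (the reflection appears below the object because the display surface acts as a mirror); the pixel tolerance standing in for "about zero" is hypothetical.

```python
import numpy as np

TOUCH_TOLERANCE_PX = 2  # hypothetical: a gap D1 below this counts as "about zero"

def is_touch(object_mask: np.ndarray, reflection_mask: np.ndarray) -> bool:
    """Report a touch when the object and its mirror image nearly meet (D1 ~ 0)."""
    object_rows = np.flatnonzero(object_mask.any(axis=1))
    reflection_rows = np.flatnonzero(reflection_mask.any(axis=1))
    if object_rows.size == 0 or reflection_rows.size == 0:
        return False                      # nothing to compare in this frame
    d1 = reflection_rows.min() - object_rows.max()  # vertical pixel gap between blobs
    return d1 <= TOUCH_TOLERANCE_PX
```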
  • FIG. 6A is a cross section of an interactive display in accordance with some embodiments.
  • FIG. 6B is a top view of an interactive display in accordance with some embodiments.
  • FIG. 6C is a cross section of an interactive display in accordance with some embodiments.
  • FIG. 6D is a cross section of an interactive display in accordance with some embodiments. Details with respect to embodiments illustrated in FIGS. 6A through 6D will now be discussed with respect to FIGS. 1 and 6A-6D.
  • a single camera 638 is provided inside a housing of the device and there are no mirrors present.
  • the position determination circuit 192 is configured to determine a position of the object 608 with respect to the interactive display 689 based on images obtained from the camera 638 positioned inside the housing of the device. As illustrated in FIGS. 6A-6D, the camera 638 is positioned in one of the four corners of the display 689.
  • the position determination circuit 192 is configured to obtain an image of the object 608 using the single camera 638 positioned inside the housing of the electronic device.
  • the obtained image can be used to calculate a start angle α1 and a stop angle α2 (FIG. 6B) of the image based on the position of the object with respect to the interactive display 689.
  • the obtained image can also be used to calculate frame angles between two known edges of the frame and the object 608 with respect to the interactive display 689 .
  • the distance between the object 608 and the camera 638 can be calculated.
  • the position and size of the object 608 can be determined.
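The text leaves the exact trigonometry open, so the following is only one plausible reading: the camera sits a known height below the glass, a known frame edge serves as the horizontal reference for the bearing, and the elevation angle to the contact point yields range, from which position and size follow. All parameter names and the geometry are assumptions.

```python
import math

def object_position_and_size(cam_height_mm: float,
                             start_deg: float, stop_deg: float,
                             frame_edge_deg: float, vertical_deg: float):
    """Estimate (x, y) position and width of an object seen by one in-housing camera.

    cam_height_mm : camera offset below the display glass (assumed known).
    start/stop    : horizontal angles bounding the object in the image.
    frame_edge_deg: measured angle of a known frame edge, used as the bearing reference.
    vertical_deg  : elevation angle from the camera to the object's contact point.
    """
    bearing = math.radians((start_deg + stop_deg) / 2.0 - frame_edge_deg)
    distance = cam_height_mm / math.tan(math.radians(vertical_deg))  # range to contact point
    x = distance * math.cos(bearing)
    y = distance * math.sin(bearing)
    width = distance * math.radians(stop_deg - start_deg)  # small-angle size estimate
    return x, y, width
```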
  • an object 608 can be detected on and above the display, with viewing angles in and above the display glass 648 (viewing angles 678 and 698, respectively). Accordingly, it may be possible to detect the object 608 in the x, y and z directions before it makes contact with the display.
  • IR light may be used to enhance the detection of human skin in embodiments where the human finger is used as the object 608 .
  • the electronic device includes a housing; an interactive display connected to the housing; a frame associated with the interactive display; and at least one camera coupled to the interactive display and frame.
  • operations begin at block 700 by determining a position of an object in proximity to the interactive display based on images captured by the at least one camera.
  • "at least one camera" refers to one or more cameras as well as mirrors, reflective displays and the like that may be used in combination with the at least one camera.
  • a reflective surface of the display may be used in addition to the camera.
  • a position of the object with respect to the interactive display may be determined based on images obtained from the single camera and a reflection of the object in the reflective surface of the interactive display as viewed by the single camera.
  • a position of the object with respect to the interactive display may be determined based on images obtained from the single camera and the at least two mirrors.
  • operations begin at block 805 by capturing and storing a background image of the interactive display using the single camera before a user interacts with the interactive display.
  • a plurality of images are obtained using the single camera and the at least two mirrors (block 815 ).
  • the stored background image is subtracted from each of the obtained plurality of images to provide a plurality of subtracted images (block 825 ).
  • the position of the object on the interactive display is calculated based on the plurality of subtracted images (block 835 ).
  • operations begin at block 937 by calculating first and second angles for each of the plurality of subtracted images.
  • the first angle corresponds to a start position of the object and the second angle corresponds to a stop position of the object.
  • Coordinates of the object are calculated with respect to the interactive display based on the calculated first and second angles for each of the plurality of subtracted images (block 939 ).
  • a position of the object may be determined with respect to the interactive display based on images obtained from the at least two cameras.
  • operations begin at block 1006 by capturing and storing a background image of the interactive display using the two cameras before a user interacts with the interactive display.
  • a plurality of images are obtained using the two cameras (block 1016 ).
  • the stored background image is subtracted from each of the obtained plurality of images to provide a plurality of subtracted images (block 1026 ).
  • the position of the object is calculated with respect to the interactive display based on the plurality of subtracted images (block 1036 ).
  • operations begin at block 1146 by obtaining a first image using a first of the two cameras and calculating first and second angles based on the obtained first image and the position of the object with respect to the interactive display.
  • a second image is obtained using a second of the two cameras and third and fourth angles are calculated based on the obtained second image and the position of the object with respect to the interactive display (block 1156).
  • the first and second calculated angles of the first obtained image are compared to the third and fourth angles of the second obtained image to determine an intersection point (block 1166). It is determined if the intersection point is located on or above the interactive display (block 1176).
  • Contact of the object on the interactive display is detected (block 1186 ). Coordinates of the object on the interactive display are calculated based on the obtained first and second images, the calculated first through fourth angles and the determined intersection point (block 1196 ).
  • a position of the object on the interactive display may be determined based on images obtained from the single camera positioned inside the housing of the electronic device.
  • Operations for determining a position begin at block 1207 by obtaining an image of the object using the single camera positioned inside the housing of the electronic device.
  • a start angle and a stop angle of the image is calculated based on the position of the object with respect to the interactive display (block 1217 ).
  • Frame angles between two known edges of the frame and the object are calculated with respect to the interactive display (block 1227 ).
  • a distance between the object on the interactive display and the camera is calculated using the calculated start and stop angles and frame angles (block 1237).
  • the position and size of the object on the interactive display may be calculated based on the calculated distance, start and stop angles and frame angles (block 1247 ).

Abstract

An electronic device is provided including a housing; an interactive display connected to the housing; a frame associated with the interactive display; at least one camera coupled to the interactive display and frame; and a position determination circuit coupled to the camera and the interactive display. The position determination circuit is configured to determine a position of an object in proximity to the interactive display based on images captured by the at least one camera.

Description

    CLAIM OF PRIORITY
  • The present application claims priority from U.S. Provisional Application No. 61/347,008 (Attorney Docket No. 9342-494PR), filed May 21, 2010, the disclosure of which is hereby incorporated herein by reference as if set forth in its entirety.
  • FIELD
  • The present invention relates generally to portable electronic devices and, more particularly, to interactive displays for electronic devices.
  • BACKGROUND
  • Many electronic devices, such as mobile terminals and laptop computers, do not use a conventional keyboard for data entry or manipulation of applications thereon. Instead, conventional electronic devices include an interactive display configured to respond to a touch of a finger or a stylus. Thus, a virtual keypad may be presented on the interactive display and a user can type emails, phone numbers, etc. by activating the virtual letters/numbers thereon. One type of interactive display is a touchscreen. A touchscreen is an electronic display device that can detect the presence and location of a touch within the display area. The term generally refers to touching the display of the device with a finger or hand.
  • A touchscreen has two main attributes. First, it may enable one to interact directly with what is displayed, rather than indirectly with a cursor controlled by a mouse or touchpad. Secondly, the direct interaction is performed without requiring any intermediate device that would need to be held in the hand, such as a stylus or pen. Such displays can be used in combination with desktop computers, laptops, portable devices, networks, personal digital assistants (PDAs), satellite navigation, video games and the like. Conventional interactive displays are typically implemented using a layer of sensitive material above a display for detection of the finger or stylus.
  • SUMMARY
  • Some embodiments discussed herein provide an electronic device including a housing; an interactive display connected to the housing; a frame associated with the interactive display; at least one camera coupled to the interactive display and frame; and a position determination circuit coupled to the camera and the interactive display. The position determination circuit is configured to determine a position of an object in proximity to the interactive display based on images captured by the at least one camera.
  • In further embodiments, the at least one camera may include a single camera. The electronic device may further include at least two mirrors attached to the frame. The position determination circuit may be further configured to determine a position of the object with respect to the interactive display based on images obtained from the single camera and the at least two mirrors.
  • In still further embodiments, the position determination circuit may be further configured to capture and store a background image of the interactive display using the single camera before a user interacts with the interactive display; obtain a plurality of images using the single camera and the at least two mirrors; subtract the stored background image from each of the obtained plurality of images to provide a plurality of subtracted images; and calculate the position of the object on the interactive display based on the plurality of subtracted images.
  • In some embodiments, the position determination circuit may be configured to calculate the position of the object by calculating first and second angles for each of the plurality of subtracted images, the first angle corresponding to a start position of the object and the second angle corresponding to a stop position of the object; and calculating coordinates of the object with respect to the interactive display based on the calculated first and second angles for each of the plurality of subtracted images.
  • In further embodiments, the at least one camera may be two cameras attached to the frame. The position determination circuit may be further configured to determine a position of the object with respect to the interactive display based on images obtained from the at least two cameras.
  • In still further embodiments, the position determination circuit may be further configured to capture and store a background image of the interactive display using the two cameras before a user interacts with the interactive display; obtain a plurality of images with the two cameras; subtract the stored background image from each of the obtained plurality of images to provide a plurality of subtracted images; and calculate the position of the object with respect to the interactive display based on the plurality of subtracted images.
  • In some embodiments, the position determination circuit may be further configured to obtain a first image using a first of the two cameras and calculate first and second angles based on the obtained first image and the position of the object with respect to the interactive display; obtain a second image using a second of the two cameras and calculate third and fourth angles based on the obtained second image and the position of the object with respect to the interactive display; compare the first and second calculated angles of the first obtained image to the third and fourth angles of the second obtained image to determine an intersection point; and determine if the intersection point is located on or above the interactive display.
  • In further embodiments, the position determination circuit may be further configured to detect contact of the object on the interactive display; and calculate coordinates of the object on the interactive display based on the obtained first and second images, the calculated first through fourth angles and the determined intersection point.
  • In still further embodiments, the at least one camera may be a single camera and the interactive display may have a reflective surface. The position determination circuit may be further configured to determine a position of the object with respect to the interactive display based on images obtained from the single camera and a reflection of the object in the reflective surface of the interactive display as viewed by the single camera.
  • In some embodiments, the at least one camera may be a single camera positioned inside the housing of the electronic device. The position determination circuit may be further configured to determine a position of the object on the interactive display based on images obtained from the single camera positioned inside the housing of the electronic device.
  • In further embodiments, the position determination circuit may be configured to obtain an image of the object using the single camera positioned inside the housing of the electronic device; calculate a start angle and a stop angle of the image based on the position of the object with respect to the interactive display; calculate frame angles between two known edges of the frame and the object with respect to the interactive display; calculate a distance between the object on the interactive display and the camera using the calculated start and stop angles and frame angles; and calculate the position and size of the object on the interactive display based on the calculated distance, start and stop angles and frame angles.
  • Still further embodiments provide methods of controlling an interactive display of an electronic device, the electronic device including a housing; an interactive display connected to the housing; a frame associated with the interactive display; and at least one camera coupled to the interactive display and frame. The method includes determining a position of an object in proximity to the interactive display based on images captured by the at least one camera.
  • In some embodiments, the at least one camera includes a single camera and the electronic device further includes at least two mirrors attached to the frame. The method further includes determining a position of the object with respect to the interactive display based on images obtained from the single camera and the at least two mirrors.
  • In further embodiments, the method further includes capturing and storing a background image of the interactive display using the single camera before a user interacts with the interactive display; obtaining a plurality of images using the single camera and the at least two mirrors; subtracting the stored background image from each of the obtained plurality of images to provide a plurality of subtracted images; and calculating the position of the object on the interactive display based on the plurality of subtracted images. Calculating the position of the object may include calculating first and second angles for each of the plurality of subtracted images, the first angle corresponding to a start position of the object and the second angle corresponding to a stop position of the object; and calculating coordinates of the object with respect to the interactive display based on the calculated first and second angles for each of the plurality of subtracted images.
  • In still further embodiments, the at least one camera may be two cameras attached to the frame. The method may further include determining a position of the object with respect to the interactive display based on images obtained from the at least two cameras.
  • In some embodiments, the method further includes capturing and storing a background image of the interactive display using the two cameras before a user interacts with the interactive display; obtaining a plurality of images with the two cameras; subtracting the stored background image from each of the obtained plurality of images to provide a plurality of subtracted images; and calculating the position of the object with respect to the interactive display based on the plurality of subtracted images.
  • In further embodiments, the method may further include obtaining a first image using a first of the two cameras and calculating first and second angles based on the obtained first image and the position of the object with respect to the interactive display; obtaining a second image using a second of the two cameras and calculating third and fourth angles based on the obtained second image and the position of the object with respect to the interactive display; comparing the first and second calculated angles of the first obtained image to the third and fourth angles of the second obtained image to determine an intersection point; determining if the intersection point is located on or above the interactive display; detecting contact of the object on the interactive display; and calculating coordinates of the object on the interactive display based on the obtained first and second images, the calculated first through fourth angles and the determined intersection point.
  • In still further embodiments, the at least one camera may include a single camera and the interactive display may have a reflective surface. The method may further include determining a position of the object with respect to the interactive display based on images obtained from the single camera and a reflection of the object in the reflective surface of the interactive display as viewed by the single camera.
  • In some embodiments, the at least one camera may include a single camera positioned inside the housing of the electronic device. The method may further include determining a position of the object on the interactive display based on images obtained from the single camera positioned inside the housing of the electronic device. Determining a position may include obtaining an image of the object using the single camera positioned inside the housing of the electronic device; calculating a start angle and a stop angle of the image based on the position of the object with respect to the interactive display; calculating frame angles between two known edges of the frame and the object with respect to the interactive display; calculating a distance between the object on the interactive display and the camera using the calculated start and stop angles and frame angles; and calculating the position and size of the object on the interactive display based on the calculated distance, start and stop angles and frame angles.
  • Further embodiments provide computer program products for controlling an interactive display of an electronic device. The electronic device includes a housing; an interactive display connected to the housing; a frame associated with the interactive display; and at least one camera coupled to the interactive display and frame. The computer program product includes a computer-readable storage medium having computer-readable program code embodied in said medium. The computer-readable program code includes computer-readable program code configured to determine a position of an object in proximity to the interactive display based on images captured by the at least one camera.
  • Other electronic devices, methods and/or computer program products according to embodiments of the invention will be or become apparent to one with skill in the art upon review of the following drawings and detailed description. It is intended that all such additional electronic devices, methods and computer program products be included within this description, be within the scope of the present invention, and be protected by the accompanying claims.
  • BRIEF DESCRIPTION
  • The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate certain embodiments of the invention.
  • FIG. 1 is a schematic block diagram for a portable electronic device and a cellular communication system that operate according to some embodiments of the present invention.
  • FIGS. 2A through 2C are diagrams illustrating interactive displays in accordance with some embodiments.
  • FIGS. 3A and 3B are diagrams illustrating interactive displays in accordance with some embodiments.
  • FIGS. 4A and 4B are diagrams illustrating interactive displays in accordance with some embodiments.
  • FIGS. 5A through 5C are diagrams illustrating interactive displays in accordance with some embodiments.
  • FIGS. 6A through 6D are diagrams illustrating interactive displays in accordance with some embodiments.
  • FIGS. 7 through 12 are flowcharts illustrating various methods of controlling an interactive display of an electronic device according to some embodiments discussed herein.
  • DETAILED DESCRIPTION OF EMBODIMENTS
  • The present invention will be described more fully hereinafter with reference to the accompanying figures, in which embodiments of the invention are shown. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
  • Accordingly, while the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the invention to the particular forms disclosed, but on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the claims. Like numbers refer to like elements throughout the description of the figures.
  • The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising,” “includes” and/or “including” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Moreover, when an element is referred to as being “responsive” or “connected” to another element, it can be directly responsive or connected to the other element, or intervening elements may be present. In contrast, when an element is referred to as being “directly responsive” or “directly connected” to another element, there are no intervening elements present. As used herein the term “and/or” includes any and all combinations of one or more of the associated listed items and may be abbreviated as “/”.
  • It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element without departing from the teachings of the disclosure. Although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction to the depicted arrows.
  • Some embodiments may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.). Consequently, as used herein, the term “signal” may take the form of a continuous waveform and/or discrete value(s), such as digital value(s) in a memory or register. Furthermore, various embodiments may take the form of a computer program product comprising a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system. Accordingly, as used herein, the terms “circuit” and “controller” may take the form of digital circuitry, such as computer-readable program code executed by an instruction processing device(s) (e.g., general purpose microprocessor and/or digital signal processor), and/or analog circuitry.
  • Embodiments are described below with reference to block diagrams and operational flow charts. It is to be understood that the functions/acts noted in the blocks may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction to the depicted arrows.
  • For purposes of illustration and explanation only, various embodiments of the present invention are described herein in the context of portable electronic devices. It will be understood, however, that the present invention is not limited to such embodiments and may be embodied generally in any electronic device that is compatible with an interactive display. For example, embodiments of the present invention may be embodied in user interfaces for electronic games and/or music players.
  • As discussed above, many electronic devices, such as mobile terminals and laptop computers, do not use a conventional keyboard for data entry or manipulation of applications thereon. Instead, conventional electronic devices include an interactive display configured to respond to a touch of a finger or a stylus. Thus, a virtual keypad may be presented on the interactive display and a user can type emails, phone numbers etc. by activating the virtual letters/numbers thereon. As used herein, “interactive display” refers to any type of display, such as a touchscreen, that is activated responsive to an object in proximity thereto. The object can be a finger, stylus, pencil, pen or the like without departing from the scope of embodiments discussed herein. Although embodiments discussed herein are discussed as having interactive displays, devices in accordance with some embodiments may have a combination of both mechanical keypads/buttons and interactive displays/virtual buttons without departing from the scope of embodiments discussed herein.
  • Interactive displays may be used in combination with desktop computers, laptops, portable devices, networks, personal digital assistants (PDAs), satellite navigation, video games and the like. Conventional interactive displays are typically implemented using a layer of sensitive material above a display for detection of the finger or stylus. Conventional interactive displays are typically activated using a single type of object, for example, a pen, a finger or a stylus. Some embodiments discussed herein provide interactive displays that are configured to determine a position of an object, such as a finger or stylus, in proximity to the interactive display based on images captured by one or more cameras. Thus, embodiments discussed herein may provide interactive displays that are responsive to more than one type of object, such as a finger, stylus, pen or pencil. Furthermore, some embodiments may also enable additional features of the touch interface, for example, sensing of an object in proximity to the interactive display before the object actually makes contact with the interactive display as will be discussed further herein with respect to FIGS. 1 through 12.
  • Referring first to FIG. 1, a schematic block diagram illustrating a portable electronic device 190 and a cellular communication system in accordance with some embodiments will be discussed. As illustrated, the portable electronic device 190 includes at least one antenna 105. The portable electronic device 190 may communicate with a cellular base station transceiver 160 connected to a mobile switching center (“MSC”) 170, and/or it may communicate through a short range network directly with another wireless communication device (not shown). The portable electronic device 190 can therefore include a transceiver 112 and a wireless communication protocol controller (“communication controller”) 114 that are configured to communicate through a wireless air interface with the base station transceiver 160 and/or with the other wireless communication devices. The transceiver 112 typically includes a transmitter circuit and a receiver circuit which cooperate to transmit and receive radio frequency signals. The communication controller 114 can be configured to encode/decode and control communications according to one or more cellular protocols, which may include, but are not limited to, Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Enhanced Data rates for GSM Evolution (EDGE), code division multiple access (CDMA), wideband-CDMA, CDMA2000, and/or Universal Mobile Telecommunications System (UMTS). The communication controller 114 may alternatively or additionally encode/decode and control communications according to one or more short range communication protocols, which may include, but are not limited to, Bluetooth and/or WiFi such as IEEE 802.11 (e.g., IEEE 802.11b-g).
  • As further illustrated in FIG. 1, the portable electronic device 190 can include an interactive display 189 in the housing 100 in accordance with some embodiments, a man machine interface 116 (e.g., virtual keypad of the interactive display), a speaker/microphone 117, and/or a web browser 118 that communicate with the controller 114. It will be understood that other circuits/modules found in portable electronic devices may be included in portable electronic device 190 without departing from the scope of embodiments discussed herein.
  • As further illustrated in FIG. 1, the portable electronic device 190 may further include a position determination circuit 192, one or more cameras 138/139, optionally (as indicated by dotted lines in FIG. 1) one or more mirrors (128/129) and a memory 180 that all communicate with the controller 114. The one or more cameras 138/139 and the one or more optional mirrors 128/129 may be attached to a frame (not shown) of the housing for the interactive display 189 as will be discussed further below with respect to FIGS. 2A through 6D. Furthermore, the position determination circuit 192, coupled to the one or more cameras 138/139 and the interactive display 189, may be configured to determine a position of an object in proximity to the interactive display based on images captured by the at least one camera in accordance with some embodiments as will be discussed further below with respect to FIGS. 2A through 6D.
  • The memory 180 may include the obtained, calculated and stored data used in accordance with some embodiments discussed herein, for example, captured images 181, calculated angles 183 and/or calculated object positions 184. It will be understood that although the memory 180 is illustrated as including three separate data folders, embodiments of the present invention are not limited to this configuration. For example, the folders in memory 180 may be combined to provide two or fewer folders, or four or more folders may be provided, without departing from the scope of embodiments discussed herein.
  • Although various functionality of the portable electronic device 190 has been shown in FIG. 1 within separate blocks, it is to be understood that two or more of these functions may be combined in a single physical integrated circuit package and/or the functionality described for one of the blocks may be spread across two or more integrated circuit packages. For example, the functionality described herein for the position determination circuit 192 may be split into separate execution circuitry or combined with a general purpose processor and/or a digital signal processor that executes instructions within the memory 180. Accordingly, the memory 180 can include data 181, 183, 184, general control instructions and the like that are executed by the instruction execution circuitry to carry out one or more of the embodiments described herein.
  • FIGS. 2A through 2C are diagrams illustrating an interactive display in accordance with some embodiments discussed herein. FIG. 2A is a top view of an interactive display 189 (FIG. 1) in accordance with some embodiments; FIG. 2B is an enlarged view as seen from the camera in accordance with some embodiments; and FIG. 2C is a cross section of the interactive display along the line A-A′ of FIG. 2A in accordance with some embodiments. Details with respect to some embodiments will now be discussed with respect to FIGS. 1 through 2C. As illustrated in FIG. 2A, a single camera 238 and two mirrors 228, 229 are attached to a frame 248. In these embodiments, the position determination circuit 192 may be configured to determine a position of an object 208, for example, a finger or stylus, with respect to the interactive display based on images obtained from the single camera 238 and the at least two mirrors 228, 229. In some embodiments, the camera 238 may have a field of view of about 90 degrees horizontally and from about 10 to about 15 degrees vertically. The mirrors 228 and 229 may have a cylindrical or spherical shape. The mirrors 228 and 229 may be shaped to increase their field of view; thus, a cylindrical or spherical shape may provide an increased viewing area.
  • Using a single camera 238 and two mirrors 228, 229 may be more cost effective than providing three cameras. The presence of the two mirrors 228, 229 allows the position of the object 208 to be triangulated. In other words, by using a camera 238 and two mirrors 228, 229, there will be three images that can be used to calculate the position of the object 208. For example, the three images may be triangulated to calculate the position of the object 208 with respect to the interactive display 189. If one of the two mirrors 228, 229 is obscured by, for example, the object 208, the position of the object 208 can be determined based on the two remaining images from the other mirror 228 or 229 and the camera 238. Use of two images may allow calculation of the position and size of the object 208. Use of three images may allow further calculation of additional objects 208 or a more accurate size of the object 208.
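  • The patent does not give formulas for this triangulation, but one standard way to realize it is to treat each mirror view as coming from a "virtual camera" obtained by reflecting the real camera across the mirror line, after which any two of the three views can be intersected like ordinary camera rays (see the ray-intersection sketch further below). A minimal 2-D sketch of the reflection step, with all coordinates hypothetical:

```python
import numpy as np

def virtual_camera(camera_xy, mirror_a, mirror_b):
    """Reflect the camera position across the mirror line through
    mirror_a and mirror_b. The mirror's view of the object is then
    equivalent to a direct view from this virtual camera, giving three
    usable viewpoints from one physical camera plus two mirrors."""
    p, a, b = (np.asarray(v, dtype=float) for v in (camera_xy, mirror_a, mirror_b))
    d = (b - a) / np.linalg.norm(b - a)      # unit vector along the mirror
    foot = a + np.dot(p - a, d) * d          # foot of perpendicular from camera
    return 2.0 * foot - p                    # mirror image of the camera

# Example: camera in one corner, mirror along the opposite frame edge.
print(virtual_camera((0.0, 0.0), (0.0, 50.0), (100.0, 50.0)))  # -> [0. 100.]
```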
  • In some embodiments, the position determination circuit 192 is configured to capture and store (181) a background image of the interactive display 189 using the single camera 238 and the two mirrors 228, 229 before a user interacts with the interactive display 189. Thus, the stored image can be subtracted to obtain information related to the object 208. In some embodiments, capturing and storing the background image before the user interacts with the interactive display 189 may be adaptable to compensate for situations such as a dirty display, i.e., the images of the dirt on the screen will not be considered indicative of where the object 208 is relative to the interactive display 189.
  • In some embodiments, the image background calculation inside the frame may involve capturing the image inside the frame and storing the same. This can be adaptive and may be used to filter out anomalies, such as dirt on the frame. Outside the frame, the image may be captured and saved. The position determination module 192 may be configured to continuously learn new backgrounds by not using foreground objects in the background image. Examples of this can be found in, for example, the Open Computer Vision Library. Background calculations may be performed in a similar manner for the embodiments discussed below with respect to FIGS. 3A through 6D.
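  • Since the paragraph above points to the Open Computer Vision Library, a minimal, non-authoritative sketch of such continuous background learning using OpenCV's MOG2 subtractor follows; the parameter values are illustrative assumptions, not taken from the patent:

```python
import cv2

# Adaptive background model: static anomalies (e.g. dirt on the frame)
# are gradually absorbed into the background, while moving foreground
# objects are not used to update it.
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500,        # frames over which the background adapts
    varThreshold=16,    # sensitivity of foreground detection
    detectShadows=False)

def foreground_mask(frame_bgr):
    """Return a binary mask of foreground pixels for one camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # learningRate=-1 lets OpenCV pick the update rate automatically,
    # so new static backgrounds are learned over time.
    return subtractor.apply(gray, learningRate=-1)
```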
  • The position determination module 192 may then be configured to obtain a plurality of images using the single camera 238 and the two mirrors 228, 229. In some embodiments, the camera 238 may be sampled for images at about 100 frames per second. If power is an issue, the sampling rate may be reduced to save power. The stored background image may be subtracted from each of the obtained plurality of images to provide a plurality of subtracted images. Once the plurality of subtracted images are obtained, a position of the object 208 on the interactive display 189 may be calculated based on the plurality of subtracted images.
  • In some embodiments, the difference between the obtained image and the background may be determined by subtracting the background image from the obtained image, using a typical grayscale intensity value. A large difference value likely indicates a foreground object; where pixels are similar to the background, the difference value will typically be near zero. Some noise may be present due to, for example, reflections caused by sunlight, but when the object 208 is present the difference between the obtained image and the background image will be significant. In some embodiments, a low pass filter may be used to remove noise, such as that caused by sunlight. In embodiments where ambient light causes a linear offset on the values, it may be possible to estimate that offset from the difference image and compensate for it. Differences between images may be calculated similarly in the embodiments discussed below with respect to FIGS. 3A through 6D.
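  • A minimal numerical sketch of the subtraction and low-pass filtering just described, assuming 8-bit grayscale frames and collapsing the difference to an intensity-versus-column profile (the kernel size and axis convention are assumptions of this sketch):

```python
import numpy as np

def difference_profile(frame, background, kernel=5):
    """Absolute grayscale difference between a captured frame and the
    stored background, reduced to one intensity value per image column
    and low-pass filtered to suppress noise such as sunlight glints."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    profile = diff.mean(axis=0)             # intensity versus x (columns)
    box = np.ones(kernel) / kernel          # moving-average low-pass filter
    # If ambient light adds a roughly constant offset, it could be
    # estimated (e.g. as the profile's median) and subtracted here.
    return np.convolve(profile, box, mode="same")
```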
  • In particular, the position determination module 192 may be further configured to calculate the position of the object 208 by calculating first and second angles for each of the plurality of subtracted images. The first and second angles may correspond to a start position and a stop position of the object 208.
  • Once it is detected that the object 208 is touching the surface of the interactive display 189 as illustrated in FIGS. 2B and 2C, coordinates of the object 208 with respect to the interactive display 189 may be calculated based on the calculated first and second angles for each of the plurality of subtracted images. In some embodiments, an absolute value of intensity versus X value will be obtained in one dimension. Then, the same will be obtained in the second dimension to calculate the distance that the object 208 is from the display. The left angle/position will typically be where the intensity value changes significantly from near zero to a positive value. A positive derivative may be obtained on the left angle, and a decision may be based on a predetermined threshold. The right angle/position will typically be where the intensity value changes from a positive value to near zero; a negative derivative may be obtained and a decision may be determined based on the result.
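  • The left/right decision described above can be sketched as a derivative-threshold search on the 1-D profile, followed by a pixel-to-angle mapping. Both the threshold value and the linear mapping below are assumptions, chosen only to be consistent with the roughly 90-degree horizontal field of view described earlier:

```python
import numpy as np

def object_edges(profile, threshold):
    """Find the start (left) and stop (right) columns of the object in a
    1-D subtracted-intensity profile: the left edge shows a strong
    positive derivative, the right edge a strong negative one."""
    d = np.diff(profile)
    rising = np.where(d > threshold)[0]
    falling = np.where(d < -threshold)[0]
    if rising.size == 0 or falling.size == 0:
        return None                      # no object present
    return rising[0], falling[-1] + 1

def column_to_angle(column, image_width, horizontal_fov_deg=90.0):
    """Map a pixel column to a viewing angle, assuming for this sketch a
    linear pixel-to-angle relationship across the field of view."""
    return column / float(image_width - 1) * horizontal_fov_deg
```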
  • FIG. 2C further illustrates a display glass 258 and a cross section of the frame 248 along the line A-A′ of FIG. 2A. In some embodiments, objects 208 situated above the frame 248 may be detected. In these embodiments, the camera 238 may have a wider vertical viewing angle and may have spherical mirrors. In some embodiments, an infrared (IR) light may be used to enhance detection of human skin to provide more sensitive recognition.
  • FIG. 3A is a diagram illustrating a view from a second of two cameras in accordance with some embodiments. FIG. 3B is a top view of an interactive display in accordance with some embodiments. Details with respect to some embodiments will now be discussed with respect to FIGS. 1, 3A and 3B. As illustrated in FIG. 3B, in these embodiments, two cameras 338 and 339 are provided and there are no mirrors present. In these embodiments, the position determination circuit 192 is configured to determine a position of the object 308 with respect to the interactive display 389 based on images obtained from the at least two cameras 338 and 339. Use of two images may allow calculation of the position and size of the object 308. The cameras 338 and 339 may have a field of view of about 90 degrees horizontally and from about 10 to about 15 degrees vertically; for example, viewing angles 378 and 379 are illustrated in FIG. 3B. In particular, FIG. 3A illustrates the view from camera 339 of FIG. 3B with respect to a second frame edge 348″ and a first frame edge 348′.
  • In some embodiments, the position determination circuit 192 is configured to capture and store (181) a background image of the interactive display 189 using the two cameras 338, 339 before a user interacts with the interactive display 189. Thus, the stored image can be subtracted to obtain information related to the object 308. In some embodiments, capturing and storing the background image before the user interacts with the interactive display 189 may be adaptable to compensate for situations such as a dirty display, i.e., the images of the dirt on the screen will not be considered indicative of where the object 308 is relative to the interactive display 189.
  • The position determination module 192 may then be configured to obtain a plurality of images using the cameras 338 and 339. In some embodiments, the cameras 338 and 339 may be sampled for images at about 100 frames per second. If power is an issue, the sampling rate may be reduced to save power. The stored background image may be subtracted from each of the obtained plurality of images to provide a plurality of subtracted images. Once the plurality of subtracted images are obtained, a position of the object 308 on the interactive display 189 may be calculated based on the plurality of subtracted images.
  • In particular, in some embodiments, the position determination module 192 may be further configured to calculate the position of the object 308 by calculating first and second angles for each of the plurality of subtracted images. The first and second angles may correspond to a start position and a stop position of the object 308, for example, angles α1 and α2 corresponding to camera 339 of FIG. 3B and angles β1 and β2 corresponding to camera 338 of FIG. 3B. As further illustrated in FIG. 3B, angles α1 and α2 corresponding to camera 339 of FIG. 3B are calculated with respect to a first frame edge 348′ and angles β1 and β2 corresponding to camera 338 of FIG. 3B are calculated with respect to a third frame edge 348″″.
  • Once it is detected that the object 308 is touching the surface of the interactive display 389 as illustrated in FIGS. 3A and 3B, coordinates of the object 308 with respect to the interactive display 389 may be calculated based on the calculated first and second angles for each of the plurality of subtracted images.
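  • Given α1/α2 from one camera and β1/β2 from the other, the object center can be computed by intersecting one bearing ray per camera, for example the mid-angle of each start/stop pair. The following sketch assumes the cameras sit at two corners of the display and that the angles have been converted to the display's x-axis; these coordinate conventions are assumptions the patent leaves unspecified:

```python
import numpy as np

def intersect_rays(cam_a, theta_a, cam_b, theta_b):
    """Intersect two bearing rays cast from the corner cameras; each
    theta can be the midpoint of that camera's start/stop angles.
    Returns the (x, y) position of the object on the display plane."""
    pa, pb = np.asarray(cam_a, float), np.asarray(cam_b, float)
    da = np.array([np.cos(theta_a), np.sin(theta_a)])
    db = np.array([np.cos(theta_b), np.sin(theta_b)])
    # Solve pa + t*da = pb + s*db for the ray parameters t and s.
    t, s = np.linalg.solve(np.column_stack([da, -db]), pb - pa)
    return pa + t * da

# Example: cameras in two corners of a 100-unit-wide display; the object
# is seen at 40 degrees from one camera and at 145 degrees (35 degrees
# in from the opposite edge) from the other.
print(intersect_rays((0.0, 0.0), np.deg2rad(40.0),
                     (100.0, 0.0), np.deg2rad(145.0)))
```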
  • In some embodiments, objects 308 situated above the frame 348 may be detected. In these embodiments, the cameras 338 and 339 may have a wider vertical viewing angle and may have spherical mirrors. Embodiments illustrated in FIGS. 3A and 3B may provide a cheaper alternative to capacitive and resistive touch displays as there may not be a film or additional layer on top of the display glass.
  • FIG. 4A is a cross section of an interactive display illustrating detection of the object 408 above the interactive display 489 in accordance with some embodiments. FIG. 4B is a top view of the interactive display illustrating an object 408 outside of the display and in proximity to the display in accordance with some embodiments discussed herein. Details with respect to embodiments illustrated in FIGS. 4A and 4B will now be discussed with respect to FIGS. 1, 4A and 4B.
  • As illustrated in FIGS. 4A and 4B, in these embodiments, two cameras 438 and 439 are provided and there are no mirrors present. In these embodiments, the position determination circuit 192 is configured to determine a position of the object 408 with respect to the interactive display 489 based on images obtained from the at least two cameras 438 and 439. As illustrated in FIG. 4B, the cameras 438 and 439 are positioned in two of the four corners of the display 489. The cameras 438 and 439 may have a field of view of about 90 degrees horizontally and more than zero degrees vertically; for example, viewing angles 478 and 479 are illustrated in FIG. 4B. In particular, FIG. 4A is a cross section showing the cameras 438, 439, the viewing angles 478, 479 and the object 408.
  • In some embodiments, the position determination circuit 192 is configured to capture and store (181) a background image of the interactive display 489 using the two cameras 438, 439 before a user interacts with the interactive display 489. Thus, the stored image can be subtracted to obtain information related to the object 408. In some embodiments, capturing and storing the background image before the user interacts with the interactive display 489 may be adaptable to compensate for situations such as a dirty display, i.e., the images of the dirt on the screen will not be considered indicative of where the object 408 is relative to the interactive display 489.
  • The position determination module 192 may then be configured to obtain a plurality of images using the cameras 438 and 439. In some embodiments, the cameras 438 and 439 may be sampled for images at about 100 frames per second. If power is an issue, the sampling rate may be reduced to save power. The stored background image may be subtracted from each of the obtained plurality of images to provide a plurality of subtracted images. Once the plurality of subtracted images are obtained, a position of the object 408 on the interactive display 489 may be calculated based on the plurality of subtracted images.
  • In particular, in some embodiments, the position determination module 192 may be further configured to calculate the position of the object 408 by calculating first and second angles for each of the plurality of subtracted images. The first and second angles may correspond to a start position and a stop position of the object 408, for example, angles α1 and α2 corresponding to camera 439 of FIG. 4B and angles β1 and β2 corresponding to camera 438 of FIG. 4B.
  • Once the object 408′, 408″ is detected in proximity to the interactive display 489, the calculated first and second angles, angles α1 and α2 and angles β1 and β2, are compared. The position determination module 192 is then configured to determine an intersection point of the camera views as illustrated in FIG. 4B based on the comparison of the angles. If the intersection point is located on or above the display 489, the intersection point is considered a pointer 408 for use with the interactive display 489. Thus, according to embodiments illustrated in FIGS. 4A and 4B, the object 408 may be detected even if it is outside the display surface.
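  • The on-or-above-the-display decision itself can be sketched as a plan-view bounds check on the intersection point computed from the two camera views; the display dimensions below are illustrative assumptions:

```python
def is_pointer(point, display_width, display_height):
    """Treat the intersection of the two camera views as a pointer only
    if it falls within the display rectangle in plan view; otherwise
    the physical object is outside the display surface."""
    x, y = point
    return 0.0 <= x <= display_width and 0.0 <= y <= display_height
```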
  • Once it is detected that the object 408 is touching the surface of the interactive display 489, coordinates of the object 408 with respect to the interactive display 489 may be calculated based on the calculated first and second angles for each of the plurality of subtracted images. Embodiments illustrated in FIGS. 4A and 4B may provide a cheaper alternative to capacitive and resistive touch displays as there may not be a film or additional layer on top of the display glass.
  • FIG. 5A is a top view of a display surface of an interactive display having a reflective surface in accordance with some embodiments. FIG. 5B is a photograph of a user's finger contacting the reflective display in accordance with some embodiments discussed herein. FIG. 5C is a cross section of the portable electronic device along the line A-A′ in accordance with some embodiments. Details with respect to embodiments illustrated in FIGS. 5A through 5C will now be discussed with respect to FIGS. 1 and 5A-5C.
  • As illustrated in FIGS. 5A and 5C, in these embodiments, a single camera 538 is provided and there are no mirrors present. In these embodiments, the position determination circuit 192 is configured to determine a position of the object 508 with respect to the interactive display 589 based on images obtained from the camera 538 and a reflection of the object 508 in the reflective surface 558 of the interactive display 589 as viewed by the single camera 538. As illustrated in FIGS. 5A and 5C, the camera 538 is positioned in one of the four corners of the display 589. The camera 538 may have a field of view of about 90 degrees horizontally and less than about 10 to 15 degrees vertically. In some embodiments, multiple cameras may be provided to allow for multi-touch implementation.
  • In some embodiments, the position determination circuit 192 is configured to capture and store (181) a background image of the interactive display 589 using the camera 538 and the reflection as viewed from the camera 538 before a user interacts with the interactive display 589. Thus, the stored image can be subtracted to obtain information related to the object 508. In some embodiments, capturing and storing the background image before the user interacts with the interactive display 589 may be adaptable to compensate for situations such as a dirty display, i.e., the images of the dirt on the screen will not be considered indicative of where the object 508 is relative to the interactive display 589.
  • The position determination module 192 may then be configured to obtain a plurality of images using the camera 538 and the reflective surface 558 of the display. In some embodiments, the camera 538 may be sampled for images at about 100 frames per second. If power is an issue, the sampling rate may be reduced to save power. The position determination module 192 is configured to perform a computer vision calculation to separate the object of interest 508 from the stored background image. Then, the object of interest 508 may be correlated with the mirror image of the same object of interest 508 in the reflective display 558 to identify the corresponding object. This may be useful if there is more than one object. The position determination module 192 can detect a “touch” by the object of interest 508 when the closest distance D1 (FIG. 5B) between the object and the mirror image is about zero.
  • The image illustrated in FIG. 5B is the view from the camera 538. This image is used to calculate the distance of the object 508 from the left edge of the display, for example, in pixels. The resulting distance may be used to calculate a horizontal angle. The number of pixels in the vertical direction can be used to calculate the distance of the object 508 from the camera 538. These calculated parameters can be used to calculate the position of the object of interest 508.
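  • A hypothetical sketch of the FIG. 5B computation: detect a touch when the pixel gap between the object tip and its mirror image is near zero, then map the tip's pixel coordinates to a horizontal angle and a distance proxy. The linear mappings and the touch tolerance are assumptions; the patent names the quantities but not the formulas:

```python
def touch_and_position(tip_px, reflection_px, image_size,
                       h_fov_deg=90.0, touch_eps_px=2):
    """From the camera's view of the object and its reflection in the
    display surface: 'touching' when the vertical pixel gap between tip
    and reflection is near zero (D1 ~ 0 in the text), horizontal pixel
    offset -> horizontal angle, vertical pixel position -> a proxy for
    distance from the camera. All mappings here are simplified."""
    (tx, ty), (rx, ry) = tip_px, reflection_px
    width, height = image_size
    touching = abs(ry - ty) <= touch_eps_px
    angle_deg = tx / float(width - 1) * h_fov_deg
    distance_proxy = ty / float(height - 1)
    return touching, angle_deg, distance_proxy

# Example: tip and reflection two pixels apart in a 320x240 view.
print(touch_and_position((120, 40), (120, 42), (320, 240)))
```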
  • FIG. 6A is a cross section of an interactive display in accordance with some embodiments. FIG. 6B is a top view of an interactive display in accordance with some embodiments. FIG. 6C is a cross section of an interactive display in accordance with some embodiments. FIG. 6D is a cross section of an interactive display in accordance with some embodiments. Details with respect to embodiments illustrated in FIGS. 6A through 6D will now be discussed with respect to FIGS. 1 and 6A-6D.
  • As illustrated in FIGS. 6A through 6D, in these embodiments, a single camera 638 is provided inside a housing of the device and there are no mirrors present. In these embodiments, the position determination circuit 192 is configured to determine a position of the object 608 with respect to the interactive display 689 based on images obtained from the camera 638 positioned inside the housing of the device. As illustrated in FIGS. 6A-6D, the camera 638 is positioned in one of the four corners of the display 689.
  • In some embodiments, the position determination circuit 192 is configured to obtain an image of the object 608 using the single camera 638 positioned inside the housing of the electronic device. The obtained image can be used to calculate a start angle α1 and a stop angle α2 (FIG. 6B) of the image based on the position of the object with respect to the interactive display 689. The obtained image can also be used to calculate frame angles between two known edges of the frame and the object 608 with respect to the interactive display 689. Thus, the distance between the object 608 and the camera 638 can be calculated. Using the calculated start angle α1 and stop angle α2, the frame angles and the calculated distance between the object 608 and the camera 638, the position and size of the object 608 can be determined.
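  • One way to realize the FIG. 6 distance and size calculation, assuming the camera's height above the display plane is known from the housing geometry: the vertical angle down to the contact point gives the along-display distance, and the angular width α2 − α1 then gives an approximate object size. These formulas are an illustrative reconstruction, not taken from the patent:

```python
import math

def object_distance_and_size(cam_height, vert_angle, alpha1, alpha2):
    """Estimate the along-display distance from the camera height and
    the vertical angle to the contact point, then the object size from
    the start/stop angular width (small-angle approximation)."""
    distance = cam_height / math.tan(vert_angle)   # along the display surface
    size = distance * abs(alpha2 - alpha1)         # arc length ~ r * dtheta
    return distance, size

# Example: camera 5 mm above the glass, contact seen 3 degrees below
# horizontal, object spanning 10 degrees of the horizontal view.
print(object_distance_and_size(5.0, math.radians(3), 0.0, math.radians(10)))
```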
  • According to embodiments illustrated in FIGS. 6A through 6D, an object 608 can be detected on and above the display, with a viewing angle extending in and above the display glass 648; for example, viewing angles 678, 698 and 698 are illustrated in FIGS. 6A, 6C and 6D, respectively. Accordingly, it may be possible to detect the object 608 in the x, y and z directions and before it makes contact with the display. In some embodiments, IR light may be used to enhance the detection of human skin in embodiments where a human finger is used as the object 608.
  • It will be understood that in embodiments where the frame is not present and the background image changes significantly, for example, when the device is moving, it is important to compute a good prediction of the background so that the foreground can be determined. The foreground and background are then used to determine the position of the object.
  • Referring now to the flowcharts of FIGS. 7 through 12, various methods of controlling an interactive display of an electronic device will be discussed. As illustrated in FIG. 1, the electronic device includes a housing; an interactive display connected to the housing; a frame associated with the interactive display; and at least one camera coupled to the interactive display and frame. Referring first to FIG. 7, operations begin at block 700 by determining a position of an object in proximity to the interactive display based on images captured by the at least one camera. As used herein, “at least one camera” refers to one or more cameras as well as mirrors, reflective displays and the like that may be used in combination with the at least one camera.
  • In some embodiments including a single camera, a reflective surface of the display may be used in addition to the camera. In these embodiments, a position of the object with respect to the interactive display may be determined based on images obtained from the single camera and a reflection of the object in the reflective surface of the interactive display as viewed by the single camera.
  • Referring now to FIG. 8, methods of controlling an interactive display including a single camera and at least two mirrors attached to the frame will be discussed. Using the single camera and at least two mirrors, a position of the object with respect to the interactive display may be determined based on images obtained from the single camera and the at least two mirrors. As illustrated in FIG. 8, operations begin at block 805 by capturing and storing a background image of the interactive display using the single camera before a user interacts with the interactive display. A plurality of images are obtained using the single camera and the at least two mirrors (block 815). The stored background image is subtracted from each of the obtained plurality of images to provide a plurality of subtracted images (block 825). The position of the object on the interactive display is calculated based on the plurality of subtracted images (block 835).
  • Referring now to FIG. 9, details with respect to calculating the position of the object of block 835 will be discussed. As illustrated in FIG. 9, operations begin at block 937 by calculating first and second angles for each of the plurality of subtracted images. The first angle corresponds to a start position of the object and the second angle corresponds to a stop position of the object. Coordinates of the object are calculated with respect to the interactive display based on the calculated first and second angles for each of the plurality of subtracted images (block 939).
  • Referring now to FIG. 10, methods for controlling an interactive display including two cameras attached to the frame will be discussed. A position of the object may be determined with respect to the interactive display based on images obtained from the at least two cameras. As illustrated in FIG. 10, operations begin at block 1006 by capturing and storing a background image of the interactive display using the two cameras before a user interacts with the interactive display. A plurality of images are obtained using the two cameras (block 1016). The stored background image is subtracted from each of the obtained plurality of images to provide a plurality of subtracted images (block 1026). The position of the object is calculated with respect to the interactive display based on the plurality of subtracted images (block 1036).
  • Referring now to FIG. 11, methods for controlling an interactive display including two cameras attached to the frame will be discussed. As illustrated in FIG. 11, operations begin at block 1146 by obtaining a first image using a first of the two cameras and calculating first and second angles based on the obtained first image and the position of the object with respect to the interactive display. A second image is obtained using a second of the two cameras and third and fourth angles are calculated based on the obtained second image and the position of the object with respect to the interactive display (block 1156). The first and second calculated angles of the first obtained image are compared to the third and fourth angles of the second obtained image to determine an intersection point (block 1166). It is determined if the intersection point is located on or above the interactive display (block 1176). Contact of the object on the interactive display is detected (block 1186). Coordinates of the object on the interactive display are calculated based on the obtained first and second images, the calculated first through fourth angles and the determined intersection point (block 1196).
  • Referring now to FIG. 12, methods for controlling an interactive display including a single camera situated inside the housing of the electronic device will be discussed. A position of the object on the interactive display may be determined based on images obtained from the single camera positioned inside the housing of the electronic device. Operations for determining a position begin at block 1207 by obtaining an image of the object using the single camera positioned inside the housing of the electronic device. A start angle and a stop angle of the image are calculated based on the position of the object with respect to the interactive display (block 1217). Frame angles between two known edges of the frame and the object are calculated with respect to the interactive display (block 1227). A distance between the object on the interactive display and the camera is calculated using the calculated start and stop angles and frame angles (block 1237). The position and size of the object on the interactive display may be calculated based on the calculated distance, start and stop angles and frame angles (block 1247).
  • Some embodiments discussed above may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.). Consequently, as used herein, the term “signal” may take the form of a continuous waveform and/or discrete value(s), such as digital value(s) in a memory or register. Furthermore, various embodiments may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system. Accordingly, as used herein, the terms “circuit” and “controller” may take the form of digital circuitry, such as computer-readable program code executed by an instruction processing device(s) (e.g., general purpose microprocessor and/or digital signal processor), and/or analog circuitry.
  • Embodiments are described above with reference to block diagrams and operational flow charts. It is to be understood that the functions/acts noted in the blocks may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction to the depicted arrows.
  • Although various embodiments of the present invention are described in the context of portable electronic devices for purposes of illustration and explanation only, the present invention is not limited thereto. It is to be understood that the present invention can be more broadly used in any sort of electronic device having an interactive display in accordance with some embodiments discussed herein.
  • In the drawings and specification, there have been disclosed exemplary embodiments of the invention. However, many variations and modifications can be made to these embodiments without substantially departing from the principles of the present invention. Accordingly, although specific terms are used, they are used in a generic and descriptive sense only and not for purposes of limitation, the scope of the invention being defined by the following claims.

Claims (20)

1. An electronic device comprising:
a housing;
an interactive display connected to the housing;
a frame associated with the interactive display;
at least one camera coupled to the interactive display and frame; and
a position determination circuit coupled to the camera and the interactive display, the position determination circuit configured to determine a position of an object in proximity to the interactive display based on images captured by the at least one camera.
2. The electronic device of claim 1, wherein the at least one camera comprises a single camera, the electronic device further comprising:
at least two mirrors attached to the frame, the position determination circuit being further configured to determine a position of the object with respect to the interactive display based on images obtained from the single camera and the at least two mirrors.
3. The electronic device of claim 2, wherein the position determination circuit is further configured to:
capture and store a background image of the interactive display using the single camera before a user interacts with the interactive display;
obtain a plurality of images using the single camera and the at least two mirrors;
subtract the stored background image from each of the obtained plurality of images to provide a plurality of subtracted images; and
calculate the position of the object on the interactive display based on the plurality of subtracted images.
4. The electronic device of claim 3, wherein the position determination circuit is configured to calculate the position of the object by:
calculating first and second angles for each of the plurality of subtracted images, the first angle corresponding to a start position of the object and the second angle corresponding to a stop position of the object; and
calculating coordinates of the object with respect to the interactive display based on the calculated first and second angles for each of the plurality of subtracted images.
5. The electronic device of claim 1, wherein the at least one camera comprises two cameras attached to the frame, the position determination circuit being further configured to determine a position of the object with respect to the interactive display based on images obtained from the at least two cameras.
6. The electronic device of claim 5, wherein the position determination circuit is further configured to:
capture and store a background image of the interactive display using the two cameras before a user interacts with the interactive display;
obtain a plurality of images with two cameras;
subtract the stored background image from each of the obtained plurality of images to provide a plurality of subtracted images; and
calculate the position of the object with respect to the interactive display based on the plurality of subtracted images.
7. The electronic device of claim 5, wherein the position determination circuit is further configured to:
obtain a first image using a first of the two cameras and calculate first and second angles based on the obtained first image and the position of the object with respect to the interactive display;
obtain a second image using a second of the two cameras and calculate third and fourth angles based on the obtained second image and the position of the object with respect to the interactive display;
compare the first and second calculated angles of the first obtained image to the third and fourth angles of the second obtained image to determine an intersection point; and
determine if the intersection point is located on or above the interactive display.
8. The electronic device of claim 7, wherein the position determination circuit is further configured to:
detect contact of the object on the interactive display; and
calculate coordinates of the object on the interactive display based on the obtained first and second images, the calculated first through fourth angles and the determined intersection point.
9. The electronic device of claim 1, wherein the at least one camera comprises a single camera and wherein the interactive display has a reflective surface, the position determination circuit being further configured to determine a position of the object with respect to the interactive display based on images obtained from the single camera and a reflection of the object in the reflective surface of the interactive display as viewed by the single camera.
10. The electronic device of claim 1, wherein the at least one camera comprises a single camera positioned inside the housing of the electronic device, the position determination circuit being further configured to determine a position of the object on the interactive display based on images obtained from the single camera positioned inside the housing of the electronic device.
11. The electronic device of claim 10, wherein the position determination circuit is configured to:
obtain an image of the object using the single camera positioned inside the housing of the electronic device;
calculate a start angle and a stop angle of the image based on the position of the object with respect to the interactive display;
calculate frame angles between two known edges of the frame and the object with respect to the interactive display;
calculate a distance between the object on the interactive display and the camera using the calculated start and stop angles and frame angles; and
calculate the position and size of the object on the interactive display based on the calculated distance, start and stop angles and frame angles.
12. A method of controlling an interactive display of an electronic device, the electronic device including a housing; an interactive display connected to the housing; a frame associated with the interactive display; and at least one camera coupled to the interactive display and frame, the method comprising:
determining a position of an object in proximity to the interactive display based on images captured by the at least one camera.
13. The method of claim 12, wherein the at least one camera comprises a single camera and wherein the electronic device further comprises at least two mirrors attached to the frame, the method further comprising:
determining a position of the object with respect to the interactive display based on images obtained from the single camera and the at least two mirrors.
14. The method of claim 13 further comprising:
capturing and storing a background image of the interactive display using the single camera before a user interacts with the interactive display;
obtaining a plurality of images using the single camera and the at least two mirrors;
subtracting the stored background image from each of the obtained plurality of images to provide a plurality of subtracted images; and
calculating the position of the object on the interactive display based on the plurality of subtracted images, wherein calculating the position of the object comprises:
calculating first and second angles for each of the plurality of subtracted images, the first angle corresponding to a start position of the object and the second angle corresponding to a stop position of the object; and
calculating coordinates of the object with respect to the interactive display based on the calculated first and second angles for each of the plurality of subtracted images.
15. The method of claim 12, wherein the at least one camera comprises two cameras attached to the frame, the method further comprising determining a position of the object with respect to the interactive display based on images obtained from the at least two cameras.
16. The method of claim 15 further comprising:
capturing and storing a background image of the interactive display using the two cameras before a user interacts with the interactive display;
obtaining a plurality of images using the two cameras;
subtracting the stored background image from each of the obtained plurality of images to provide a plurality of subtracted images; and
calculating the position of the object with respect to the interactive display based on the plurality of subtracted images.
17. The method of claim 15 further comprising:
obtaining a first image using a first of the two cameras and calculating first and second angles based on the obtained first image and the position of the object with respect to the interactive display;
obtaining a second image using a second of the two cameras and calculating third and fourth angles based on the obtained second image and the position of the object with respect to the interactive display;
comparing the first and second calculated angles of the first obtained image to the third and fourth angles of the second obtained image to determine an intersection point;
determining if the intersection point is located on or above the interactive display;
detecting contact of the object on the interactive display; and
calculating coordinates of the object on the interactive display based on the obtained first and second images, the calculated first through fourth angles and the determined intersection point.
18. The method of claim 12, wherein the at least one camera comprises a single camera and wherein the interactive display has a reflective surface, the method further comprising:
determining a position of the object with respect to the interactive display based on images obtained from the single camera and a reflection of the object in the reflective surface of the interactive display as viewed by the single camera.
19. The method of claim 12, wherein the at least one camera comprises a single camera positioned inside the housing of the electronic device, the method further comprising:
determining a position of the object on the interactive display based on images obtained from the single camera positioned inside the housing of the electronic device, wherein determining a position comprises:
obtaining an image of the object using the single camera positioned inside the housing of the electronic device;
calculating a start angle and a stop angle of the image based on the position of the object with respect to the interactive display;
calculating frame angles between two known edges of the frame and the object with respect to the interactive display;
calculating a distance between the object on the interactive display and the camera using the calculated start and stop angles and frame angles; and
calculating the position and size of the object on the interactive display based on the calculated distance, start and stop angles and frame angles.
20. A computer program product for controlling an interactive display of an electronic device, the electronic device including a housing; an interactive display connected to the housing; a frame associated with the interactive display; and at least one camera coupled to the interactive display and frame, the computer program product comprising:
a computer-readable storage medium having computer-readable program code embodied in said medium, said computer-readable program code comprising:
computer-readable program code configured to determine a position of an object in proximity to the interactive display based on images captured by the at least one camera.

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/825,545 US20110285669A1 (en) 2010-05-21 2010-06-29 Electronic Devices Including Interactive Displays Implemented Using Cameras and Related Methods and Computer Program Products
EP11166660.8A EP2402844B1 (en) 2010-06-29 2011-05-19 Electronic devices including interactive displays and related methods and computer program products

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US34700810P 2010-05-21 2010-05-21
US12/825,545 US20110285669A1 (en) 2010-05-21 2010-06-29 Electronic Devices Including Interactive Displays Implemented Using Cameras and Related Methods and Computer Program Products

Publications (1)

Publication Number Publication Date
US20110285669A1 true US20110285669A1 (en) 2011-11-24

Family

ID=44483927

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/825,545 Abandoned US20110285669A1 (en) 2010-05-21 2010-06-29 Electronic Devices Including Interactive Displays Implemented Using Cameras and Related Methods and Computer Program Products

Country Status (2)

Country Link
US (1) US20110285669A1 (en)
EP (1) EP2402844B1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5484966A (en) * 1993-12-07 1996-01-16 AT&T Corp. Sensing stylus position using single 1-D image sensor
US7274356B2 (en) * 2003-10-09 2007-09-25 Smart Technologies Inc. Apparatus for determining the location of a pointer within a region of interest
WO2009152715A1 (en) * 2008-06-18 2009-12-23 Beijing Irtouch Systems Co., Ltd. (北京汇冠新技术有限公司) Sensing apparatus for touch checking
US8305363B2 (en) * 2008-10-10 2012-11-06 Pixart Imaging Inc. Sensing system and locating method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6803906B1 (en) * 2000-07-05 2004-10-12 Smart Technologies, Inc. Passive touch system and method of detecting user input
US20050190162A1 (en) * 2003-02-14 2005-09-01 Next Holdings, Limited Touch screen signal processing
US20090146972A1 (en) * 2004-05-05 2009-06-11 Smart Technologies Ulc Apparatus and method for detecting a pointer relative to a touch surface
US7538759B2 (en) * 2004-05-07 2009-05-26 Next Holdings Limited Touch panel display system with illumination and detection provided from a single edge
US20060289760A1 (en) * 2005-06-28 2006-12-28 Microsoft Corporation Using same optics to image, illuminate, and project

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120019646A1 (en) * 2009-10-30 2012-01-26 Fred Charles Thomas Video display systems
US8964018B2 (en) * 2009-10-30 2015-02-24 Hewlett-Packard Development Company, L.P. Video display systems
GB2500006A (en) * 2012-03-06 2013-09-11 Teknologian Tutkimuskeskus Vtt Oy Optical touch screen using cameras in the frame
US11256410B2 (en) 2014-01-22 2022-02-22 Lenovo (Singapore) Pte. Ltd. Automatic launch and data fill of application
US20150268798A1 (en) * 2014-03-20 2015-09-24 National Chiao Tung University Touch display apparatus and touch sensing method
US10817124B2 (en) * 2014-06-03 2020-10-27 Lenovo (Singapore) Pte. Ltd. Presenting user interface on a first device based on detection of a second device within a proximity to the first device
CN105138247A (en) * 2014-06-03 2015-12-09 Lenovo (Singapore) Pte. Ltd. Presenting user interface on a first device based on detection of a second device within a proximity to the first device
US20150346946A1 (en) * 2014-06-03 2015-12-03 Lenovo (Singapore) Pte. Ltd. Presenting user interface on a first device based on detection of a second device within a proximity to the first device
US20170147142A1 (en) * 2015-11-20 2017-05-25 International Business Machines Corporation Dynamic image compensation for pre-touch localization on a reflective surface
US9823782B2 (en) * 2015-11-20 2017-11-21 International Business Machines Corporation Pre-touch localization on a reflective surface
US10606468B2 (en) * 2015-11-20 2020-03-31 International Business Machines Corporation Dynamic image compensation for pre-touch localization on a reflective surface
US20220365674A1 (en) * 2019-08-13 2022-11-17 B. Braun Avitum Ag Interface for a medical device with an adaptive actuation sensor
US11861164B2 (en) * 2019-08-13 2024-01-02 B. Braun Avitum Ag Interface for a medical device with an adaptive actuation sensor

Also Published As

Publication number Publication date
EP2402844B1 (en) 2019-02-27
EP2402844A1 (en) 2012-01-04

Similar Documents

Publication Publication Date Title
EP2402844B1 (en) Electronic devices including interactive displays and related methods and computer program products
US20210181536A1 (en) Eyewear device with finger activated touch sensor
TWI599922B (en) Electronic device with a user interface that has more than two degrees of freedom, the user interface comprising a touch-sensitive surface and contact-free detection means
EP2711825B1 (en) System for providing a user interface for use by portable and other devices
EP2795450B1 (en) User gesture recognition
US10261630B2 (en) Input device, input support method, and program
US20190384450A1 (en) Touch gesture detection on a surface with movable artifacts
US9201521B2 (en) Storing trace information
US20130050133A1 (en) Method and apparatus for precluding operations associated with accidental touch inputs
KR102496531B1 (en) Method for providing fingerprint recognition, electronic apparatus and storage medium
US20150268789A1 (en) Method for preventing accidentally triggering edge swipe gesture and gesture triggering
KR101208783B1 (en) Wireless communication device and split touch sensitive user input surface
US9575604B2 (en) Operation input device, operation input method, and program
CN109857306B (en) Screen capturing method and terminal equipment
KR20160132994A (en) Conductive trace routing for display and bezel sensors
EP2341418A1 (en) Device and method of control
US20150268743A1 (en) Device and method for controlling a display panel
US10345912B2 (en) Control method, control device, display device and electronic device
KR20120085392A (en) Terminal having touch-screen and method for identifying touch event thereof
US9122337B2 (en) Information processing terminal, and method for controlling same
US9141224B1 (en) Shielding capacitive touch display
US8952934B2 (en) Optical touch systems and methods for determining positions of objects using the same
CN107943406B (en) Touch point determining method of touch screen and terminal
US9836082B2 (en) Wearable electronic apparatus
US9235338B1 (en) Pan and zoom gesture detection in a multiple touch display

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONY ERICSSON MOBILE COMMUNICATIONS AB, SWEDEN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LASSESSON, KRISTIAN;SASSI, JARI;REEL/FRAME:024607/0648

Effective date: 20100622

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION