US20120218456A1 - Auto-focus tracking - Google Patents

Auto-focus tracking

Info

Publication number
US20120218456A1
Authority
US
United States
Prior art keywords
auto
focus
window
natural feature
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US13/034,577
Other versions
US9077890B2
Inventor
Charles Wheeler Sweet, III
Serafin Diaz Spindola
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qualcomm Inc
Original Assignee
Qualcomm Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Inc filed Critical Qualcomm Inc
Priority to US13/034,577 (US9077890B2)
Assigned to QUALCOMM INCORPORATED. Assignors: DIAZ SPINDOLA, SERAFIN; SWEET, CHARLES WHEELER, III
Priority to CN201280019909.0A (CN103535021B)
Priority to PCT/US2012/026655 (WO2012116347A1)
Priority to JP2013555625A (JP6327858B2)
Priority to EP12716780.7A (EP2679001A1)
Priority to KR1020137025173A (KR101517315B1)
Publication of US20120218456A1
Publication of US9077890B2
Application granted
Legal status: Active
Adjusted expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/675 Focus control based on electronic image sensor signals comprising setting of focusing regions
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N 23/635 Region indicators; Field of view indicators

Definitions

  • As used herein, a mobile device 100, sometimes referred to as a mobile station (MS) or user equipment (UE), may be a cellular phone, mobile phone or other wireless communication device, personal communication system (PCS) device, personal navigation device (PND), Personal Information Manager (PIM), Personal Digital Assistant (PDA), laptop or other suitable mobile device which is capable of receiving wireless communication and/or navigation signals.
  • the term “mobile station” is also intended to include devices which communicate with a personal navigation device (PND), such as by short-range wireless, infrared, wireline connection, or other connection—regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device or at the PND.
  • mobile station 100 is intended to include all devices, including wireless communication devices, computers, laptops, etc. which are capable of communication with a server, such as via the Internet, WiFi, or other network, and regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device, at a server, or at another device associated with the network. Any operable combination of the above is also considered a “mobile station.”
  • a mobile device 100 in accordance with the present invention allows communication between the auto-focus engine 300 and the natural feature processing engine 110 , as described below.
  • the mobile device 100 contains memory, one or more processors, which function as a natural feature processing engine 110 and an auto-focus engine 300, and a user interface, such as a display, speaker, touch screen and/or buttons.
  • the natural feature processing engine 110, also referred to as computer vision-based recognition and tracking, includes a natural feature detection module 120 and a natural feature tracking module 125.
  • FIG. 2 shows known states within a natural feature processing engine 110 detecting and tracking natural features.
  • the processor searches for new or undetected natural features in the natural feature detection module 120. Once a natural feature is detected, the processor follows the detected natural feature with the natural feature tracking module 125. Once the natural feature can no longer be followed (e.g., the natural feature is no longer in the image or is no longer distinguishable), the natural feature is declared lost.
  • FIG. 3 illustrates an image containing a building and a tree 210 with features to be tracked.
  • the image 200 may undergo various processing, including, for example, corner, line or edge detection.
  • the image 200 shows a tree 210 next to a building with a side 220 and a window 230 .
  • the image 200 may undergo corner detection.
  • FIG. 4 illustrates natural features overlying the image. In this case, various corners are detected as natural features ( 240 , 250 , 260 ) in the image 200 .
  • FIG. 5 illustrates locations of various natural features.
  • the processor attempts to track the natural features (240, 250, 260) by matching each natural feature to a new location. Matching may be performed with various criteria that result in some measure of similarity to the natural feature. For example, the processor may use correlation (e.g., normalized cross correlation) to match the natural feature to its new location.
  • the processor may correlate pixels within a grid around each natural feature in a first image to pixels in the general grid location in a second image.
  • the natural feature tracking module 125 identifies an 8-by-8 grid of pixels at a particular location on a first image.
  • the area defined by the pixel dimension and location may be referred to as a natural feature detection window.
  • a natural feature detection window is substantially smaller than an auto-focus window, where the natural feature detection window encompasses fewer than 200 pixels and the auto-focus window encompasses more than 200 pixels.
  • Processing speed directly correlates with the area the natural feature detection window covers; smaller windows, each covering only a small area, can be processed more quickly.
  • Other pixel dimensions are also possible for the natural feature detection window.
  • tracking may use other square or non-square fixed-dimension grid sizes (e.g., 4×4, 10×10 or 16×16) or variable-dimension grid sizes (e.g., where the size depends on characteristics of the natural feature). Tracking will examine the same location defined by the 8-by-8 grid in a second image. If the correlation result is high, no movement has occurred between images and the natural feature is expected to be at the same pixel location in the second image.
  • If relative motion has occurred, however, the natural features will appear to have moved from the first image to the second image, as shown in the following figure. In this case, a high correlation result will occur at the new location in the second image if the natural feature detection window encompasses the natural feature.
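  • The matching step described above can be sketched as follows: an 8-by-8 pixel window around the feature in the first image is compared, via normalized cross correlation, against candidate positions inside a bounded search area of the second image. This is a minimal NumPy-based illustration with assumed function names, not the implementation described in this disclosure.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross correlation between two equally sized pixel grids."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track_feature(prev_image, next_image, feature_xy, window=8, search_radius=16):
    """Find the best match for a window-by-window feature patch in a bounded search area.

    prev_image/next_image: 2-D grayscale arrays.
    feature_xy: (x, y) top-left corner of the feature's detection window in prev_image.
    """
    x, y = feature_xy
    template = prev_image[y:y + window, x:x + window]
    best_score, best_xy = -1.0, feature_xy
    h, w = next_image.shape
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            nx, ny = x + dx, y + dy
            if 0 <= nx <= w - window and 0 <= ny <= h - window:
                candidate = next_image[ny:ny + window, nx:nx + window]
                score = ncc(template, candidate)
                if score > best_score:
                    best_score, best_xy = score, (nx, ny)
    return best_xy, best_score  # a low score suggests the feature is lost
```

  • The smaller the search area that has to be examined per feature, the less processing the tracker spends; the auto-focus information discussed below is one way to justify a smaller search area.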
  • FIG. 6 illustrates changes in locations of various natural features between two images.
  • Between the first image and the second image, locations of natural features (e.g., locations 240, 250, 260) change to locations of the same natural features (e.g., locations 240′, 250′, 260′). The “next” location of each natural feature detection window most likely containing the natural feature is shown with a prime indicator. In this case, a large number, a majority or all of the natural features may appear to have moved down and to the left. Most likely, the camera has moved down and to the right but the actual objects pictured have not moved.
  • the natural feature tracking module 125 may limit processing used in searching across an otherwise larger correlation area. That is, according to some embodiments, each natural feature detection window may be smaller and still obtain a high correlation result within a similar or shorter time period.
  • FIG. 7 illustrates a change from a previous location of a natural feature to a next location of the same natural feature.
  • a location of natural feature 260 from a first or previous image is first detected then tracked.
  • the location of natural feature 260 ′ is then tracked to a second or next location in the second or next image.
  • the apparent movement may be caused by the natural feature actually moving from image to image and/or the camera moving and/or rotating.
  • a natural feature or a group of natural features often appear to move from a previous location on one image to a next location on the next image as described above.
  • Cameras in mobile devices 100 often contain an auto-focus engine 300, which fixes focus based on a detected object.
  • the auto-focus engine 300 may operate on a continuous analog image or may operate on a digital image to focus on an area of the image defined by an auto-focus window 310 . From image to image, the auto-focus window 310 may appear to move in the sequence of images. In this sense, the auto-focus engine 300 appears to track an object within the sequence of images.
  • a mobile device 100 integrates a camera's auto-focus engine 300 with natural feature processing engine 110 performing computer vision-based recognition and tracking.
  • the auto-focus engine 300 and a natural feature processing engine 110 are allowed to communicate information such as a position or change in position of auto-focus window 310 and/or natural features.
  • the auto-focus engine 300 may use information from the natural feature processing engine 110 to better position its auto-focus window 310 (i.e., a location of a box within the image).
  • the natural feature processing engine 110 may use information from the auto-focus engine 300 to better position correlation windows for finding a new position of a natural feature.
  • natural feature processing engine 110 disregards this information from the auto-focus engine 300 .
  • FIG. 8 shows an auto-focus window 310 within an image 200 .
  • an auto-focus engine 300 searches through an entire image to find an object or objects (e.g., one or more faces). The auto-focus engine 300 then displays the auto-focus window 310 around the found object and performs focusing on the found object. For a subsequent image, the auto-focus engine 300 searches the entire image area again for objects in the next image, and then updates the position of the auto-focus window 310 and refocuses the camera if necessary.
  • Such found objects may contain one or several natural features that the natural feature tracking module 125 is following.
  • the auto-focus engine 300 may advantageously use locations within an image as determined by the natural feature processing engine 110 to limit the search area from the entire image to an area in proximity to the detected and tracked natural features.
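  • As a rough sketch of that idea, the auto-focus engine could bound its object search to a box around the currently tracked natural feature locations instead of scanning the whole frame. The function name, margin parameter and box convention below are illustrative assumptions, not part of this disclosure.

```python
def limit_af_search_area(feature_locations, margin, image_size):
    """Bound the auto-focus object search to a region around tracked features.

    feature_locations: non-empty list of (x, y) natural feature locations from the tracker.
    margin: extra pixels allowed around the features.
    Returns (left, top, right, bottom) clamped to the image.
    """
    img_w, img_h = image_size
    xs = [x for x, _ in feature_locations]
    ys = [y for _, y in feature_locations]
    return (max(0, min(xs) - margin),
            max(0, min(ys) - margin),
            min(img_w, max(xs) + margin),
            min(img_h, max(ys) + margin))
```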
  • FIG. 9 shows a mobile device 100 containing a natural feature processing engine 110 and an auto-focus engine 300 communicating information, in accordance with some embodiments of the present invention.
  • Within the mobile device 100, the natural feature processing engine 110 and the auto-focus engine 300 are coupled, which allows them to communicate information in one direction or in both directions, as shown along line 405.
  • some embodiments allow the auto-focus engine 300 to send information to the natural feature processing engine 110 that indicates the current size and/or location of an auto-focus window within an image, as described below with reference to FIG. 10 .
  • some embodiments allow the auto-focus engine 300 to send information to the natural feature processing engine 110 that indicates a change in size and/or a change in location from previous auto-focus window to next auto-focus window, as described below with reference to FIGS. 11 , 12 and 13 .
  • some embodiments allow the natural feature processing engine 110 to send information to the auto-focus engine 300 that indicates a change from a previous location of natural feature and/or a natural feature detection window (e.g., 270 of FIG. 6 ) to a next location of natural feature and/or natural feature detection window (e.g., 270 ′), as described below with reference to FIG. 14 .
  • Embodiments include at least one or more of 410 , 420 and/or 430 as information communicated between the auto-focus engine 300 and the natural feature processing engine 110 .
  • some embodiments communicate only one of 410, 420 and 430: (1) a first embodiment communicates 410 but not 420 or 430; (2) a second embodiment communicates 420 but not 410 or 430; and (3) a third embodiment communicates 430 but not 410 or 420.
  • Additional examples communicate two of 410, 420 and 430: (4) a fourth embodiment communicates both 410 and 420 but not 430; (5) a fifth embodiment communicates both 420 and 430 but not 410; and (6) a sixth embodiment communicates both 410 and 430 but not 420. Finally, further examples communicate all three: (7) a seventh embodiment communicates 410, 420 and 430. Therefore, when an embodiment communicates information between the auto-focus engine and the natural feature processing engine, some embodiments communicate just one of 410, 420 or 430, other embodiments communicate two of 410, 420 and 430, while still other embodiments communicate all three of 410, 420 and 430.
  • This communicated information is used to set a location of a natural feature window and/or an auto-focus window. For example, some embodiments only communicate information shown at 410 to limit the area of the next natural feature windows. Other embodiments only communicate information shown at 420 to change a center location of the next natural feature windows. Still other embodiments only communicate information shown at 430 to change a location of the next auto-focus window(s). As stated above, some embodiments implement two of 410, 420 and 430 as information communicated between the auto-focus engine 300 and the natural feature processing engine 110, while other embodiments implement all three of 410, 420 and 430 as information communicated between the auto-focus engine 300 and the natural feature processing engine 110. In some embodiments, the auto-focus engine 300 acts as a slave and the natural feature processing engine 110 acts as its master.
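  • As an illustration only, the three kinds of communicated information (410, 420 and 430) could be carried in a single message structure with optional fields, so that any one, two or all three are populated in a given embodiment. The structure and names below are an assumed sketch, not an interface defined in this disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Box = Tuple[int, int, int, int]  # assumed (x, y, width, height) in pixels

@dataclass
class EngineMessage:
    # 410: current location/size of the auto-focus window
    # (auto-focus engine -> natural feature processing engine)
    af_window: Optional[Box] = None
    # 420: change from previous to next auto-focus window
    # (auto-focus engine -> natural feature processing engine)
    af_window_change: Optional[Tuple[int, int]] = None
    # 430: change from previous to next natural feature location
    # (natural feature processing engine -> auto-focus engine)
    feature_change: Optional[Tuple[int, int]] = None
```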
  • the natural feature processing engine 110 acts as a means for detecting and tracking natural features in the image with a natural feature processing engine.
  • the natural feature detection module 120 acts as a means for detecting natural features.
  • the natural feature tracking module 125 acts as a means for tracking natural features.
  • a processor or processors may act as a means for performing each of the functions of the natural feature processing engine 110, such as selecting the auto-focus window within the image, limiting an area of natural feature detection based on the location of the auto-focus window, finding a natural feature within the limited area, setting a next tracking search window based on a change, tracking a natural feature within the next tracking search window, tracking a natural feature to the first location within a first image, and/or tracking the natural feature to the second location within a second image.
  • the auto-focus engine 300 acts as a means for auto-focusing in an auto-focus window in an image.
  • a processor or processors may act as a means for performing each of the functions of the auto-focus engine 300 , such as setting a first auto-focus window within a first image, setting a second auto-focus window within a second image, setting a next auto-focus window based on the change, and auto-focusing within the auto-focus window.
  • processor(s), engines and modules may act as means for communicating information between the auto-focus engine and the natural feature processing engine.
  • the information may include a location of the auto-focus window, a change, a change from a first location to a second location, a change in location from a previous to a next auto-focus window, and/or a change from a previous to a next location of a natural feature.
  • FIG. 10 shows a location of an auto-focus window 310 being used to limit an area 500 for detecting natural features, in accordance with some embodiments of the present invention.
  • the auto-focus engine 300 sends information to the natural feature processing engine 110 regarding the current size and/or location of an auto-focus window within an image.
  • the natural feature processing engine 110 may limit its search area to area 500 for detecting new natural features and/or tracking already-detected natural features by allowing natural feature detection windows to exist only within an area 500 defined by a threshold distance to the borders of the auto-focus window. By limiting detection and/or search to an area 500, processing power otherwise used may be substantially reduced.
  • this threshold distance may be zero while in other embodiments the threshold distance may allow for natural feature detection windows to be tracked just outside of the auto-focus window 310 .
  • the auto-focus engine 300 may send to the natural feature processing engine 110 parameters identifying multiple auto-focus windows 310 within a single image. In these embodiments, detecting and/or tracking may be limited to areas 500 defined by these multiple auto-focus windows 310 .
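  • A minimal sketch of this area limiting, assuming a rectangular (x, y, width, height) window representation and a pixel threshold; the helper names are illustrative. A threshold of zero restricts detection to the auto-focus window(s) themselves.

```python
def limit_detection_area(af_windows, threshold, image_size):
    """Expand each auto-focus window by a threshold distance (in pixels) and
    clamp to the image, yielding the areas 500 in which natural feature
    detection windows are allowed to exist."""
    img_w, img_h = image_size
    areas = []
    for (x, y, w, h) in af_windows:
        areas.append((max(0, x - threshold),            # left
                      max(0, y - threshold),            # top
                      min(img_w, x + w + threshold),    # right
                      min(img_h, y + h + threshold)))   # bottom
    return areas

def in_limited_area(point, areas):
    """True if a candidate natural feature location falls inside any area 500."""
    px, py = point
    return any(l <= px <= r and t <= py <= b for (l, t, r, b) in areas)
```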
  • FIG. 11 shows a change in location from a previous auto-focus window 320 to a next auto-focus window 330 .
  • some embodiments allow the auto-focus engine 300 to send information to the natural feature processing engine 110 regarding a change in size and/or a change in location from previous auto-focus window to next auto-focus window.
  • FIG. 12 shows setting a size of a next tracking search window ( 290 ′-S, 290 ′-M, 290 ′-L) based on magnitude of change in location from previous auto-focus window 320 to next auto-focus window 330 , in accordance with some embodiments of the present invention.
  • the natural feature processing engine 110 and in particular the natural feature tracking module 125 , may use this indicated change in location of the auto-focus window 310 to determine how to change the size of a previous natural feature detection window 290 to a next natural feature detection window 290 ′.
  • a small magnitude of change in position from the previous auto-focus window 320 to the next auto-focus window 330 could be used by the natural feature processing engine 110 to limit the size of the next natural feature tracking search window 290 ′ to a smaller sized window 290 ′-S.
  • a medium or mid-range change could be used to limit the size to a midsized window 290 ′-M.
  • a large change could be used to limit the size to a large window 290 ′-L.
  • a previous location of a natural feature 260 is shown at the center of each of the windows 290 ′-S/M/L .
  • FIG. 13 shows setting a center of a next tracking search window 290 ′ based on a direction of change in location from previous auto-focus window 320 to next auto-focus window 330 , in accordance with some embodiments of the present invention.
  • This change indication, shown as change 520, may assist the natural feature tracking module 125 in setting a next tracking search window 290′.
  • Without this assistance, the tracking window may be centered, as the next tracking search window 290′-1, on the previous location of the natural feature 260.
  • the next tracking search window 290 ′- 1 is co-located with the previous tracking search window 290 .
  • the natural feature tracking module 125 may set a next tracking search window 290′-2 based on the direction and magnitude of the change of the next auto-focus window 330 as compared to the previous auto-focus window 320.
  • The next location of the natural feature 260′, which at this point is unknown and still to be tracked, would fall inside the next tracking search window 290′-2.
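  • A combined sketch of FIGS. 12 and 13: the magnitude of the auto-focus window change selects a small, medium or large next tracking search window, and the direction of the change shifts its center from the feature's previous location. The pixel thresholds and scale factors below are assumed values for illustration, not values from this disclosure.

```python
import math

def next_tracking_search_window(prev_feature_xy, af_change, base_size=8):
    """Choose the next tracking search window from the auto-focus window change.

    prev_feature_xy: (x, y) previous location of the natural feature 260.
    af_change: (dx, dy) displacement from the previous to the next auto-focus window.
    Returns (center_x, center_y, side) of the next tracking search window.
    """
    dx, dy = af_change
    magnitude = math.hypot(dx, dy)

    # FIG. 12: small / medium / large change -> small / medium / large window (290'-S/M/L)
    if magnitude < 4:
        side = base_size          # 290'-S
    elif magnitude < 16:
        side = base_size * 2      # 290'-M
    else:
        side = base_size * 4      # 290'-L

    # FIG. 13: shift the window center in the direction of the auto-focus window change
    cx = prev_feature_xy[0] + dx
    cy = prev_feature_xy[1] + dy
    return cx, cy, side
```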
  • FIG. 14 shows setting a change 510 in location (center and/or size) of a previous auto-focus window 320 to a next auto-focus window 330 based on change from previous location of a natural feature 260 to a next location of the natural feature 260 ′, in accordance with some embodiments of the present invention.
  • the natural feature processing engine 110 sends information to the auto-focus engine 300 regarding a change from a previous location of natural feature 260 and/or a natural feature detection window 290 to a next location of natural feature 260 ′ and/or natural feature detection window 290 ′.
  • This information may include a magnitude of change and/or a direction of change of the natural feature 260 ′ and/or a natural feature detection window 290 ′.
  • the auto-focus engine 300 may use the magnitude of change to broaden or narrow the size of the next auto-focus window 330. For example, a large magnitude of change may indicate a larger area of uncertainty; thus, the auto-focus engine 300 may increase the area of the next auto-focus window 330. Similarly, a small to zero magnitude may be used by the auto-focus engine 300 to keep the size of the next auto-focus window 330 constant or slightly reduce the size of the auto-focus window. Alternatively, the auto-focus engine 300 may use a direction of change to change the size or move the location of the next auto-focus window 330. For example, a direction of change may change the center point of the next auto-focus window 330.
  • the direction of change may change the size of the next auto-focus window 330 .
  • a 10-pixel movement of the location of the natural feature 260 ′ or the next natural feature detection window 290 ′ may expand the next auto-focus window 330 by 10 pixels in each linear direction (i.e., up, down, left, right).
  • the auto-focus engine 300 may change the center and size of the next auto-focus window 330 based on the combined change in direction and magnitude of the location of natural feature 260′.
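  • A sketch of this reverse direction (FIG. 14): the next auto-focus window is re-centered along the natural feature's motion and expanded by the magnitude of that motion to cover the added uncertainty. The specific expansion rule follows the 10-pixel example above but is otherwise an assumption.

```python
def next_autofocus_window(prev_af_window, feature_change):
    """Update the auto-focus window from the tracked feature's change in location.

    prev_af_window: (x, y, width, height); feature_change: (dx, dy).
    The window is shifted with the feature and expanded by the magnitude of
    motion in each axis (e.g., a 10-pixel move expands it by 10 pixels on each side).
    """
    x, y, w, h = prev_af_window
    dx, dy = feature_change
    grow_x, grow_y = abs(dx), abs(dy)
    return (x + dx - grow_x,     # shift with the feature, then pad left
            y + dy - grow_y,     # shift with the feature, then pad top
            w + 2 * grow_x,      # pad left and right
            h + 2 * grow_y)      # pad top and bottom
```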
  • FIG. 15 shows a method for limiting an area for natural feature detection based on a location of an auto-focus window 310, in accordance with some embodiments of the present invention.
  • the auto-focus engine 300 in the mobile device 100 selects an auto-focus window 310 within an image 200 .
  • the camera of the mobile device 100 performs auto-focusing on the selected auto-focus window 310.
  • the auto-focus engine 300 communicates a location of the auto-focus window 310 to the natural feature processing engine 110 , the natural feature detection module 120 , and/or the natural feature tracking module 125 .
  • the natural feature tracking module 125 limits an area 500 for natural feature detection based on the location of the auto-focus window 310 .
  • a threshold is used to expand or restrict the area 500 to an area greater or less than the auto-focus window 310 .
  • the natural feature detection module 120 , and/or the natural feature tracking module 125 detects and/or tracks natural feature(s) within limited area 500 .
  • FIG. 16 shows a method for setting a next tracking search window based on a change between a previous and a next auto-focus window, in accordance with some embodiments of the present invention.
  • the auto-focus engine 300 in the mobile device 100 sets a first or previous auto-focus window 320 within a first or previous image 200 .
  • the auto-focus engine 300 sets a second or next auto-focus window 330 within a second or next image 200 .
  • the auto-focus engine 300 communicates a change from the previous auto-focus window 320 to the next auto-focus window 330 to the natural feature processing engine 110 , the natural feature detection module 120 , and/or the natural feature tracking module 125 .
  • the natural feature tracking module 125 sets a next tracking search window 290 ′ based on the change 510 .
  • the natural feature tracking module 125 tracks one or more natural features within the next tracking search window 290 ′.
  • FIG. 17 shows a method for setting a next auto-focus window 330 based on a change from a previous location to a next location of a natural feature, in accordance with some embodiments of the present invention.
  • the natural feature tracking module 125 tracks natural features from a first or previous location 260 within a first or previous image 200 .
  • the natural feature tracking module 125 tracks the natural features to a second or next location 260 ′ within a second or next image 200 .
  • the natural feature tracking module 125 communicates a change 520 from the previous location 260 to the next location 260 ′ to the auto-focus engine 300 .
  • the auto-focus engine 300 sets a next auto-focus window 330 based on the change 520 .
  • the auto-focus engine 300 auto-focuses within the next auto-focus window 330 .
  • embodiments are described in relation to mobile devices implementing augmented reality functionality that tracks natural features.
  • these methods and apparatus are equally applicable to other applications that use computer vision-related technologies and may benefit from the teachings herein.
  • embodiments above may have the function of tracking natural features replaced or augmented with marker tracking and/or hand tracking.
  • Embodiments may track and focus on a man-made marker (rather than a natural feature) such as a posted QR code (quick response code).
  • embodiments may track and focus on a moving hand (rather than a fixed natural feature or man-made marker), for example, in order to capture gesture commands from a user.
  • These embodiments may provide gesturing interfaces with or without augmented reality functionality.
  • the processing units may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof.
  • the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein.
  • Any machine-readable medium tangibly embodying instructions may be used in implementing the methodologies described herein.
  • software codes may be stored in a memory and executed by a processor unit.
  • Memory may be implemented within the processor unit or external to the processor unit.
  • the term “memory” refers to any type of long term, short term, volatile, nonvolatile, or other memory and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored.
  • the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media includes physical computer storage media. A storage medium may be any available medium that can be accessed by a computer.
  • such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer; disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • a communication apparatus may include a transceiver having signals indicative of instructions and data.
  • the instructions and data are configured to cause one or more processors to implement the functions outlined in the claims. That is, the communication apparatus includes transmission media with signals indicative of information to perform disclosed functions. At a first time, the transmission media included in the communication apparatus may include a first portion of the information to perform the disclosed functions, while at a second time the transmission media included in the communication apparatus may include a second portion of the information to perform the disclosed functions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)
  • Automatic Focus Adjustment (AREA)
  • Focusing (AREA)

Abstract

An apparatus and method for supporting augmented reality or other computer vision applications are presented. Embodiments enable communication between natural feature and auto-focus engines to increase an engine's accuracy or decrease a processing time of the engine. An auto-focus engine may communicate a location of an auto-focus window to a natural feature detection module and/or a change in location from a previous auto-focus window to a next auto-focus window. The natural feature detection module uses the communicated information to limit an initial search area and/or to set a next tracking search window. A natural feature tracking module may communicate a change from a previous location of a natural feature to a next location of the natural feature to an auto-focus engine. The auto-focus engine uses the change to set a next auto-focus window.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • Not Applicable.
  • BACKGROUND
  • I. Field of the Invention
  • This disclosure relates generally to apparatus and methods for augmented reality and other computer vision applications, and more particularly to integration of camera auto-focus with computer vision-based recognition and tracking.
  • II. Background
  • Augmented reality systems use natural features as reference points within a sequence of images to place computer generated icons and images. A natural feature processing engine, including a natural feature detection module and a natural feature tracking module, is used to find and follow these reference points. Mobile devices may be enhanced with such augmented reality engines. Many mobile devices also have cameras with auto-focus capabilities provided by an auto-focus engine. Both natural feature and auto-focus engines track changes from image to image; however, known systems fail to allow communication between these engines.
  • In augmented reality, tracking that accurately follows the tracked object's movement and position creates a significantly improved user experience. Consequently, much effort is put into improving tracking performance. Object tracking functionality in a processor operates separately from auto-focus functionality at the front end of a camera. Auto-focus functionality is typically performed in hardware or with hardware acceleration. Auto-focus operations may result in information useful for improving natural feature detection and/or tracking. Similarly, natural feature detection and tracking may result in information useful for improving auto-focus functionality.
  • Many existing mobile devices 10 contain a camera and a processor. The camera provides images to the processor, which may modify the image by various augmented reality techniques. The processor may send a control signal to trigger camera activation, and the camera provides the image or sequence of images to the processor for image processing in response. No information obtained from natural feature processing is returned to the camera to assist in obtaining an improved image. That is, control information beyond triggering does not flow from the processor to the camera.
  • In other existing mobile devices 10, image processing associated with natural feature detection and tracking is disassociated from image processing associated with auto-focusing. FIG. 1 shows a known system containing a natural feature processing engine 110 and an auto-focus engine 300, which are uncoupled and therefore do not communicate information as shown by delineation 400. An existing mobile device 10 contains one or more processors that function as a natural feature processing engine 110 and also as an auto-focus engine 300. The natural feature processing engine 110 includes a natural feature detection module 120 and a natural feature tracking module 125.
  • In general, operations in the natural feature detection module 120 and the natural feature tracking module 125 function in parallel; however, for a particular natural feature, these operations appear to occur in sequence, where a natural feature is first detected within an image and then tracked through subsequent images. The location of the natural feature within the image is used by a separate augmented reality module 130 for further processing. Each image undergoes processing through the natural feature detection module 120 to detect new natural features and also undergoes processing through the natural feature tracking module 125 to follow the movement of already detected natural features from image to image.
  • As shown at delineation 400, the auto-focus engine 300 has no communication with the natural feature processing engine 110 and may run as a parallel task. The auto-focus engine 300 may be implemented in hardware or may be implemented in a combination of hardware and software. The auto-focus engine 300 operates in real-time or near real-time to capture new images. Thus, a continued need exists to improve both natural feature processing and auto-focusing.
  • BRIEF SUMMARY
  • Disclosed is an apparatus and method for coupling a natural feature processing engine with an auto-focus engine.
  • According to some aspects, disclosed is a mobile device for use in computer vision, the mobile device comprising: a natural feature processing engine comprising a natural feature detection module and a natural feature tracking module; and an auto-focus engine coupled to the natural feature processing engine to communicate information to set a location of a window comprising at least one of a natural feature window and/or an auto-focus window.
  • According to some aspects, disclosed is a method in a mobile device for use in computer vision, the method comprising: selecting an auto-focus window within an image; auto-focusing on the selected window; communicating a location of the auto-focus window; limiting an area of a natural feature detection based on the location of the auto-focus window; and finding a natural feature within the limited area.
  • According to some aspects, disclosed is a method in a mobile device for use in computer vision, the method comprising: setting a first auto-focus window within a first image; setting a second auto-focus window within a second image; communicating a change from the first auto-focus window to the second auto-focus window; setting a next tracking search window based on the change; and tracking a natural feature within the next tracking search window.
  • According to some aspects, disclosed is a method in a mobile device for use in computer vision, the method comprising: tracking a natural feature to a first location within a first image; tracking the natural feature to a second location within a second image; communicating a change from the first location to the second location; setting a next auto-focus window based on the change; and auto-focusing within the auto-focus window.
  • According to some aspects, disclosed is a mobile device for use in computer vision, the mobile device comprising: a camera and an auto-focus engine; and a processor and memory comprising code for performing the methods described above.
  • According to some aspects, disclosed is a mobile device for use in computer vision, the mobile device comprising means for performing the methods described above.
  • According to some aspects, disclosed is a nonvolatile computer-readable storage medium including program code stored thereon, comprising program code for performing the methods described above.
  • It is understood that other aspects will become readily apparent to those skilled in the art from the following detailed description, wherein it is shown and described various aspects by way of illustration. The drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.
  • BRIEF DESCRIPTION OF THE DRAWING
  • Embodiments of the invention will be described, by way of example only, with reference to the drawings.
  • FIG. 1 shows a known system containing a natural feature processing engine and an auto-focus engine, which do not communicate information.
  • FIG. 2 shows known states within a natural feature processing engine detecting and tracking natural features.
  • FIG. 3 illustrates an image containing a building and a tree with features to be tracked.
  • FIG. 4 illustrates natural features overlying the image.
  • FIG. 5 illustrates locations of various natural features.
  • FIG. 6 illustrates changes in locations of various natural features between two images.
  • FIG. 7 illustrates a change from a previous location of a natural feature to a next location of the same natural feature.
  • FIG. 8 shows an auto-focus window within an image.
  • FIG. 9 shows a mobile device containing a natural feature processing engine and an auto-focus engine communicating information, in accordance with some embodiments of the present invention.
  • FIG. 10 shows a location of an auto-focus window being used to limit an area for detecting natural features, in accordance with some embodiments of the present invention.
  • FIG. 11 shows a change in location from a previous auto-focus window to a next auto-focus window.
  • FIG. 12 shows setting a size of a next tracking search window based on magnitude of change in location from previous auto-focus window to next auto-focus window, in accordance with some embodiments of the present invention.
  • FIG. 13 shows setting a center of a next tracking search window based on a direction of change in location from previous auto-focus window to next auto-focus window, in accordance with some embodiments of the present invention.
  • FIG. 14 shows setting a change in location (center and/or size) of a previous auto-focus window to a next auto-focus window based on change from previous location of a natural feature to a next location of the natural feature, in accordance with some embodiments of the present invention.
  • FIG. 15 shows a method for limiting an area for natural feature detection based on a location of an auto-focus window, in accordance with some embodiments of the present invention.
  • FIG. 16 shows a method for setting a next tracking search window based on a change between a previous and a next auto-focus window, in accordance with some embodiments of the present invention.
  • FIG. 17 shows a method for setting a next auto-focus window based on a change from a previous location to a next location of a natural feature, in accordance with some embodiments of the present invention.
  • DETAILED DESCRIPTION
  • The detailed description set forth below in connection with the appended drawings is intended as a description of various aspects of the present disclosure and is not intended to represent the only aspects in which the present disclosure may be practiced. Each aspect described in this disclosure is provided merely as an example or illustration of the present disclosure, and should not necessarily be construed as preferred or advantageous over other aspects. The detailed description includes specific details for the purpose of providing a thorough understanding of the present disclosure. However, it will be apparent to those skilled in the art that the present disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the present disclosure. Acronyms and other descriptive terminology may be used merely for convenience and clarity and are not intended to limit the scope of the disclosure.
  • Position determination techniques described herein may be implemented in conjunction with various wireless communication networks such as a wireless wide area network (WWAN), a wireless local area network (WLAN), a wireless personal area network (WPAN), and so on. The terms “network” and “system” are often used interchangeably. A WWAN may be a Code Division Multiple Access (CDMA) network, a Time Division Multiple Access (TDMA) network, a Frequency Division Multiple Access (FDMA) network, an Orthogonal Frequency Division Multiple Access (OFDMA) network, a Single-Carrier Frequency Division Multiple Access (SC-FDMA) network, Long Term Evolution (LTE), and so on. A CDMA network may implement one or more radio access technologies (RATs) such as cdma2000, Wideband-CDMA (W-CDMA), and so on. Cdma2000 includes IS-95, IS-2000, and IS-856 standards. A TDMA network may implement Global System for Mobile Communications (GSM), Digital Advanced Mobile Phone System (D-AMPS), or some other RAT. GSM and W-CDMA are described in documents from a consortium named “3rd Generation Partnership Project” (3GPP). Cdma2000 is described in documents from a consortium named “3rd Generation Partnership Project 2” (3GPP2). 3GPP and 3GPP2 documents are publicly available. A WLAN may be an IEEE 802.11x network, and a WPAN may be a Bluetooth network, an IEEE 802.15x network, or some other type of network. The techniques may also be implemented in conjunction with any combination of WWAN, WLAN and/or WPAN.
  • A satellite positioning system (SPS) typically includes a system of transmitters positioned to enable entities to determine their location on or above the Earth based, at least in part, on signals received from the transmitters. Such a transmitter typically transmits a signal marked with a repeating pseudo-random noise (PN) code of a set number of chips and may be located on ground based control stations, user equipment and/or space vehicles. In a particular example, such transmitters may be located on Earth orbiting satellite vehicles (SVs). For example, an SV in a constellation of a Global Navigation Satellite System (GNSS) such as Global Positioning System (GPS), Galileo, GLONASS or Compass may transmit a signal marked with a PN code that is distinguishable from PN codes transmitted by other SVs in the constellation (e.g., using different PN codes for each satellite as in GPS or using the same code on different frequencies as in GLONASS). In accordance with certain aspects, the techniques presented herein are not restricted to global systems (e.g., GNSS) for SPS. For example, the techniques provided herein may be applied to or otherwise enabled for use in various regional systems, such as, e.g., Quasi-Zenith Satellite System (QZSS) over Japan, Indian Regional Navigational Satellite System (IRNSS) over India, Beidou over China, etc., and/or various augmentation systems (e.g., a Satellite Based Augmentation System (SBAS)) that may be associated with or otherwise enabled for use with one or more global and/or regional navigation satellite systems. By way of example but not limitation, an SBAS may include an augmentation system(s) that provides integrity information, differential corrections, etc., such as, e.g., Wide Area Augmentation System (WAAS), European Geostationary Navigation Overlay Service (EGNOS), Multi-functional Satellite Augmentation System (MSAS), GPS Aided Geo Augmented Navigation or GPS and Geo Augmented Navigation system (GAGAN), and/or the like. Thus, as used herein, an SPS may include any combination of one or more global and/or regional navigation satellite systems and/or augmentation systems, and SPS signals may include SPS, SPS-like, and/or other signals associated with such one or more SPS.
  • As used herein, a mobile device 100, sometimes referred to as a mobile station (MS) or user equipment (UE), is a device such as a cellular phone, mobile phone or other wireless communication device, personal communication system (PCS) device, personal navigation device (PND), Personal Information Manager (PIM), Personal Digital Assistant (PDA), laptop, or other suitable mobile device that is capable of receiving wireless communication and/or navigation signals. The term “mobile station” is also intended to include devices which communicate with a personal navigation device (PND), such as by short-range wireless, infrared, wireline connection, or other connection, regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device or at the PND. Also, mobile station 100 is intended to include all devices, including wireless communication devices, computers, laptops, etc., which are capable of communication with a server, such as via the Internet, WiFi, or other network, and regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device, at a server, or at another device associated with the network. Any operable combination of the above is also considered a “mobile station.”
  • Unlike existing mobile devices 10, a mobile device 100 in accordance with the present invention allows communication between the auto-focus engine 300 and the natural feature processing engine 110, as described below. Similar to existing mobile devices 10, the mobile device 100 contains memory; one or more processors, which function as a natural feature processing engine 110 and an auto-focus engine 300; and a user interface, such as a display, speaker, touch screen and/or buttons. The natural feature processing engine 110, also referred to as computer vision-based recognition and tracking, includes a natural feature detection module 120 and a natural feature tracking module 125.
  • FIG. 2 shows known states within a natural feature processing engine 110 detecting and tracking natural features. Within each image, the processor searches for new or undetected natural features in the natural feature detection module 120. Once a natural feature is detected, the processor follows the detected natural feature with the natural feature tracking module 125. Once the natural feature can no longer be followed (e.g., the natural feature is no longer in the image or is no longer distinguishable), the natural feature is declared lost.
  • FIG. 3 illustrates an image containing a building and a tree 210 with features to be tracked. Using the natural feature detection module 120, the image 200 may undergo various processing, including, for example, corner, line or edge detection. The image 200 shows a tree 210 next to a building with a side 220 and a window 230. Next, the image 200 may undergo corner detection. FIG. 4 illustrates natural features overlying the image. In this case, various corners are detected as natural features (240, 250, 260) in the image 200.
  • FIG. 5 illustrates locations of various natural features. Next, the processor attempts to track the natural features (240, 250, 260) by matching each natural feature to a new location. Matching may be performed with various criteria that result in some measure of similarity to the natural feature. For example, the processor may use correlation (e.g., normalized cross correlation) to match the natural feature to its new location. The processor may correlate pixels within a grid around each natural feature in a first image to pixels in the general grid location in a second image. For example, the natural feature tracking module 125 identifies an 8-by-8 grid of pixels at a particular location on a first image. The area defined by the pixel dimensions and location may be referred to as a natural feature detection window. In general, a natural feature detection window is substantially smaller than an auto-focus window, where the natural feature detection window encompasses fewer than 200 pixels and the auto-focus window encompasses more than 200 pixels.
  • Processing speed directly correlates with the size of the area the natural feature detection window covers; smaller windows covering only a small area each can be processed more quickly. Other pixel dimensions are also possible for the natural feature detection window. For example, rather than using an 8×8 square grid, tracking may use other square or non-square fixed-dimension grid sizes (e.g., 4×4, 10×10 or 16×16) or variable-dimension grid sizes (e.g., where the size depends on characteristics of the natural feature). Tracking will examine the same location defined by the 8-by-8 grid in a second image. If the correlation yields a high result, no movement has occurred between images and the natural feature is expected to be at the same pixel location in the second image. If the camera is moving linearly and/or rotating, or if objects in the image are moving relative to the mobile device 100, then the natural features will appear to have moved from the first image to the second image, as shown in the following figure. In this case, a high correlation result will occur at the new location in the second image if the natural feature detection window encompasses the natural feature.
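  • As an illustration of the correlation-based matching described above (not part of the original disclosure), the following sketch matches an 8-by-8 patch around a previously detected natural feature against nearby locations in a second image using normalized cross correlation. The function names, search radius, and acceptance threshold are assumptions chosen for readability, and the images are assumed to be grayscale NumPy arrays.

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Normalized cross correlation of two equally sized grayscale patches."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track_feature(prev_img, next_img, feature_xy, patch=8, search=16, threshold=0.8):
    """Match the patch around feature_xy in prev_img against locations within
    +/- search pixels in next_img; return the best (x, y) or None (feature lost)."""
    x, y = feature_xy
    template = prev_img[y:y + patch, x:x + patch]
    best_score, best_xy = -1.0, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            nx, ny = x + dx, y + dy
            if ny < 0 or nx < 0 or ny + patch > next_img.shape[0] or nx + patch > next_img.shape[1]:
                continue
            score = normalized_cross_correlation(template, next_img[ny:ny + patch, nx:nx + patch])
            if score > best_score:
                best_score, best_xy = score, (nx, ny)
    return best_xy if best_score >= threshold else None
```

The nested loop makes explicit why smaller search windows are cheaper to process: the work grows with the square of the search radius.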
  • FIG. 6 illustrates changes in locations of various natural features between two images. In the figure, locations of natural features (e.g., locations 240, 250, 260) from a first image are shown overlapped with the locations of the same natural features (e.g., locations 240′, 250′, 260′) from a second image. The “next” location of each natural feature detection window most likely containing the natural feature is shown with a prime indicator. In this case, a large number, a majority or all of the natural features may appear to have moved down and to the left. Most likely, the camera has moved down and to the right but the actual objects pictured have not moved. In any case, by moving the natural feature detection windows to a new location within a subsequent image, the natural feature tracking module 125 may limit processing used in searching across an otherwise larger correlation area. That is, according to some embodiments, each natural feature detection window may be smaller and still obtain a high correlation result within a similar or shorter time period.
  • FIG. 7 illustrates a change from a previous location of a natural feature to a next location of the same natural feature. A location of natural feature 260 from a first or previous image is first detected then tracked. The location of natural feature 260′ is then tracked to a second or next location in the second or next image. The apparent movement may be caused by the natural feature actually moving from image to image and/or the camera moving and/or rotating.
  • A natural feature or a group of natural features often appears to move from a previous location on one image to a next location on the next image, as described above.
  • Cameras in mobile devices 100 often contain an auto-focus engine 300, which sets focus based on a detected object. The auto-focus engine 300 may operate on a continuous analog image or may operate on a digital image to focus on an area of the image defined by an auto-focus window 310. From image to image, the auto-focus window 310 may appear to move in the sequence of images. In this sense, the auto-focus engine 300 appears to track an object within the sequence of images.
  • According to some embodiments of the present invention, a mobile device 100 integrates a camera's auto-focus engine 300 with a natural feature processing engine 110 performing computer vision-based recognition and tracking. The auto-focus engine 300 and the natural feature processing engine 110 are allowed to communicate information such as a position or change in position of the auto-focus window 310 and/or natural features. The auto-focus engine 300 may use information from the natural feature processing engine 110 to better position its auto-focus window 310 (i.e., a location of a box within the image). Similarly, the natural feature processing engine 110 may use information from the auto-focus engine 300 to better position correlation windows for finding a new position of a natural feature. Alternatively, the natural feature processing engine 110 may disregard this information from the auto-focus engine 300.
  • FIG. 8 shows an auto-focus window 310 within an image 200. Typically, an auto-focus engine 300 searches through an entire image to find an object or objects (e.g., one or more faces). The auto-focus engine 300 then displays the auto-focus window 310 around the found object and performs focusing on the found object. For a subsequent image, the auto-focus engine 300 searches the entire image area again for objects in the next image, and then updates the position of the auto-focus window 310 and refocuses the camera if necessary.
  • Such found objects may contain one or several natural features that the natural feature tracking module 125 is following. When searching for objects, the auto-focus engine 300 may advantageously use locations within an image as determined by the natural feature processing engine 110 to limit the search area from the entire image to an area in proximity to the detected and tracked natural features.
  • FIG. 9 shows a mobile device 100 containing a natural feature processing engine 110 and an auto-focus engine 300 communicating information, in accordance with some embodiments of the present invention. Unlike the isolated engines in existing mobile devices 10, the engines in the mobile device 100 are coupled, which allows the natural feature processing engine 110 and the auto-focus engine 300 to communicate information in one direction or in both directions, as shown along line 405.
  • As shown at 410, some embodiments allow the auto-focus engine 300 to send information to the natural feature processing engine 110 that indicates the current size and/or location of an auto-focus window within an image, as described below with reference to FIG. 10.
  • As shown at 420, some embodiments allow the auto-focus engine 300 to send information to the natural feature processing engine 110 that indicates a change in size and/or a change in location from previous auto-focus window to next auto-focus window, as described below with reference to FIGS. 11, 12 and 13.
  • As shown at 430, some embodiments allow the natural feature processing engine 110 to send information to the auto-focus engine 300 that indicates a change from a previous location of natural feature and/or a natural feature detection window (e.g., 270 of FIG. 6) to a next location of natural feature and/or natural feature detection window (e.g., 270′), as described below with reference to FIG. 14.
  • Embodiments include at least one or more of 410, 420 and/or 430 as information communicated between the auto-focus engine 300 and the natural feature processing engine 110. For example, some embodiments communicate only one of 410, 420 and 430: (1) a first embodiment communicates 410 but not 420 or 430; (2) a second embodiment communicates 420 but not 410 or 430; and (3) a third embodiment communicates 430 but not 410 or 420. Additional examples communicate two of 410, 420 and 430: (4) a fourth embodiment communicates both 410 and 420 but not 430; (5) a fifth embodiment communicates both 420 and 430 but not 410; and (6) a sixth embodiment communicates both 410 and 430 but not 420. Finally, further examples communicate all three: (7) a seventh embodiment communicates 410, 420 and 430. Therefore, when an embodiment communicates information between the auto-focus engine and the natural feature processing engine, some embodiments communicate just one of 410, 420 or 430, other embodiments communicate two of 410, 420 and 430, while still other embodiments communicate all three of 410, 420 and 430.
  • This communicated information is used to set a location of a natural feature window and/or an auto-focus window. For example, some embodiments only communicate information shown at 410 to limit the area of the next natural feature windows. Other embodiments only communicate information shown at 420 to change a center location of the next natural feature windows. Still other embodiments only communicate information shown at 430 to change a location of the next auto-focus window(s). As stated above, some embodiments implement two of 410, 420 and 430 as information communicated between the auto-focus engine 300 and the natural feature processing engine 110, while other embodiments implement all three of 410, 420 and 430 as information communicated between the auto-focus engine 300 and the natural feature processing engine 110. In some embodiments, the auto-focus engine 300 acts as a slave and the natural feature processing engine 110 acts as its master.
  • The natural feature processing engine 110 acts as a means for detecting and tracking natural features in the image with a natural feature processing engine. The natural feature detection module 120 acts as a means for detecting natural features. The natural feature tracking module 125 acts as a means for tracking natural features. A processor or processors may act as a means for performing each of the functions of the natural feature processing engine 110, such as selecting the auto-focus window within the image, limiting an area of a natural feature detection based on the location of the auto-focus window, finding a natural feature within the limited area, setting a next tracking search window based on a change, tracking a natural feature within the next tracking search window, tracking a natural feature to the first location within a first image, and/or tracking the natural feature to the second location within a second image.
  • The auto-focus engine 300 acts as a means for auto-focusing in an auto-focus window in an image. A processor or processors may act as a means for performing each of the functions of the auto-focus engine 300, such as setting a first auto-focus window within a first image, setting a second auto-focus window within a second image, setting a next auto-focus window based on the change, and auto-focusing within the auto-focus window.
  • These processor(s), engines and modules, separately or in combination, may act as means for communicating information between the auto-focus engine and the natural feature processing engine. The information may include a location of the auto-focus window, a change, a change from a first location to a second location, a change in location from a previous to a next auto-focus window, and/or a change from a previous to a next location of a natural feature.
  • FIG. 10 shows a location of an auto-focus window 310 being used to limit an area 500 for detecting natural features, in accordance with some embodiments of the present invention. As mentioned above at 410, the auto-focus engine 300 sends information to the natural feature processing engine 110 regarding the current size and/or location of an auto-focus window within an image. In turn, the natural feature processing engine 110 may limit its search area to area 500 for detecting new natural features and/or tracking already-detected natural features by allowing natural feature detection windows to exist only within an area 500 defined by a threshold distance to the borders of the auto-focus window. By limiting detection and/or search to an area 500, processing power otherwise used may be substantially reduced. In some embodiments, this threshold distance may be zero, while in other embodiments the threshold distance may allow natural feature detection windows to be tracked just outside of the auto-focus window 310. In other embodiments, the auto-focus engine 300 may send to the natural feature processing engine 110 parameters identifying multiple auto-focus windows 310 within a single image. In these embodiments, detecting and/or tracking may be limited to areas 500 defined by these multiple auto-focus windows 310.
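  • A minimal sketch of this area limiting is shown below, assuming an (x, y, width, height) rectangle for the auto-focus window 310 and a pixel threshold distance; the helper names and rectangle convention are illustrative assumptions, not part of the disclosure. Natural feature detection windows would then be considered only if they lie inside the returned area 500.

```python
def limit_detection_area(af_window, image_shape, threshold=0):
    """Clamp natural feature detection to the auto-focus window, optionally
    expanded by a threshold distance beyond its borders (threshold=0 keeps
    detection strictly inside the window)."""
    x, y, w, h = af_window            # auto-focus window rectangle
    img_h, img_w = image_shape        # image dimensions in pixels
    left = max(0, x - threshold)
    top = max(0, y - threshold)
    right = min(img_w, x + w + threshold)
    bottom = min(img_h, y + h + threshold)
    return (left, top, right - left, bottom - top)

def window_inside_area(det_window, area):
    """True if a natural feature detection window lies wholly inside the limited area."""
    dx, dy, dw, dh = det_window
    ax, ay, aw, ah = area
    return dx >= ax and dy >= ay and dx + dw <= ax + aw and dy + dh <= ay + ah
```

With multiple auto-focus windows, the same helper could simply be applied once per window and the resulting areas searched in turn.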
  • FIG. 11 shows a change in location from a previous auto-focus window 320 to a next auto-focus window 330. As discussed above with reference to 420, some embodiments allow the auto-focus engine 300 to send information to the natural feature processing engine 110 regarding a change in size and/or a change in location from a previous auto-focus window to a next auto-focus window.
  • FIG. 12 shows setting a size of a next tracking search window (290′-S, 290′-M, 290′-L) based on magnitude of change in location from previous auto-focus window 320 to next auto-focus window 330, in accordance with some embodiments of the present invention. The natural feature processing engine 110, and in particular the natural feature tracking module 125, may use this indicated change in location of the auto-focus window 310 to determine how to change the size of a previous natural feature detection window 290 to a next natural feature detection window 290′. A small magnitude of change in position from the previous auto-focus window 320 to the next auto-focus window 330 could be used by the natural feature processing engine 110 to limit the size of the next natural feature tracking search window 290′ to a smaller sized window 290′-S. A medium or mid-range change could be used to limit the size to a midsized window 290′-M. A large change could be used to limit the size to a large window 290′-L. A previous location of a natural feature 260 is shown at the center of each of the windows 290′-S/M/L . As in the example shown, if the magnitude of the change in location from the previous auto-focus window 320 to the next auto-focus window 330 is large, then the next location of the natural feature 260′, which is currently unknown and still to be tracked, would probably be inside the large window 290′-L.
  • FIG. 13 shows setting a center of a next tracking search window 290′ based on a direction of change in location from previous auto-focus window 320 to next auto-focus window 330, in accordance with some embodiments of the present invention. This change indication, shown as change 520, may assist the natural feature tracking module 125 in setting a next tracking search window 290′. For example, if no change 520 was indicated or available, the tracking window may be centered, as the next tracking search window 290′-1, on the previous location of the natural feature 260. In this case, the next tracking search window 290′-1 is co-located with the previous tracking search window 290. However, if a change 520 exists and is provided to the natural feature processing engine 110, the natural feature tracking module 125 may set a next tracking search window 290′-2 based on the direction and magnitude of change of the next auto-focus window 330 as compared to the previous auto-focus window 320. Presumably, the next location of the natural feature 260′, which at this point is unknown and still to be tracked, would fall inside of the next tracking search window 290′-2.
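  • The two ideas of FIGS. 12 and 13 can be combined in a short sketch: the magnitude of the auto-focus window's displacement selects a small, medium, or large search window, and its direction shifts the window's center away from the previous feature location. The specific thresholds, sizes, and function name below are assumptions for illustration only.

```python
def next_tracking_search_window(prev_feature_xy, af_change, base_size=8):
    """Set the next tracking search window from the change (dx, dy) of the
    auto-focus window between the previous and next images."""
    dx, dy = af_change
    magnitude = (dx * dx + dy * dy) ** 0.5
    if magnitude < 4:                 # small change -> small window (e.g., 290'-S)
        size = base_size
    elif magnitude < 16:              # medium change -> mid-sized window (e.g., 290'-M)
        size = base_size * 2
    else:                             # large change -> large window (e.g., 290'-L)
        size = base_size * 4
    # Shift the search window center in the direction the auto-focus window moved.
    cx = prev_feature_xy[0] + dx
    cy = prev_feature_xy[1] + dy
    return (cx - size // 2, cy - size // 2, size, size)
```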
  • FIG. 14 shows setting a change 510 in location (center and/or size) of a previous auto-focus window 320 to a next auto-focus window 330 based on a change from a previous location of a natural feature 260 to a next location of the natural feature 260′, in accordance with some embodiments of the present invention. As mentioned above, the natural feature processing engine 110 sends information to the auto-focus engine 300 regarding a change from a previous location of natural feature 260 and/or a natural feature detection window 290 to a next location of natural feature 260′ and/or natural feature detection window 290′. This information may include a magnitude of change and/or a direction of change of the natural feature 260′ and/or a natural feature detection window 290′. The auto-focus engine 300 may use the magnitude of change to broaden or narrow the size of the next auto-focus window 330. For example, a large magnitude of change may indicate a larger area of uncertainty; thus, the auto-focus engine 300 may increase the area of the next auto-focus window 330. Similarly, a small to zero magnitude may be used by the auto-focus engine 300 to keep the size of the next auto-focus window 330 constant or slightly reduce the size of the auto-focus window. Alternatively, the auto-focus engine 300 may use a direction of change to change the size or move the location of the next auto-focus window 330. For example, a direction of change may change the center point of the next auto-focus window 330. Alternately, the direction of change may change the size of the next auto-focus window 330. For example, a 10-pixel movement of the location of the natural feature 260′ or the next natural feature detection window 290′ may expand the next auto-focus window 330 by 10 pixels in each linear direction (i.e., up, down, left, right). If both direction and magnitude are available, the auto-focus engine 300 may change the center and size of the next auto-focus window 330 based on the combined change in direction and magnitude of the location of natural feature 260′.
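  • A corresponding sketch for this reverse direction is given below: the tracked feature's displacement both widens the next auto-focus window 330 (larger motion, larger uncertainty) and shifts its center. The one-pixel-of-growth-per-pixel-of-motion rule mirrors the 10-pixel example above and, like the function name, is only an assumption.

```python
def next_auto_focus_window(prev_af_window, feature_change):
    """Set the next auto-focus window from the change (dx, dy) in the tracked
    natural feature's location between the previous and next images."""
    x, y, w, h = prev_af_window
    dx, dy = feature_change
    magnitude = int(round((dx * dx + dy * dy) ** 0.5))
    # Larger feature motion implies more uncertainty: expand in every linear direction.
    x, y = x - magnitude, y - magnitude
    w, h = w + 2 * magnitude, h + 2 * magnitude
    # Shift the window center in the direction the feature moved.
    return (x + dx, y + dy, w, h)
```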
  • FIG. 15 shows a method for limiting an area for natural feature detection based on a location of an auto-focus window 310, in accordance with some embodiments of the present invention. At step 600, the auto-focus engine 300 in the mobile device 100 selects an auto-focus window 310 within an image 200. At step 610, the camera of the mobile device 100 performs auto-focusing in the selected auto-focus window 310. At step 620, the auto-focus engine 300 communicates a location of the auto-focus window 310 to the natural feature processing engine 110, the natural feature detection module 120, and/or the natural feature tracking module 125. Next, for example, at step 630, the natural feature tracking module 125 limits an area 500 for natural feature detection based on the location of the auto-focus window 310. In some cases, a threshold is used to expand or restrict the area 500 to an area greater or less than the auto-focus window 310. At step 640, the natural feature detection module 120 and/or the natural feature tracking module 125 detects and/or tracks natural feature(s) within the limited area 500.
  • FIG. 16 shows a method for setting a next tracking search window based on a change between a previous and a next auto-focus window, in accordance with some embodiments of the present invention. At step 700, the auto-focus engine 300 in the mobile device 100 sets a first or previous auto-focus window 320 within a first or previous image 200. After selecting the first auto-focus window 320, at step 710, the auto-focus engine 300 sets a second or next auto-focus window 330 within a second or next image 200. At step 720, the auto-focus engine 300 communicates a change from the previous auto-focus window 320 to the next auto-focus window 330 to the natural feature processing engine 110, the natural feature detection module 120, and/or the natural feature tracking module 125. At step 730, the natural feature tracking module 125 sets a next tracking search window 290′ based on the change 510. At step 740, the natural feature tracking module 125 tracks one or more natural features within the next tracking search window 290′.
  • FIG. 17 shows a method for setting a next auto-focus window 330 based on a change from a previous location to a next location of a natural feature, in accordance with some embodiments of the present invention. At step 800, the natural feature tracking module 125 tracks natural features from a first or previous location 260 within a first or previous image 200. At step 810, the natural feature tracking module 125 tracks the natural features to a second or next location 260′ within a second or next image 200. At step 820, the natural feature tracking module 125 communicates a change 520 from the previous location 260 to the next location 260′ to the auto-focus engine 300. At step 830, the auto-focus engine 300 sets a next auto-focus window 330 based on the change 520. At step 840, the auto-focus engine 300 auto-focuses within the next auto-focus window 330.
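  • Putting the steps of FIG. 17 together, one iteration of the feedback from natural feature tracking to auto-focus might look like the sketch below. The engine objects and their method names are hypothetical stand-ins, not interfaces defined in this disclosure.

```python
def track_then_refocus(camera, nf_engine, af_engine, prev_img, next_img):
    """One pass of steps 800-840: track a natural feature across two images,
    report its displacement, reposition the auto-focus window, and refocus."""
    prev_loc = nf_engine.track(prev_img)                   # step 800: previous location
    next_loc = nf_engine.track(next_img)                   # step 810: next location
    change = (next_loc[0] - prev_loc[0], next_loc[1] - prev_loc[1])
    af_engine.receive_feature_change(change)               # step 820: communicate change
    next_window = af_engine.set_next_window(change)        # step 830: set next AF window
    camera.auto_focus(next_window)                         # step 840: auto-focus in it
    return next_window
```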
  • The above embodiments are described in relation to mobile devices implementing augmented reality functionality tracking natural features. In general, these methods and apparatus are equally applicable to other applications that use computer vision-related technologies and may benefit from the teachings herein. For example, embodiments above may have the function of tracking natural features replaced or augmented with marker tracking and/or hand tracking. Embodiments may track and focus on a man-made marker (rather than a natural feature) such as a posted QR code (quick response code). Alternatively, embodiments may track and focus on a moving hand (rather than a fixed natural feature or man-made marker), for example, in order to capture gesture commands from a user. These embodiments may provide gesturing interfaces with or without augmented reality functionality.
  • The methodologies described herein may be implemented by various means depending upon the application. For example, these methodologies may be implemented in hardware, firmware, software, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof.
  • For a firmware and/or software implementation, the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. Any machine-readable medium tangibly embodying instructions may be used in implementing the methodologies described herein. For example, software codes may be stored in a memory and executed by a processor unit. Memory may be implemented within the processor unit or external to the processor unit. As used herein the term “memory” refers to any type of long term, short term, volatile, nonvolatile, or other memory and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored.
  • If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media includes physical computer storage media. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer; disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • In addition to storage on computer readable medium, instructions and/or data may be provided as signals on transmission media included in a communication apparatus. For example, a communication apparatus may include a transceiver having signals indicative of instructions and data. The instructions and data are configured to cause one or more processors to implement the functions outlined in the claims. That is, the communication apparatus includes transmission media with signals indicative of information to perform disclosed functions. At a first time, the transmission media included in the communication apparatus may include a first portion of the information to perform the disclosed functions, while at a second time the transmission media included in the communication apparatus may include a second portion of the information to perform the disclosed functions.
  • The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the spirit or scope of the disclosure.

Claims (29)

1. A mobile device for use in computer vision, the mobile device comprising:
a natural feature processing engine comprising a natural feature detection module and a natural feature tracking module; and
an auto-focus engine coupled to the natural feature processing engine to communicate information to set a location of a window comprising at least one of a natural feature window or an auto-focus window.
2. The mobile device of claim 1, wherein the window comprises a natural feature detection window having a limited area within an image.
3. The mobile device of claim 2, wherein the information indicates a location of an auto-focus window.
4. The mobile device of claim 1, wherein the window comprises a next tracking search window.
5. The mobile device of claim 4, wherein the information indicates a change in location from a previous auto-focus window to a next auto-focus window.
6. The mobile device of claim 4, wherein the information comprises a magnitude of the change, and wherein the next tracking search window is set based on the magnitude.
7. The mobile device of claim 4, wherein the information comprises a direction of the change, and wherein the next tracking search window is set based on the direction.
8. The mobile device of claim 1, wherein the window comprises an auto-focus window.
9. The mobile device of claim 8, wherein the information indicates a change from a previous location of a natural feature to a next location of the natural feature.
10. The mobile device of claim 1, further comprising an augmented reality module coupled to the natural feature processing engine.
11. A method in a mobile device for use in computer vision, the method comprising:
auto-focusing in an auto-focus window in an image using an auto-focus engine;
detecting and tracking natural features in the image with a natural feature processing engine; and
communicating information between the auto-focus engine and the natural feature processing engine.
12. The method of claim 11, wherein communicating information comprises communicating a location of the auto-focus window, and the method further comprises:
selecting the auto-focus window within the image;
limiting an area of a natural feature detection based on the location of the auto-focus window; and
finding a natural feature within the limited area.
13. The method of claim 11, wherein communicating information comprises communicating a change, and the method further comprises:
setting a first auto-focus window within a first image;
setting a second auto-focus window within a second image, wherein the change comprises a change from the first auto-focus window to the second auto-focus window;
setting a next tracking search window based on the change; and
tracking a natural feature within the next tracking search window.
14. The method of claim 11, wherein communicating information comprises communicating a change from a first location to a second location, the method further comprises:
tracking a natural feature to the first location within a first image;
tracking the natural feature to the second location within a second image;
setting a next auto-focus window based on the change; and
auto-focusing within the auto-focus window.
15. A mobile device for use in computer vision, the mobile device comprising:
a camera and an auto-focus engine; and
a processor and memory comprising code for
auto-focusing in an auto-focus window in an image using an auto-focus engine;
detecting and tracking natural features in the image with a natural feature processing engine; and
communicating information between the auto-focus engine and the natural feature processing engine.
16. The mobile device of claim 15, wherein the code for communicating information comprises code for communicating a location of the auto-focus window, and the mobile device further comprises code for:
selecting the auto-focus window within the image;
limiting an area of a natural feature detection based on the location of the auto-focus window; and
finding a natural feature within the limited area.
17. The mobile device of claim 15, wherein the code for communicating information comprises code for communicating a change, and the mobile device further comprises code for:
setting a first auto-focus window within a first image;
setting a second auto-focus window within a second image, wherein the change comprises a change from the first auto-focus window to the second auto-focus window;
setting a next tracking search window based on the change; and
tracking a natural feature within the next tracking search window.
18. A mobile device for use in computer vision, the mobile device comprising:
a camera and an auto-focus engine; and
a processor and memory comprising code for
auto-focusing in an auto-focus window in an image using an auto-focus engine;
detecting and tracking natural features in the image with a natural feature processing engine; and
communicating information between the auto-focus engine and the natural feature processing engine.
19. The mobile device of claim 18, wherein the code for communicating information comprises code for communicating a location of the auto-focus window, and the mobile device further comprising code for:
selecting the auto-focus window within the image;
limiting an area of a natural feature detection based on the location of the auto-focus window; and
finding a natural feature within the limited area.
20. The mobile device of claim 18, wherein the code for communicating information comprises code for communicating a change, and the mobile device further comprising code for:
setting a first auto-focus window within a first image;
setting a second auto-focus window within a second image, wherein the change comprises a change from the first auto-focus window to the second auto-focus window;
setting a next tracking search window based on the change; and
tracking a natural feature within the next tracking search window.
21. The mobile device of claim 18, wherein code for communicating information comprises code for communicating a change from a first location to a second location, the mobile device further comprising code for:
tracking a natural feature to the first location within a first image;
tracking the natural feature to the second location within a second image;
setting a next auto-focus window based on the change; and
auto-focusing within the auto-focus window.
22. A mobile device for use in computer vision, the mobile device comprising:
means for auto-focusing in an auto-focus window in an image using an auto-focus engine;
means for detecting and tracking natural features in the image with a natural feature processing engine; and
means for communicating information between the auto-focus engine and the natural feature processing engine.
23. The mobile device of claim 22, wherein the means for communicating information comprises means for communicating a location of the auto-focus window, and the mobile device further comprises:
means for selecting the auto-focus window within the image;
means for limiting an area of a natural feature detection based on the location of the auto-focus window; and
means for finding a natural feature within the limited area.
24. The mobile device of claim 22, wherein the means for communicating information comprises means for communicating a change, and the mobile device further comprises:
means for setting a first auto-focus window within a first image;
means for setting a second auto-focus window within a second image, wherein the change comprises a change from the first auto-focus window to the second auto-focus window;
means for setting a next tracking search window based on the change; and
means for tracking a natural feature within the next tracking search window.
25. The mobile device of claim 22, wherein the means for communicating information comprises means for communicating a change from a first location to a second location, and the mobile device further comprises:
means for tracking a natural feature to the first location within a first image;
means for tracking the natural feature to the second location within a second image;
means for setting a next auto-focus window based on the change; and
means for auto-focusing within the auto-focus window.
26. A nonvolatile computer-readable storage medium including program code stored thereon, comprising program code for:
auto-focusing in an auto-focus window in an image using an auto-focus engine;
detecting and tracking natural features in the image with a natural feature processing engine; and
communicating information between the auto-focus engine and the natural feature processing engine.
27. The nonvolatile computer-readable storage medium of claim 26, wherein the code for communicating information comprises code for communicating a location of the auto-focus window, and further comprises program code for:
selecting the auto-focus window within the image;
limiting an area of a natural feature detection based on the location of the auto-focus window; and
finding a natural feature within the limited area.
28. The nonvolatile computer-readable storage medium of claim 26, wherein the code for communicating information comprises code for communicating a change, and further comprises program code for:
setting a first auto-focus window within a first image;
setting a second auto-focus window within a second image, wherein the change comprises a change from the first auto-focus window to the second auto-focus window;
setting a next tracking search window based on the change; and
tracking a natural feature within the next tracking search window.
29. The nonvolatile computer-readable storage medium of claim 26, wherein the code for communicating information comprises code for communicating a change, and further comprises program code for:
tracking a natural feature to a first location within a first image;
tracking the natural feature to a second location within a second image;
setting a next auto-focus window based on the change; and
auto-focusing within the auto-focus window.
US13/034,577 2011-02-24 2011-02-24 Auto-focus tracking Active 2033-02-03 US9077890B2 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US13/034,577 US9077890B2 (en) 2011-02-24 2011-02-24 Auto-focus tracking
EP12716780.7A EP2679001A1 (en) 2011-02-24 2012-02-24 Auto-focus tracking
PCT/US2012/026655 WO2012116347A1 (en) 2011-02-24 2012-02-24 Auto-focus tracking
JP2013555625A JP6327858B2 (en) 2011-02-24 2012-02-24 Autofocus tracking
CN201280019909.0A CN103535021B (en) 2011-02-24 2012-02-24 Auto-focusing is followed the trail of
KR1020137025173A KR101517315B1 (en) 2011-02-24 2012-02-24 Auto-focus tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/034,577 US9077890B2 (en) 2011-02-24 2011-02-24 Auto-focus tracking

Publications (2)

Publication Number Publication Date
US20120218456A1 true US20120218456A1 (en) 2012-08-30
US9077890B2 US9077890B2 (en) 2015-07-07

Family

ID=46000327

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/034,577 Active 2033-02-03 US9077890B2 (en) 2011-02-24 2011-02-24 Auto-focus tracking

Country Status (6)

Country Link
US (1) US9077890B2 (en)
EP (1) EP2679001A1 (en)
JP (1) JP6327858B2 (en)
KR (1) KR101517315B1 (en)
CN (1) CN103535021B (en)
WO (1) WO2012116347A1 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140226023A1 (en) * 2013-02-13 2014-08-14 Sony Corporation Imaging apparatus, control method, and program
US20140334683A1 (en) * 2011-12-13 2014-11-13 Sony Corporation Image processing apparatus, image processing method, and recording medium
CN104297896A (en) * 2014-09-01 2015-01-21 联想(北京)有限公司 Focusing method and electronic equipment
US20150256740A1 (en) * 2014-03-05 2015-09-10 Disney Enterprises, Inc. Method for capturing photographs and videos on a handheld client device without continually observing the device's screen
US20160165129A1 (en) * 2014-12-09 2016-06-09 Fotonation Limited Image Processing Method
US20160196284A1 (en) * 2013-09-13 2016-07-07 Kyocera Corporation Mobile terminal and method for searching for image
US20170160550A1 (en) * 2014-07-31 2017-06-08 Seiko Epson Corporation Display device, control method for display device, and program
US9891069B2 (en) 2014-09-27 2018-02-13 Intel Corporation Location based haptic direction finding
US10798292B1 (en) 2019-05-31 2020-10-06 Microsoft Technology Licensing, Llc Techniques to set focus in camera in a mixed-reality environment with hand gesture interaction
US11178324B2 (en) * 2019-06-28 2021-11-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Focusing method and device, electronic device and computer-readable storage medium
US11190689B1 (en) 2020-07-29 2021-11-30 Google Llc Multi-camera video stabilization
US11227146B2 (en) * 2018-05-04 2022-01-18 Google Llc Stabilizing video by accounting for a location of a feature in a stabilized view of a frame
US11563884B2 (en) * 2019-01-31 2023-01-24 Canon Kabushiki Kaisha Focus detection apparatus, imaging apparatus, and focus detection method
US11683586B2 (en) 2017-10-03 2023-06-20 Google Llc Video stabilization

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103945126B (en) * 2014-04-21 2015-03-04 中国人民解放军国防科学技术大学 Automatic focusing and locating method
EP3061237B1 (en) 2014-09-26 2021-11-10 SZ DJI Technology Co., Ltd. System and method for automatic focusing based on statistic data
US9953247B2 (en) * 2015-01-29 2018-04-24 Samsung Electronics Co., Ltd. Method and apparatus for determining eye position information
CN111050060B (en) * 2018-10-12 2021-08-31 华为技术有限公司 Focusing method and device applied to terminal equipment and terminal equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110115945A1 (en) * 2009-11-17 2011-05-19 Fujifilm Corporation Autofocus system
US20110234885A1 (en) * 2006-10-03 2011-09-29 Nikon Corporation Tracking device and image-capturing apparatus

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6727948B1 (en) 1997-07-15 2004-04-27 Silverbrook Research Pty Ltd Utilizing autofocus information for image processing in a digital camera
US7769285B2 (en) * 2005-02-07 2010-08-03 Panasonic Corporation Imaging device
JP5045125B2 (en) 2006-03-15 2012-10-10 株式会社ニコン Subject tracking device and optical device
JP4457358B2 (en) 2006-05-12 2010-04-28 富士フイルム株式会社 Display method of face detection frame, display method of character information, and imaging apparatus
JP2008263478A (en) 2007-04-13 2008-10-30 Fujifilm Corp Imaging apparatus
JP4900014B2 (en) 2007-04-16 2012-03-21 カシオ計算機株式会社 Imaging apparatus and program thereof
JP2008287064A (en) 2007-05-18 2008-11-27 Sony Corp Imaging apparatus
JP5004726B2 (en) 2007-09-05 2012-08-22 キヤノン株式会社 Imaging apparatus, lens unit, and control method
JP5115210B2 (en) 2008-01-24 2013-01-09 株式会社ニコン Imaging device
EP2104338A3 (en) 2008-03-19 2011-08-31 FUJIFILM Corporation Autofocus system
JP2009229568A (en) 2008-03-19 2009-10-08 Fujinon Corp Autofocus system
KR20090113076A (en) 2008-04-25 2009-10-29 삼성디지털이미징 주식회사 Apparatus and method for braketing capture in digital image processing device
JP5335302B2 (en) 2008-06-30 2013-11-06 キヤノン株式会社 Focus detection apparatus and control method thereof
FR2933218B1 (en) * 2008-06-30 2011-02-11 Total Immersion METHOD AND APPARATUS FOR REAL-TIME DETECTION OF INTERACTIONS BETWEEN A USER AND AN INCREASED REALITY SCENE
JP2010015024A (en) 2008-07-04 2010-01-21 Canon Inc Image pickup apparatus, control method thereof, program and storage medium
JP2010096962A (en) * 2008-10-16 2010-04-30 Fujinon Corp Auto focus system with af frame auto-tracking function
JP2010113130A (en) 2008-11-06 2010-05-20 Nikon Corp Focus detecting device, imaging apparatus, focus detecting method
CN101750845B (en) 2008-12-12 2012-02-15 三星电机株式会社 Auto-focusing method
EP2207342B1 (en) * 2009-01-07 2017-12-06 LG Electronics Inc. Mobile terminal and camera image control method thereof

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110234885A1 (en) * 2006-10-03 2011-09-29 Nikon Corporation Tracking device and image-capturing apparatus
US20110115945A1 (en) * 2009-11-17 2011-05-19 Fujifilm Corporation Autofocus system

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140334683A1 (en) * 2011-12-13 2014-11-13 Sony Corporation Image processing apparatus, image processing method, and recording medium
US9818202B2 (en) * 2011-12-13 2017-11-14 Sony Corporation Object tracking based on distance prediction
US20140226023A1 (en) * 2013-02-13 2014-08-14 Sony Corporation Imaging apparatus, control method, and program
US20160196284A1 (en) * 2013-09-13 2016-07-07 Kyocera Corporation Mobile terminal and method for searching for image
US20150256740A1 (en) * 2014-03-05 2015-09-10 Disney Enterprises, Inc. Method for capturing photographs and videos on a handheld client device without continually observing the device's screen
US10027884B2 (en) * 2014-03-05 2018-07-17 Disney Enterprises, Inc. Method for capturing photographs and videos on a handheld client device without continually observing the device's screen
US10725300B2 (en) * 2014-07-31 2020-07-28 Seiko Epson Corporation Display device, control method for display device, and program
US20170160550A1 (en) * 2014-07-31 2017-06-08 Seiko Epson Corporation Display device, control method for display device, and program
CN104297896A (en) * 2014-09-01 2015-01-21 联想(北京)有限公司 Focusing method and electronic equipment
US9891069B2 (en) 2014-09-27 2018-02-13 Intel Corporation Location based haptic direction finding
US10455147B2 (en) * 2014-12-09 2019-10-22 Fotonation Limited Image processing method
US20160165129A1 (en) * 2014-12-09 2016-06-09 Fotonation Limited Image Processing Method
US11683586B2 (en) 2017-10-03 2023-06-20 Google Llc Video stabilization
US11227146B2 (en) * 2018-05-04 2022-01-18 Google Llc Stabilizing video by accounting for a location of a feature in a stabilized view of a frame
US11563884B2 (en) * 2019-01-31 2023-01-24 Canon Kabushiki Kaisha Focus detection apparatus, imaging apparatus, and focus detection method
US10798292B1 (en) 2019-05-31 2020-10-06 Microsoft Technology Licensing, Llc Techniques to set focus in camera in a mixed-reality environment with hand gesture interaction
WO2020242680A1 (en) * 2019-05-31 2020-12-03 Microsoft Technology Licensing, Llc Techniques to set focus in camera in a mixed-reality environment with hand gesture interaction
US11178324B2 (en) * 2019-06-28 2021-11-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Focusing method and device, electronic device and computer-readable storage medium
US11190689B1 (en) 2020-07-29 2021-11-30 Google Llc Multi-camera video stabilization
US11856295B2 (en) 2020-07-29 2023-12-26 Google Llc Multi-camera video stabilization

Also Published As

Publication number Publication date
CN103535021B (en) 2017-09-12
EP2679001A1 (en) 2014-01-01
CN103535021A (en) 2014-01-22
US9077890B2 (en) 2015-07-07
KR101517315B1 (en) 2015-05-04
KR20130124984A (en) 2013-11-15
JP6327858B2 (en) 2018-05-23
JP2014510943A (en) 2014-05-01
WO2012116347A1 (en) 2012-08-30

Similar Documents

Publication Publication Date Title
US9077890B2 (en) Auto-focus tracking
US9524434B2 (en) Object tracking based on dynamically built environment map data
US11263475B2 (en) Incremental learning for dynamic feature database management in an object recognition system
US9811731B2 (en) Dynamic extension of map data for object detection and tracking
US8427536B2 (en) Orientation determination of a mobile station using side and top view images
US10595162B2 (en) Access point environment characterization
JP6965253B2 (en) Alignment of reference frames for visual inertia odometry and satellite positioning systems
US9906921B2 (en) Updating points of interest for positioning
US10502840B2 (en) Outlier detection for satellite positioning system using visual inertial odometry
JP2017516079A (en) System, method and device for distributing positioning assistance data
WO2015183490A1 (en) Methods and apparatus for position estimation
US9407809B2 (en) Strategies for triggering depth sensors and transmitting RGBD images in a cloud-based object recognition system
US9870514B2 (en) Hypotheses line mapping and verification for 3D maps

Legal Events

Date Code Title Description
AS Assignment

Owner name: QUALCOMM INCORPORATED, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SWEET, CHARLES WHEELER, III;DIAZ SPINDOLA, SERAFIN;SIGNING DATES FROM 20110302 TO 20110323;REEL/FRAME:026021/0929

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8