CN110908516A - Facilitating user proficiency in using radar gestures to interact with electronic devices - Google Patents

Info

Publication number
CN110908516A
CN110908516A (application number CN201911194059.8A)
Authority
CN
China
Prior art keywords
radar
gesture
user
motion
visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911194059.8A
Other languages
Chinese (zh)
Inventor
丹尼尔·佩尔·耶普松
劳伦·玛丽·贝达尔
维格内什·萨奇达南达姆
莫尔格温·奎因·麦卡蒂
布兰东·查尔斯·巴尔贝洛
亚历山大·李
莱昂纳多·朱斯蒂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US application 16/601,452 (published as US 2021/0103337 A1)
Application filed by Google LLC
Publication of CN110908516A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 — Gesture-based interaction, e.g. based on a set of recognized hand gestures
    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20 — Input arrangements for video game devices
    • A63F 13/21 — Input arrangements for video game devices characterised by their sensors, purposes or types

Abstract

This document describes techniques that facilitate a user's proficiency in using radar gestures to interact with an electronic device. Using the described techniques, an electronic device can employ a radar system to detect and determine radar-based, touch-independent gestures (radar gestures) made by a user to interact with the electronic device and with applications running on the electronic device. To use radar gestures to control or interact with the electronic device, the user must perform the radar gestures correctly. The described techniques therefore also provide a game or tutorial environment that allows users to learn and practice radar gestures in a natural way. The game or tutorial environment also provides visual feedback elements that give the user feedback when a radar gesture is made correctly and when it is not, making learning and practice a pleasant and enjoyable experience for the user.

Description

Facilitating user proficiency in using radar gestures to interact with electronic devices
Priority application
This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 62/910,135, entitled "Facilitating User-Proficiency in Using Radar Gestures to Interact with an Electronic Device," filed October 3, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The application relates to facilitating a user to be proficient in using radar gestures to interact with an electronic device.
Background
Smartphones, wearable computers, tablets, and other electronic devices are used for personal and business purposes. Users communicate with them via voice and touch and treat them as virtual assistants to schedule meetings and events, consume digital media, and share presentations and other documents. Additionally, machine-learning techniques can help these devices anticipate some of their users' preferences for using the device. For all of this computing power and artificial intelligence, however, these devices remain passive communicators. That is, however "smart" a smartphone is, and however much the user talks to it as if it were a person, the electronic device can only perform tasks and provide feedback after the user interacts with the device. A user can interact with the electronic device in a variety of ways, including voice, touch, and other input techniques. As new technical capabilities and features are introduced, users may have to learn new input techniques or different ways of using existing input techniques. Only after learning these new techniques and methods can the user take advantage of the new features, applications, and functions that are available. A lack of experience with new features and input methods often results in a poor user experience with the device.
Disclosure of Invention
This document describes techniques and systems that facilitate a user's proficiency in using radar gestures to interact with an electronic device. The techniques and systems use a radar field to enable the electronic device to accurately determine the presence or absence of a user near the electronic device and to detect a reach or other radar gesture made by the user to interact with the electronic device. Further, the electronic device includes an application that can help the user learn how to properly make radar gestures that can be used to interact with the electronic device. The application may be a game, a tutorial, or another format that allows the user to learn how to make radar gestures that effectively interact with or control the electronic device. The application may also use machine-learning techniques and models to help the radar system and the electronic device better recognize how different users make radar gestures. The application and the machine-learning functionality can improve the user's proficiency in using radar gestures and allow the user to take advantage of the additional functionality and features provided by the availability of radar gestures, resulting in a better user experience.
The aspects described below include a method performed by a radar-gesture-enabled electronic device. The method includes presenting a first visual game element on a display of the radar-gesture-enabled electronic device. The method also includes receiving first radar data corresponding to a first motion of a user in a radar field provided by a radar system that is included in or associated with the radar-gesture-enabled electronic device. The method includes determining, based on the first radar data, whether the first motion of the user in the radar field comprises a first radar gesture. The method further includes, in response to determining that the first motion of the user in the radar field comprises the first radar gesture, presenting a successful visual animation of the first visual game element, the successful visual animation of the first visual game element indicating a successful advancement of the visual game play. Alternatively, the method includes, in response to determining that the first motion of the user in the radar field does not comprise the first radar gesture, presenting an unsuccessful visual animation of the first visual game element, the unsuccessful visual animation of the first visual game element indicating a failure to advance the visual game play.
Other aspects described below include a radar-gesture-enabled electronic device comprising a radar system, a computer processor, and a computer-readable medium. The radar system is implemented at least partially in hardware and provides a radar field. The radar system also senses reflections from a user in the radar field, analyzes the reflections from the user in the radar field, and provides radar data based on the analysis of the reflections. The computer-readable medium stores instructions that can be executed by one or more computer processors to implement a gesture training module. The gesture training module presents a first visual game element on a display of the radar-gesture-enabled electronic device in the context of a visual game play. The gesture training module also receives a first subset of the radar data corresponding to a first motion of the user in the radar field. The gesture training module determines, based on the first subset of the radar data, whether the first motion of the user in the radar field comprises a first radar gesture. In response to determining that the first motion of the user in the radar field comprises the first radar gesture, the gesture training module presents a successful visual animation of the first visual game element, the successful visual animation of the first visual game element indicating a successful advancement of the visual game play. Alternatively, in response to determining that the first motion of the user in the radar field does not comprise the first radar gesture, the gesture training module presents an unsuccessful visual animation of the first visual game element, the unsuccessful visual animation of the first visual game element indicating a failure to advance the visual game play.
In other aspects, a radar-gesture-enabled electronic device is described that includes a radar system, a computer processor, and a computer-readable medium. The radar system is implemented at least partially in hardware and provides a radar field. The radar system also senses reflections from a user in the radar field, analyzes the reflections from the user in the radar field, and provides radar data based on the analysis of the reflections. The radar-gesture-enabled electronic device includes means for presenting a first visual game element on a display of the radar-gesture-enabled electronic device in the context of a visual game play. The radar-gesture-enabled electronic device also includes means for receiving a first subset of the radar data corresponding to a first motion of the user in the radar field. The radar-gesture-enabled electronic device also includes means for determining, based on the first subset of the radar data, whether the first motion of the user in the radar field comprises a first radar gesture. The radar-gesture-enabled electronic device also includes means for presenting a successful visual animation of the first visual game element in response to determining that the first motion of the user in the radar field comprises the first radar gesture, the successful visual animation of the first visual game element indicating a successful advancement of the visual game play. Alternatively, the radar-gesture-enabled electronic device includes means for presenting an unsuccessful visual animation of the first visual game element in response to determining that the first motion of the user in the radar field does not comprise the first radar gesture, the unsuccessful visual animation of the first visual game element indicating a failure to advance the visual game play.
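As one way to make the described present/receive/determine/animate flow concrete, the following minimal Kotlin sketch outlines that sequence. All type and function names here (GameDisplay, GestureTrainingSketch, the classifier parameter) are illustrative assumptions for this document, not identifiers from the patent.

```kotlin
// Hypothetical sketch of the described training flow; names are illustrative only.
interface GameDisplay {
    fun presentVisualGameElement(id: String)
    fun playSuccessAnimation(id: String)      // indicates the visual game play advances
    fun playUnsuccessfulAnimation(id: String) // indicates the visual game play does not advance
}

class GestureTrainingSketch(
    private val display: GameDisplay,
    private val isRequestedGesture: (FloatArray) -> Boolean // true if the motion is the requested radar gesture
) {
    fun runTrainingStep(firstRadarData: FloatArray) {
        display.presentVisualGameElement("first-element")
        if (isRequestedGesture(firstRadarData)) {
            display.playSuccessAnimation("first-element")
        } else {
            display.playUnsuccessfulAnimation("first-element")
        }
    }
}
```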
This summary is provided to introduce simplified concepts that facilitate a user's proficiency in using radar gestures to interact with an electronic device. The simplified concepts are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining the scope of the claimed subject matter.
Drawings
Aspects of facilitating a user's proficiency in using radar gestures to interact with an electronic device are described with reference to the following figures. The same numbers are used throughout the drawings to reference like features and components:
FIG. 1 illustrates an exemplary operating environment in which techniques may be implemented to enable a user to be proficient in using radar gestures to interact with an electronic device.
FIG. 2 illustrates an exemplary implementation that facilitates a user proficient in using radar gestures to interact with an electronic device in the exemplary operating environment of FIG. 1.
FIG. 3 illustrates another exemplary implementation that facilitates proficiency in user interaction with an electronic device using radar gestures in the exemplary operating environment of FIG. 1.
FIG. 4 illustrates an exemplary implementation of an electronic device including a radar system by which facilitating proficiency of a user in using radar gestures to interact with the electronic device may be implemented.
Fig. 5 shows an exemplary implementation of the radar system of fig. 1 and 4.
Fig. 6 shows an exemplary arrangement for accommodating antenna elements of the radar system of fig. 5.
Fig. 7 shows additional details of an exemplary implementation of the radar system of fig. 1 and 4.
Fig. 8 illustrates an exemplary scheme that may be implemented by the radar systems of fig. 1 and 4.
FIG. 9 illustrates an exemplary method of using a tutorial environment with visual elements and visual feedback elements to facilitate a user's proficiency in using radar gestures to interact with an electronic device.
FIGS. 10-22 illustrate examples of visual elements and visual feedback elements used with the tutorial environment method described in FIG. 9.
FIG. 23 illustrates another exemplary method of using a gaming environment that includes visual game elements and animations of visual game elements to facilitate a user's proficiency in using radar gestures to interact with an electronic device.
FIGS. 24-33 illustrate examples of visual game elements and animations of visual game elements used with the game environment method described in FIG. 23.
FIG. 34 illustrates an exemplary computing system that can be implemented as any type of client, server, and/or electronic device described with reference to FIGS. 1-33, or in which techniques that facilitate a user's proficiency in using radar gestures to interact with an electronic device can be implemented.
Detailed Description
SUMMARY
This document describes techniques and systems that facilitate a user's proficiency in using radar gestures to interact with an electronic device. The described techniques may employ a radar system that detects and determines radar-based, touch-independent gestures (radar gestures) made by a user to interact with the electronic device and an application or program running on the electronic device. To use radar gestures to control or interact with the electronic device, the user must make or perform individual radar gestures correctly (otherwise, there is a risk that a radar gesture is ignored or that a motion that is not a gesture is detected as a gesture). Thus, the described techniques also use applications that can present a tutorial or game environment that allows users to learn and practice radar gestures in a natural way. The tutorial or game environment also provides visual feedback elements that give the user feedback when a radar gesture is made correctly and when it is not, making learning and practice a pleasant and enjoyable experience for the user.
In this description, the terms "radar-based touch-independent gesture," "3D gesture," and "radar gesture" refer to the nature of a gesture that is made in space, at a distance from the electronic device (e.g., the gesture does not require the user to touch the device, though it does not preclude touch). The radar gesture itself may often have only two-dimensional active-information components, such as a radar gesture consisting of an upper-left-to-lower-right swipe made in a plane, but because the radar gesture also has a distance (a "third" dimension or depth) from the electronic device, the radar gestures discussed herein can generally be considered three-dimensional. Applications that can receive control input through radar-based touch-independent gestures are referred to as radar gesture applications or radar-enabled applications.
Consider an exemplary smartphone that includes the described radar system and a tutorial (or game) application. In this example, the user runs the tutorial or game and interacts with elements presented on the display of the electronic device. The user interacts with the elements or plays the game, which requires the user to make radar gestures. When the user makes the radar gesture correctly, the tutorial or game play advances. When the user makes the radar gesture incorrectly, the application may provide additional feedback to help the user make the gesture. A radar gesture is determined to be successful (e.g., made correctly) based on various criteria that may vary depending on factors such as the type of radar gesture application used with the gesture or the type of radar gesture (e.g., a horizontal swipe, a vertical swipe, or a pinch). For example, the criteria may include the shape of the radar gesture, the speed of the radar gesture, or the proximity of the user's hand to the electronic device during completion of the radar gesture.
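The success criteria just mentioned (gesture shape, speed, and hand proximity) could be expressed as a simple rule check. The sketch below is a hypothetical illustration; the threshold values and the per-gesture-type groupings are assumptions, not values taken from this document.

```kotlin
// Illustrative gesture-success check; thresholds are assumptions for demonstration.
data class GestureObservation(
    val pathShapeScore: Float,     // 0..1 similarity to the expected gesture shape
    val speedMetersPerSec: Float,  // speed of the hand motion
    val handDistanceMeters: Float  // proximity of the hand to the device
)

fun isGestureSuccessful(obs: GestureObservation, gestureType: String): Boolean {
    val (minShape, speedRange, maxDistance) = when (gestureType) {
        "horizontal-swipe", "vertical-swipe" -> Triple(0.8f, 0.2f..2.0f, 0.3f)
        "pinch" -> Triple(0.7f, 0.05f..1.0f, 0.2f)
        else -> return false
    }
    return obs.pathShapeScore >= minShape &&
        obs.speedMetersPerSec in speedRange &&
        obs.handDistanceMeters <= maxDistance
}
```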
The described techniques and systems employ a radar system, among other features, to provide a useful and rewarding user experience, including visual feedback and game play, based on the user's gestures and the operation of radar gesture applications on the electronic device. Rather than relying solely on the user's knowledge and awareness of a particular radar gesture application, the electronic device provides feedback to the user that indicates the success or failure of a radar gesture. Some conventional electronic devices may include instructions for using different input methods (e.g., as part of the device packaging or documentation). For example, the electronic device may provide a few illustrations or a web address in a package insert. In some cases, an application may also have a "help" function. Conventional electronic devices, however, typically fail to provide a useful and rich ambient experience that can teach the user about the capabilities of the electronic device and the user's interactions with it.
These are but a few examples of how the described techniques and systems can be used to facilitate a user's proficiency in using radar gestures to interact with an electronic device; other examples and implementations are described throughout this document. The document now turns to an exemplary operating environment, after which exemplary devices, methods, and systems are described.
Operating environment
FIG. 1 illustrates an exemplary environment 100 in which techniques that facilitate a user's proficiency in using radar gestures to interact with an electronic device can be implemented. The exemplary environment 100 includes an electronic device 102, which includes or is associated with a persistent radar system 104, a persistent gesture training module 106 (gesture training module 106), and, optionally, one or more non-radar sensors 108 (non-radar sensor 108). The term "persistent," with reference to the radar system 104 or the gesture training module 106, means that no user interaction is required to activate the radar system 104 (which may operate in various modes, such as a sleep mode, an engaged mode, or an active mode) or the gesture training module 106. In some implementations, the "persistent" state may be paused or turned off (e.g., by the user). In other implementations, the "persistent" state may be scheduled or otherwise managed in accordance with one or more parameters of the electronic device 102 (or another electronic device). For example, the user may schedule the "persistent" state so that it operates only during daytime hours, even though the electronic device 102 is on both at night and throughout the day. The non-radar sensor 108 can be any of a variety of devices, such as an audio sensor (e.g., a microphone), a touch-input sensor (e.g., a touchscreen), a motion sensor, or an image-capture device (e.g., a camera or video camera).
In exemplary environment 100, radar system 104 provides radar field 110 by transmitting one or more radar signals or waveforms, as described below with reference to fig. 5-8. Radar field 110 is a volume of space from which radar system 104 may detect reflections of radar signals and waveforms (e.g., radar signals and waveforms reflected from objects in the volume of space). The radar field 110 may be configured in a variety of shapes, such as a sphere, hemisphere, ellipsoid, cone, one or more lobes, or an asymmetric shape (e.g., which may cover an area on both sides of an obstacle that is not transparent to radar). Radar system 104 also enables electronic device 102 or another electronic device to sense and analyze reflections from objects or motion in radar field 110.
Some implementations of the radar system 104 are particularly advantageous as applied in the context of smartphones (e.g., the electronic device 102), for which there are issues such as low-power requirements, processing-efficiency requirements, limitations on the spacing and placement of antenna elements, and other issues, and are even further advantageous in the particular context of smartphones for which radar-based detection of fine gestures is desired. Although the embodiments are particularly advantageous in the described context of a smartphone for which fine radar-detected gestures are required, it is to be appreciated that the applicability of the features and advantages of the present invention is not necessarily so limited, and other embodiments involving other types of electronic devices (e.g., as described with reference to FIG. 4) are also within the scope of the present teachings.
With respect to interaction with radar system 104 or interaction through radar system 104, the object may be any of a variety of objects that radar system 104 may sense and analyze for radar reflections, such as wood, plastic, metal, fabric, a human body, or a portion of a human body (e.g., a foot, hand, or finger of a user of electronic device 102). As shown in fig. 1, the object is a user's hand 112 (user 112). Based on the analysis of the reflections, radar system 104 may provide radar data that includes various types of information associated with radar field 110 and reflections from user 112 (or a portion of user 112), as described with reference to fig. 5-8 (e.g., radar system 104 may communicate radar data to other entities, such as gesture training module 106).
The radar data may be provided continuously or periodically over time based on sensed and analyzed reflections from objects (e.g., the user 112 or a portion of the user 112 in the radar field 110). The location of the user 112 may change over time (e.g., objects in the radar field may move within the radar field 110), and thus, the radar data may change over time, corresponding to the changing location, reflection, and analysis. Because radar data may vary over time, radar system 104 provides radar data that includes one or more subsets of radar data corresponding to different time periods. For example, radar system 104 may provide a first subset of radar data corresponding to a first time period, a second subset of radar data corresponding to a second time period, and so on. In some cases, different subsets of radar data may overlap in whole or in part (e.g., one subset of radar data may include the same or partially the same data as another subset of radar data).
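Because the radar data arrives as time-ordered samples, the subsets described above can be produced by windowing the stream by time period, with overlapping windows sharing samples. The sketch below is illustrative only; the RadarSample type and field names are assumptions.

```kotlin
// Illustrative time-windowing of a radar data stream into subsets.
data class RadarSample(val timestampMs: Long, val values: FloatArray)

fun subsetForPeriod(
    samples: List<RadarSample>,
    startMs: Long,
    endMs: Long
): List<RadarSample> =
    samples.filter { it.timestampMs in startMs until endMs }

// Overlapping subsets, e.g., a first period of 0-500 ms and a second period of
// 250-750 ms, would share the samples between 250 ms and 500 ms.
```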
In some implementations, the radar system 104 may provide the radar field 110 such that the field of view (e.g., the volume within which the electronic device 102, the radar system 104, or the gesture training module 106 can determine radar gestures) includes a volume around the electronic device that is within about one meter and at angles greater than about ten degrees from the electronic device 102, as measured from a plane of the electronic device's display. For example, a gesture may be made within about one meter of the electronic device 102 and at an angle of at least about ten degrees from the plane of the display 114. In other words, the field of view of the radar system 104 may include a radar field volume of approximately 160 degrees around a line that is generally perpendicular to a plane or surface of the electronic device.
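One way to read this geometry is as a range test plus an elevation-angle test relative to the plane of the display. The following sketch is an illustrative assumption that uses the approximate one-meter and ten-degree figures from the description; the coordinate convention and function name are not from the patent.

```kotlin
import kotlin.math.atan2
import kotlin.math.sqrt

// Sketch of a field-of-view test: within roughly one meter of the device and
// more than about ten degrees above the display plane (a ~160-degree volume
// around the display normal). x and y lie in the display plane, z is along
// the display normal; all coordinates in meters.
fun isInFieldOfView(x: Float, y: Float, z: Float): Boolean {
    val range = sqrt(x * x + y * y + z * z)
    val elevationDeg = Math.toDegrees(atan2(z.toDouble(), sqrt(x * x + y * y).toDouble()))
    return range <= 1.0f && elevationDeg >= 10.0
}
```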
The electronic device 102 may also include a display 114 and an application manager 116. The display 114 may include any suitable display device, such as a touch screen, a Liquid Crystal Display (LCD), a Thin Film Transistor (TFT) LCD, an in-plane switching (IPS) LCD, a capacitive touch screen display, an Organic Light Emitting Diode (OLED) display, an Active Matrix Organic Light Emitting Diode (AMOLED) display, a super AMOLED display, or the like. The display 114 is used to display visual elements associated with various modes of the electronic device 102, which will be described in further detail with reference to fig. 10-33. The application manager 116 can communicate with and interact with applications running on the electronic device 102 to determine and resolve conflicts between applications (e.g., processor resource usage, power usage, or access to other components of the electronic device 102). The application manager 116 may also interact with the application to determine available input modes of the application, such as touch, voice, or radar gestures (and types of radar gestures), and communicate the available modes to the gesture training module 106.
The electronic device 102 can detect motion of the user 112 within the radar field 110, such as for radar gesture detection. For example, the gesture training module 106 (either independently or through the application manager 116) can determine that an application running on the electronic device has the capability to receive control input corresponding to radar gestures (e.g., is a radar gesture application) and which types of gestures the radar gesture application can receive. The radar gestures may be based on (or determined using) radar data received through the radar system 104. For example, the gesture training module 106 may present a tutorial or game environment to the user, and the gesture training module 106 (or the radar system 104) may then use one or more subsets of the radar data to detect a motion or movement performed by a portion of the user 112 (such as a hand) or by an object within a gesture zone 118 of the electronic device 102. The gesture training module 106 can then determine whether the user's motion is a radar gesture. For example, the electronic device also includes a gesture library 120. The gesture library 120 is a storage device or location that can store data or information related to known radar gestures or radar gesture templates. The gesture training module 106 can compare the radar data associated with the motion of the user 112 within the gesture zone 118 to the data or information stored in the gesture library 120 to determine whether the motion of the user 112 is a radar gesture. Additional details of the gesture zone 118 and the gesture library 120 are described below.
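The comparison against the gesture library could be implemented as template matching over extracted motion features. The sketch below is hypothetical: the feature representation, the cosine-similarity metric, and the acceptance threshold are all assumptions, not details given in this document.

```kotlin
// Illustrative template matching against stored gesture templates.
data class GestureTemplate(val name: String, val features: FloatArray)

class GestureLibrarySketch(private val templates: List<GestureTemplate>) {
    // Returns the best-matching gesture name, or null if nothing matches well enough.
    fun bestMatch(motionFeatures: FloatArray, minScore: Float = 0.85f): String? =
        templates
            .map { it.name to similarity(it.features, motionFeatures) }
            .maxByOrNull { it.second }
            ?.takeIf { it.second >= minScore }
            ?.first

    private fun similarity(a: FloatArray, b: FloatArray): Float {
        // Cosine similarity as one plausible comparison metric.
        var dot = 0f; var na = 0f; var nb = 0f
        for (i in a.indices) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i] }
        return dot / (kotlin.math.sqrt(na) * kotlin.math.sqrt(nb) + 1e-9f)
    }
}
```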
The gesture zone 118 is an area or volume around the electronic device 102 within which the radar system 104 (or another module or application) can detect motion of the user or a portion of the user (e.g., the user's hand 112) and determine whether that motion is a radar gesture. The gesture zone is a smaller region or area than the radar field (e.g., the gesture zone has a smaller volume than, and is within, the radar field). For example, the gesture zone 118 can be a fixed volume around the electronic device that has a static size and/or shape (e.g., a threshold distance around the electronic device 102, such as within 3, 5, 7, 9, or 12 inches) that is predefined, variable, user-selectable, or determined via another method (e.g., based on power requirements, remaining battery life, an imaging/depth sensor, or another factor). In addition to the advantages associated with the field of view of the radar system 104, the radar system 104 (and associated programs, modules, and managers) allows the electronic device 102 to detect the user's motion and determine radar gestures in low-light or no-light environments because the radar system does not need light to operate.
In other cases, the gesture zone 118 can be a volume around the electronic device that is dynamically and automatically adjustable by the electronic device 102, the radar system 104, or the gesture training module 106 based on factors such as the velocity or location of the electronic device 102, the time of day, the state of an application running on the electronic device 102, or another factor. Although the radar system 104 can detect objects within the radar field 110 at greater distances, the gesture zone 118 helps the electronic device 102 and radar gesture applications distinguish between intentional radar gestures by the user and other kinds of motion that may resemble radar gestures but that the user does not intend as gestures. The gesture zone 118 can be configured with a threshold distance, such as within approximately 3, 5, 7, 9, or 12 inches. In some cases, the gesture zone may extend different threshold distances from the electronic device in different directions (e.g., it can have a wedge, rectangular, oval, or asymmetric shape). The size or shape of the gesture zone can also vary over time or be based on other factors, such as the state of the electronic device (e.g., battery level, orientation, locked or unlocked) or the environment (such as in a pocket or purse, in a car, or on a flat surface).
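A dynamically adjustable gesture zone of the kind described above can be modeled as a threshold distance that is recomputed from device state. The rules and distances in this sketch are illustrative assumptions only, not values specified by the patent.

```kotlin
// Illustrative dynamic gesture-zone sizing based on device state.
data class DeviceState(val batteryFraction: Float, val isMoving: Boolean, val inPocket: Boolean)

fun gestureZoneRadiusInches(state: DeviceState): Float {
    if (state.inPocket) return 0f                    // no gesture zone while stowed
    var radius = 9f                                  // nominal threshold distance
    if (state.batteryFraction < 0.15f) radius = 5f   // shrink the zone to save power
    if (state.isMoving) radius = minOf(radius, 7f)   // tighten the zone while the device moves
    return radius
}

fun isInGestureZone(handDistanceInches: Float, state: DeviceState): Boolean =
    handDistanceInches <= gestureZoneRadiusInches(state)
```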
In some implementations, the gesture training module 106 can be used to provide a tutorial or game environment in which the user 112 can interact with the electronic device 102 using radar gestures in order to learn and practice making radar gestures. For example, the gesture training module can present elements on the display 114 that can be used to teach the user how to make and use radar gestures. The element can be any suitable element, such as a visual element, a visual game element, or a visual feedback element. FIG. 1 shows an exemplary visual element 122, an exemplary visual game element 124, and an exemplary visual feedback element 126. For visual simplicity in FIG. 1, the examples are represented as generic shapes. These exemplary elements, however, can take any of a variety of forms, such as abstract shapes, geometric shapes, symbols, video images (e.g., embedded video presented on the display 114), or a combination of one or more forms. In other cases, the elements can be real or fictional characters, such as humans or animals (real or mythical), or media or game characters, such as Pikachu™. Additional examples and details of these elements are described with reference to FIGS. 2-33.
Consider the example shown in FIG. 2, which shows the user 112 within the gesture zone 118. In FIG. 2, an exemplary visual element 122-1 (shown as a ball component and a dog component) is presented on the display 114. In this example, assume that the gesture training module 106 is presenting the visual element 122-1 to request that the user 112 make a left-to-right swiping radar gesture (e.g., to train the user to make that gesture). Assume further that the visual element 122-1 is initially presented with the ball component near the left edge of the display 114, as indicated by the dashed-line representation of the ball component, and with the dog component near the right edge of the display 114. Continuing the example, the user 112 makes a left-to-right hand motion, as indicated by the arrow 202. Assume that the gesture training module 106 determines that the user's motion is a left-to-right swiping radar gesture. In response to the radar gesture, the gesture training module 106 can provide an exemplary visual feedback element 126-1 that indicates that the user successfully performed the requested left-to-right radar gesture. For example, the visual feedback element 126-1 can be an animation of the visual element 122-1 in which the ball component of the visual element 122-1 moves toward the dog component of the visual element 122-1 (from left to right), as indicated by another arrow 204.
Consider another example, illustrated in FIG. 3, which shows the user 112 within the gesture zone 118. In FIG. 3, an exemplary visual game element 124-1 (shown as a basketball component and a hoop component) is presented on the display 114. In this example, assume that the gesture training module 106 is presenting the visual game element 124-1 to request that the user 112 make a left-to-right swiping radar gesture (e.g., to train the user to make that gesture). Assume further that the visual game element 124-1 is initially presented with the basketball component near the left edge of the display 114, as indicated by the dashed-line representation of the basketball component, and with the hoop component near the right edge of the display 114. Continuing the example, the user 112 makes a left-to-right hand motion, as indicated by the arrow 302. Assume that the gesture training module 106 determines that the user's motion is a left-to-right swiping radar gesture. In response to the radar gesture, the gesture training module 106 can provide an exemplary visual feedback element 126-2 that indicates that the user successfully performed the requested left-to-right radar gesture. For example, the visual feedback element 126-2 can be a successful animation of the visual game element 124-1 in which the basketball component of the visual game element 124-1 moves toward the hoop component of the visual game element 124-1 (from left to right), as indicated by another arrow 304.
In the examples 200 or 300 above, the gesture training module 106 may instead determine that the user's motion is not the requested gesture. In response to determining that the motion is not the requested radar gesture, the gesture training module 106 can provide another visual feedback element that indicates that the user did not successfully perform the requested left-to-right radar gesture (not shown in FIG. 2 or FIG. 3). Other examples of the visual elements 122, the visual game elements 124, and the visual feedback elements 126 are described with reference to FIGS. 10-22 and 24-33. These examples show how the described techniques (including the visual elements 122, the visual game elements 124, and the visual feedback elements 126) can be used to give users a natural and enjoyable opportunity to learn and practice radar gestures, which can improve the user's experience with the electronic device 102 and with radar gesture applications running on the electronic device 102.
In more detail, consider FIG. 4, which illustrates an exemplary implementation 400 of the electronic device 102 (including the radar system 104, the gesture training module 106, the non-radar sensor 108, the display 114, the application manager 116, and the gesture library 120) that can implement aspects of facilitating a user's proficiency in using radar gestures to interact with an electronic device. The electronic device 102 of FIG. 4 is illustrated with a variety of exemplary devices, including a smartphone 102-1, a tablet computer 102-2, a laptop computer 102-3, a desktop computer 102-4, a computing watch 102-5, a gaming system 102-6, computing glasses 102-7, a home automation and control system 102-8, a smart refrigerator 102-9, and an automobile 102-10. The electronic device 102 can also include other devices, such as televisions, entertainment systems, audio systems, drones, touch pads, graphics tablets, netbooks, e-readers, home security systems, and other home appliances. Note that the electronic device 102 can be a wearable device, a non-wearable but mobile device, or a relatively immobile device (e.g., desktops and appliances). The term "wearable device," as used in this disclosure, refers to any device (e.g., a watch, a bracelet, a ring, a necklace or other jewelry, eyeglasses, footwear, a glove, a headband or other headwear, clothing, goggles, or contact lenses) that can be worn at, on, or near a human body (such as a wrist, ankle, waist, chest, or other body part or prosthesis).
In some implementations, exemplary overall lateral dimensions of the electronic device 102 can be approximately eight centimeters by approximately fifteen centimeters. An exemplary footprint of the radar system 104 can be even more limited, such as approximately four millimeters by six millimeters with antennas included. The requirement for such a limited footprint of the radar system 104 is to accommodate the many other desirable features of the electronic device 102 (e.g., a fingerprint sensor, the non-radar sensor 108, and so forth) in such a space-limited package. Combined with power and processing limitations, this size requirement can lead to compromises in the accuracy and efficacy of radar gesture detection, at least some of which can be overcome in view of the teachings herein.
The electronic device 102 also includes one or more computer processors 402 and one or more computer-readable media 404, including memory media and storage media. An application and/or operating system (not shown) embodied as computer-readable instructions on computer-readable medium 404 may be executed by computer processor 402 to provide some or all of the functionality described herein. For example, the processor 402 may be used to execute instructions on the computer-readable medium 404 to implement the gesture training module 106 and/or the application manager 116. The electronic device 102 may also include a network interface 406. The electronic device 102 may use the network interface 406 to communicate data over a wired, wireless, or optical network. By way of example, and not limitation, network interface 406 may communicate data over a Local Area Network (LAN), a Wireless Local Area Network (WLAN), a Personal Area Network (PAN), a Wide Area Network (WAN), an intranet, the Internet, a peer-to-peer network, or a mesh network.
Various implementations of radar system 104 may include a system on a chip (SoC), one or more Integrated Circuits (ICs), a processor with embedded processor instructions or configured to access processor instructions stored in memory, hardware with embedded firmware, a printed circuit board with various hardware components, or any combination thereof. The radar system 104 may operate as a monostatic radar by sending and receiving its own radar signals.
In some embodiments, the radar system 104 may also cooperate with other radar systems 104 within the external environment to implement bistatic, multistatic, or network radar. However, constraints or limitations of the electronic device 102 may affect the design of the radar system 104. For example, the electronic device 102 may have limited power, limited computational power, size constraints, layout constraints, a housing that attenuates or distorts radar signals, and so forth, that may be used to operate a radar. The radar system 104 includes several features that enable advanced radar functionality and high performance in the presence of these constraints, as further described below with reference to fig. 5. Note that in fig. 1 and 4, radar system 104, gesture training module 106, application manager 116, and gesture library 120 are shown as part of electronic device 102. In other implementations, one or more of radar system 104, gesture training module 106, application manager 116, or gesture library 120 may be separate or remote from electronic device 102.
These and other capabilities and configurations, as well as the manner in which the entities of FIG. 1 act and interact, are set forth in greater detail below. These entities may be further divided, combined, and the like. The environment 100 of fig. 1 and the detailed examples of fig. 2-34 illustrate some of the many possible environments and devices in which the techniques can be employed. Fig. 5-8 depict additional details and features of radar system 104. In fig. 5-8, radar system 104 is described in the context of electronic device 102, but as noted above, the applicability of the features and advantages of the described systems and techniques is not necessarily so limited, and other implementations involving other types of electronic devices are within the scope of the present teachings.
FIG. 5 illustrates an exemplary implementation 500 of the radar system 104 that can be used to facilitate a user's proficiency in using radar gestures to interact with an electronic device. In the example 500, the radar system 104 includes at least one of each of the following components: a communication interface 502, an antenna array 504, a transceiver 506, a processor 508, and a system media 510 (e.g., one or more computer-readable storage media). The processor 508 can be implemented as a digital signal processor, a controller, an application processor, another processor (e.g., the computer processor 402 of the electronic device 102), or some combination thereof. The system media 510, which may be included within, or be separate from, the computer-readable media 404 of the electronic device 102, includes one or more of the following modules: an attenuation mitigator 514, a digital beamformer 516, an angle estimator 518, or a power manager 520. These modules can compensate for, or mitigate the effects of, integrating the radar system 104 within the electronic device 102, thereby enabling the radar system 104 to recognize small or complex gestures, distinguish between different orientations of the user, continuously monitor an external environment, or realize a target false-alarm rate. With these features, the radar system 104 can be implemented within a variety of different devices, such as the devices illustrated in FIG. 1.
Using the communication interface 502, the radar system 104 can provide radar data to the gesture training module 106. The communication interface 502 can be a wireless or wired interface, based on whether the radar system 104 is implemented separately from, or integrated within, the electronic device 102. Depending on the application, the radar data may include raw or minimally processed data, in-phase and quadrature (I/Q) data, range-Doppler data, processed data including target location information (e.g., range, azimuth, elevation), clutter map data, and so forth. Generally, the radar data contains information that is usable by the gesture training module 106 to facilitate a user's proficiency in using radar gestures to interact with the electronic device.
Antenna array 504 includes at least one transmit antenna element (not shown) and at least two receive antenna elements (as shown in fig. 6). In some cases, antenna array 504 may include multiple transmit antenna elements to implement a multiple-input multiple-output (MIMO) radar capable of transmitting multiple different waveforms at once (e.g., each transmit antenna element transmits a different waveform). The use of multiple waveforms may improve the measurement accuracy of radar system 104. For embodiments including three or more receive antenna elements, the receive antenna elements may be placed in a one-dimensional shape (e.g., a line) or a two-dimensional shape. One-dimensional shapes enable radar system 104 to measure one angular dimension (e.g., azimuth or elevation), while two-dimensional shapes enable two angular dimensions (e.g., azimuth and elevation). An exemplary two-dimensional arrangement of receive antenna elements is further described with reference to fig. 6.
Fig. 6 shows exemplary arrangements 600 of the receive antenna elements 602. If the antenna array 504 includes at least four receive antenna elements 602, for example, the receive antenna elements 602 can be arranged in a rectangular arrangement 604-1, as depicted in the middle of FIG. 6. Alternatively, a triangular arrangement 604-2 or an L-shaped arrangement 604-3 can be used if the antenna array 504 includes at least three receive antenna elements 602.
Due to size or layout constraints of the electronic device 102, the element spacing between the receive antenna elements 602 or the number of receive antenna elements 602 may not be ideal for the angles that the radar system 104 is to monitor. In particular, the element spacing can cause angular ambiguities, which make it challenging for conventional radars to estimate the angular position of a target. Conventional radars may therefore limit the field of view (e.g., the angles that are to be monitored) to avoid an ambiguous zone that has angular ambiguities and thereby reduce false detections. For example, conventional radars may limit the field of view to angles between -45 degrees and 45 degrees to avoid angular ambiguities that occur with a wavelength of 5 millimeters (mm) and an element spacing of 3.5 mm (e.g., an element spacing that is 70% of the wavelength). Consequently, a conventional radar may be unable to detect targets that are beyond the 45-degree limits of the field of view. In contrast, the radar system 104 includes the digital beamformer 516 and the angle estimator 518, which resolve the angular ambiguities and enable the radar system 104 to monitor angles beyond the 45-degree limit, such as angles between approximately -90 degrees and 90 degrees, or up to approximately -180 degrees and 180 degrees. These angular ranges can be applied across one or more directions (e.g., azimuth and/or elevation). Accordingly, the radar system 104 can realize low false-alarm rates for a variety of different antenna array designs, including element spacings that are less than, greater than, or equal to half a center wavelength of the radar signal.
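The ±45-degree figure is consistent with the usual unambiguous-steering relation for a uniform array, |sin θ| ≤ λ/(2d), where d is the element spacing and λ the wavelength; with λ = 5 mm and d = 3.5 mm the bound is about 45.6 degrees. The short calculation below reproduces that number (the helper function name is ours, not the patent's).

```kotlin
import kotlin.math.asin

// Worked example of the angular-ambiguity limit: with a 5 mm wavelength and a
// 3.5 mm element spacing (70% of the wavelength), the unambiguous range is
// roughly |sin(theta)| <= wavelength / (2 * spacing).
fun unambiguousAngleDeg(wavelengthMm: Double, spacingMm: Double): Double =
    Math.toDegrees(asin((wavelengthMm / (2.0 * spacingMm)).coerceAtMost(1.0)))

fun main() {
    println(unambiguousAngleDeg(5.0, 3.5)) // ~45.6 degrees, matching the ±45-degree limit
}
```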
Using the antenna array 504, the radar system 104 may form a beam that is steered or non-steered, wide or narrow, or shaped (e.g., shaped as a hemisphere, cube, sector, cone, or cylinder). As an example, one or more transmit antenna elements (not shown) may have an omnidirectional radiation pattern that is not steered, or may produce a wide beam, such as wide transmit beam 606. Any of these techniques can cause radar system 104 to illuminate a large volume of space. However, to achieve the target angular accuracy and angular resolution, the receive antenna elements 602 and digital beamformer 516 may be used to generate thousands of narrow and steered beams (e.g., 2000 beams, 4000 beams, or 6000 beams), such as narrow receive beam 608. In this manner, radar system 104 may effectively monitor the external environment and accurately determine the angle of arrival of the reflections within the external environment.
Returning to fig. 5, the transceiver 506 includes circuitry and logic for transmitting and receiving radar signals via the antenna array 504. Components of transceiver 506 may include amplifiers, mixers, switches, analog-to-digital converters, filters, etc. for conditioning radar signals. The transceiver 506 may also include logic to perform in-phase/quadrature (I/Q) operations such as modulation or demodulation. The transceiver 506 may be configured for continuous wave radar operation or pulsed radar operation. Various modulations may be used to generate the radar signal, including linear frequency modulation, triangular frequency modulation, stepped frequency modulation, or phase modulation.
The transceiver 506 can generate radar signals within a range of frequencies (e.g., a frequency spectrum), such as between 1 gigahertz (GHz) and 400 GHz, between 4 GHz and 100 GHz, or between 57 GHz and 63 GHz. The frequency spectrum can be divided into multiple sub-spectra that have similar or different bandwidths. The bandwidths can be approximately 500 megahertz (MHz), 1 GHz, 2 GHz, and so forth. As an example, different frequency sub-spectra may include frequencies between approximately 57 GHz and 59 GHz, 59 GHz and 61 GHz, or 61 GHz and 63 GHz. Multiple frequency sub-spectra that have the same bandwidth and that are contiguous or non-contiguous can also be chosen for coherence. The multiple frequency sub-spectra can be transmitted simultaneously or separated in time using a single radar signal or multiple radar signals. Contiguous frequency sub-spectra enable the radar signal to have a wider bandwidth, while non-contiguous frequency sub-spectra can further emphasize amplitude and phase differences that enable the angle estimator 518 to resolve angular ambiguities. The attenuation mitigator 514 or the angle estimator 518 can cause the transceiver 506 to utilize one or more frequency sub-spectra to improve the performance of the radar system 104, as further described with respect to FIGS. 7 and 8.
Power manager 520 enables radar system 104 to conserve power within or outside electronic device 102. In some implementations, the power manager 520 communicates with the gesture training module 106 to save power within one or both of the radar system 104 or the electronic device 102. Internally, for example, power manager 520 may cause radar system 104 to collect data using a predetermined power pattern or a particular gesture frame update rate. The gesture frame update rate represents the frequency at which radar system 104 actively monitors the external environment by sending and receiving one or more radar signals. In general, power consumption is proportional to the gesture frame update rate. Thus, a higher gesture frame update rate may result in a greater amount of power being consumed by radar system 104.
Each predefined power mode may be associated with a particular frame structure, a particular transmit power level, or particular hardware (e.g., a low power processor or a high power processor). Adjusting one or more of these may affect the power consumption of radar system 104. However, reducing power consumption can impact performance, such as gesture frame update rate and response delay. In this case, power manager 520 dynamically switches between different power modes such that the gesture frame update rate, response delay, and power consumption are managed together based on activity within the environment. In general, power manager 520 determines when and how power may be conserved and incrementally adjusts power consumption to enable radar system 104 to operate within the power limits of electronic device 102. In some cases, power manager 520 may monitor the remaining amount of available power and adjust the operation of radar system 104 accordingly. For example, if the remaining battery is low, power manager 520 may continue to operate in the low power mode rather than switching to the high power mode.
The low power mode may, for example, use a lower gesture frame update rate of about a few hertz (e.g., about 1Hz or less than 5Hz) and consume about a few milliwatts (mW) (e.g., between about 2mW and 4 mW) of power. On the other hand, the high power mode may use a higher gesture frame update rate of about a few tens of hertz (Hz) (e.g., about 20Hz or greater than 10Hz), which causes the radar system 104 to consume about a few milliwatts (e.g., between about 6mW and 20 mW) of power. Although a low power mode may be used to monitor the external environment or detect a user in proximity, if radar system 104 determines that a user is beginning to perform a gesture, power manager 520 may switch to a higher power mode. Different triggers may cause power manager 520 to dynamically switch between different power modes. Exemplary triggers include motion or no motion, presence or absence of a user, movement of a user into or out of a designated area (e.g., an area defined by range, azimuth, or elevation), a change in velocity of motion associated with a user, or a change in reflected signal strength (e.g., due to a change in radar cross-section). In general, a trigger indicating a lower probability of user interaction with the electronic device 102 or a preference to collect data using a longer response delay may result in activating a lower power mode to conserve power.
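The trigger-driven switching between power modes described above can be sketched as a small decision function. The mode parameters and decision rules below are illustrative assumptions that loosely follow the low-power and high-power figures given in this section, including the preference to remain in the low-power mode when the remaining battery is low.

```kotlin
// Illustrative power-mode selection; frame rates and power figures are approximate
// values from the description, and the decision rules are assumptions.
enum class PowerMode(val frameRateHz: Int, val approxPowerMw: Int) {
    LOW(1, 2), HIGH(20, 8)
}

fun selectPowerMode(userPresent: Boolean, gestureLikely: Boolean, batteryLow: Boolean): PowerMode =
    when {
        batteryLow -> PowerMode.LOW                    // stay low-power even if a gesture may begin
        userPresent && gestureLikely -> PowerMode.HIGH // user appears to be starting a gesture
        else -> PowerMode.LOW                          // monitor the environment at low cost
    }
```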
Each power mode may be associated with a particular frame structure. The frame structure specifies the configuration, schedule, and signal characteristics associated with the transmission and reception of radar signals. Generally, the frame structure is set so that appropriate radar data can be collected based on the external environment. The frame structure may be customized to facilitate the collection of different types of radar data for different applications (e.g., proximity detection, feature recognition, or gesture recognition). During the inactive time of each stage of the overall frame structure, power manager 520 may turn off components within transceiver 506 in fig. 5 to save power. The frame structure allows power to be saved by an adjustable duty cycle within each frame type. For example, the first duty cycle may be based on a number of active feature frames relative to a total number of feature frames. The second duty cycle may be based on a number of active radar frames relative to a total number of feature frames. The third duty cycle may be based on a duration of the radar signal relative to a duration of the radar frame.
Consider an exemplary frame structure (not shown) for the low-power mode that consumes approximately 2 mW of power and has a gesture frame update rate between approximately 1 Hz and 4 Hz. In this example, the frame structure includes a gesture frame with a duration between approximately 250 ms and 1 second. The gesture frame includes thirty-one pulse-mode feature frames. One of the thirty-one pulse-mode feature frames is in the active state. This results in a duty cycle of approximately 3.2%. The duration of each pulse-mode feature frame is between approximately 8 ms and 32 ms. Each pulse-mode feature frame is composed of eight radar frames. Within the active pulse-mode feature frame, all eight radar frames are in the active state. This results in a duty cycle of 100%. The duration of each radar frame is between approximately 1 ms and 4 ms. The active time within each active radar frame is between approximately 32 µs and 128 µs. As such, the resulting duty cycle is approximately 3.2%. This exemplary frame structure has been found to yield good performance results, reflected in good gesture recognition and state detection while also yielding good power-efficiency results in the application context of a handheld smartphone in a low-power state. Based on this exemplary frame structure, the power manager 520 can determine when the radar system 104 is not actively collecting radar data. Based on this inactive time period, the power manager 520 can conserve power by adjusting an operational state of the radar system 104 and turning off one or more components of the transceiver 506, as further described below.
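The three duty cycles in this example can be verified with simple arithmetic: 1 active feature frame out of 31 gives about 3.2%, 8 of 8 radar frames active gives 100%, and roughly 32 µs of active time per 1 ms radar frame (or 128 µs per 4 ms) again gives about 3.2%. A quick check (the function is ours, for illustration only):

```kotlin
// Worked check of the three duty cycles in the example frame structure.
fun main() {
    val featureFrameDuty = 1.0 / 31.0   // 1 of 31 pulse-mode feature frames active ≈ 3.2%
    val radarFrameDuty = 8.0 / 8.0      // all 8 radar frames active = 100%
    val activeTimeDuty = 32e-6 / 1e-3   // 32 µs active per 1 ms radar frame ≈ 3.2%
    println("%.1f%% %.0f%% %.1f%%".format(featureFrameDuty * 100, radarFrameDuty * 100, activeTimeDuty * 100))
    // Prints: 3.2% 100% 3.2% (128 µs per 4 ms radar frame gives the same ~3.2%)
}
```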
The power manager 520 can also conserve power by turning off one or more components within the transceiver 506 (e.g., a voltage-controlled oscillator, a multiplexer, an analog-to-digital converter, a phase-locked loop, or a crystal oscillator) during inactive time periods. These inactive time periods, which may be on the order of microseconds (µs), milliseconds (ms), or seconds (s), occur if the radar system 104 is not actively transmitting or receiving radar signals. Further, the power manager 520 can modify the transmit power of the radar signals by adjusting the amount of amplification provided by a signal amplifier. Additionally, the power manager 520 can control the use of different hardware components within the radar system 104 to conserve power. For example, if the processor 508 comprises a lower-power processor and a higher-power processor (e.g., processors with different amounts of memory and computational capability), the power manager 520 can switch between using the lower-power processor for low-level analysis (e.g., implementing an idle mode, detecting motion, determining a user's location, or monitoring the environment) and using the higher-power processor for situations in which the gesture training module 106 requests high-fidelity or accurate radar data (e.g., for implementing a perception mode, an engagement mode or active mode, gesture recognition, or user orientation).
Further, the power manager 520 can determine a context of the environment around the electronic device 102. From that context, the power manager 520 can determine which power states are to be made available and how they are configured. For example, if the electronic device 102 is in a user's pocket, then although the user is detected as being proximate to the electronic device 102, there is no need for the radar system 104 to operate in a higher-power mode with a high gesture frame update rate. Accordingly, the power manager 520 can keep the radar system 104 in a lower-power mode, and keep the display 114 in an off or other lower-power state, even though the user is detected as being proximate to the electronic device 102. The electronic device 102 can determine the context of its environment using any suitable non-radar sensor 108 (e.g., a gyroscope, an accelerometer, a light sensor, a proximity sensor, a capacitive sensor, and so on) in combination with the radar system 104. The context may include time of day, calendar day, darkness, the number of users near the user, ambient noise level, the speed of movement of surrounding objects (including the user) relative to the electronic device 102, and so forth.
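Context-aware gating of the power state, as in the in-pocket example above, can be sketched as follows; the Context fields, the enum, and the rules are assumptions for illustration.

```kotlin
// Illustrative context-aware power gating: stay low-power while the phone is
// stowed in a pocket even though a user is detected nearby.
enum class RadarPower { LOW, HIGH }

data class EnvironmentContext(val inPocket: Boolean, val userNearby: Boolean)

fun configureForContext(ctx: EnvironmentContext): Pair<RadarPower, Boolean /* display on */> =
    when {
        ctx.inPocket -> RadarPower.LOW to false   // user may be near, but keep radar and display low-power
        ctx.userNearby -> RadarPower.HIGH to true
        else -> RadarPower.LOW to false
    }
```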
Fig. 7 illustrates additional details of an exemplary implementation 700 of the radar system 104 within the electronic device 102. In example 700, the antenna array 504 is positioned underneath an exterior housing of the electronic device 102 (such as a glass cover or external case). Depending on its material properties, the housing may act as an attenuator 702 that attenuates or distorts radar signals transmitted and received by the radar system 104. The attenuator 702 may include different types of glass or plastic, some of which may be found in display screens, exterior housings, or other components of the electronic device 102, and may have a dielectric constant (e.g., relative permittivity) between approximately four and ten. The attenuator 702 may therefore be opaque or semi-transparent to a radar signal 706 and may cause a portion of a transmitted or received radar signal 706 to be reflected (as shown by a reflected portion 704). For conventional radars, the attenuator 702 may decrease the effective range that can be monitored, prevent detection of small targets, or reduce overall accuracy.
Assuming that the transmit power of the radar system 104 is limited and that re-designing the housing is not desirable, one or more attenuation-dependent properties of the radar signal 706 (e.g., a frequency sub-spectrum 708 or a steering angle 710) or attenuation-dependent characteristics of the attenuator 702 (e.g., a distance 712 between the attenuator 702 and the radar system 104 or a thickness 714 of the attenuator 702) are adjusted to mitigate the effects of the attenuator 702. Some of these characteristics may be set during manufacture or adjusted by the attenuation mitigator 514 during operation of the radar system 104. For example, the attenuation mitigator 514 may cause the transceiver 506 to transmit the radar signal 706 using a selected frequency sub-spectrum 708 or steering angle 710, cause a platform to move the radar system 104 closer to or farther from the attenuator 702 to change the distance 712, or prompt the user to apply another attenuator to increase the thickness 714 of the attenuator 702.
These adjustments can be made by the attenuation mitigator 514 based on predetermined characteristics of the attenuator 702 (e.g., characteristics stored in the computer-readable medium 404 of the electronic device 102 or within the system medium 510) or by processing returns of the radar signal 706 to measure one or more characteristics of the attenuator 702. Even if some of the attenuation-dependent characteristics are fixed or constrained, the attenuation mitigator 514 can take these limitations into account to balance each parameter and achieve a target radar performance. As a result, the attenuation mitigator 514 enables the radar system 104 to achieve enhanced accuracy and a greater effective range for detecting and tracking users located on the opposite side of the attenuator 702. These techniques provide alternatives to increasing the transmit power, which increases the power consumption of the radar system 104, or to changing the material properties of the attenuator 702, which can be difficult and expensive once a device is in production.
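The following sketch illustrates the kind of parameter search implied above, in which a frequency sub-spectrum and a steering angle are chosen to minimize a modeled loss through the attenuator; the loss model, the candidate values, and the function names are placeholders and not the attenuation mitigator's actual method.

```python
# Illustrative parameter search over attenuation-dependent properties.
# The loss model below is a toy stand-in, not the patent's.

import math

def modeled_loss_db(freq_ghz, steer_deg, rel_permittivity, thickness_mm):
    # Toy model: loss grows with frequency, thickness, permittivity, and
    # with the longer path length through the attenuator at larger steering angles.
    path = thickness_mm / max(math.cos(math.radians(steer_deg)), 0.2)
    return 0.05 * freq_ghz * path * math.sqrt(rel_permittivity)

candidate_subspectra = [57.0, 59.0, 61.0, 63.0]   # GHz, hypothetical sub-spectra
candidate_steering = [-30.0, 0.0, 30.0]           # degrees, hypothetical steering angles

best = min(
    ((f, a, modeled_loss_db(f, a, rel_permittivity=6.0, thickness_mm=1.0))
     for f in candidate_subspectra for a in candidate_steering),
    key=lambda t: t[2],
)
print(f"selected sub-spectrum {best[0]} GHz, steering {best[1]} deg, "
      f"modeled loss {best[2]:.2f} dB")
```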
Fig. 8 shows an exemplary scheme 800 implemented by radar system 104. Portions of scheme 800 may be performed by processor 508, computer processor 402, or other hardware circuitry. Scheme 800 may be customized to support different types of electronic devices and radar-based applications (e.g., gesture training module 106), and may enable radar system 104 to achieve target angle accuracy despite design constraints.
The transceiver 506 generates raw data 802 based on the individual responses of the receive antenna elements 602 to the received radar signals. The received radar signals may be associated with one or more frequency sub-spectra 804 selected by an angle estimator 518 to facilitate angle ambiguity resolution. For example, frequency sub-spectrum 804 may be selected to reduce the number of side lobes or to reduce the amplitude of the side lobes (e.g., by 0.5dB, 1dB, or more). The number of frequency sub-spectra may be determined based on a target angular accuracy or computational limit of the radar system 104.
Raw data 802 contains digital information (e.g., in-phase and quadrature data) over a period of time, for different wavenumbers, and for multiple channels respectively associated with the receive antenna elements 602. A Fast Fourier Transform (FFT) 806 is performed on the raw data 802 to generate preprocessed data 808. The preprocessed data 808 includes digital information across the period of time, for different range bins, and for the multiple channels. A Doppler filtering process 810 is performed on the preprocessed data 808 to generate range-Doppler data 812. The Doppler filtering process 810 may include another FFT that generates amplitude and phase information for multiple range bins, multiple Doppler frequencies, and the multiple channels. The digital beamformer 516 generates beamforming data 814 based on the range-Doppler data 812. The beamforming data 814 contains digital information for a set of azimuth and/or elevation angles that represents the field of view for which the digital beamformer forms different steering angles or beams. Although not shown, the digital beamformer 516 may alternatively generate the beamforming data 814 based on the preprocessed data 808, and the Doppler filtering process 810 may generate the range-Doppler data 812 based on the beamforming data 814. To reduce the amount of computation, the digital beamformer 516 may process a portion of the range-Doppler data 812 or the preprocessed data 808 based on a range, time, or Doppler frequency interval of interest.
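For illustration, a minimal numpy sketch of this processing chain (range FFT, Doppler FFT, then digital beamforming) is shown below; the array sizes, the half-wavelength uniform-linear-array steering model, and the variable names are assumptions of the sketch.

```python
# Minimal numpy sketch of the range FFT -> Doppler FFT -> beamforming chain.
# Sizes and the steering model are illustrative placeholders.

import numpy as np

n_channels, n_chirps, n_samples = 3, 16, 64
rng = np.random.default_rng(0)
raw = rng.standard_normal((n_channels, n_chirps, n_samples)) \
    + 1j * rng.standard_normal((n_channels, n_chirps, n_samples))

# Range FFT over fast-time samples -> "pre-processed" data (range bins).
preprocessed = np.fft.fft(raw, axis=-1)

# Doppler FFT over slow time (chirps) -> range-Doppler data per channel.
range_doppler = np.fft.fftshift(np.fft.fft(preprocessed, axis=1), axes=1)

# Digital beamforming: weight and sum the channels for a set of steering
# angles (simple uniform linear array with half-wavelength spacing).
angles = np.deg2rad(np.linspace(-60, 60, 25))
element_idx = np.arange(n_channels)
steering = np.exp(1j * np.pi * np.outer(np.sin(angles), element_idx))  # (angles, channels)
beamformed = np.einsum('ac,cdr->adr', steering.conj(), range_doppler)

print(beamformed.shape)  # (steering angles, Doppler bins, range bins)
```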
The digital beamformer 516 may be implemented using a single-view beamformer 816, a multi-view interferometer 818, or a multi-view beamformer 820. In general, the single-view beamformer 816 may be used for deterministic objects (e.g., point-source targets having a single phase center). For non-deterministic targets (e.g., targets having multiple phase centers), the multi-view interferometer 818 or the multi-view beamformer 820 may be used to improve accuracy relative to the single-view beamformer 816. Humans are an example of a non-deterministic target and have multiple phase centers 822 that can change based on different aspect angles (as shown at 824-1 and 824-2). Variations in the constructive or destructive interference generated by the multiple phase centers 822 can make it challenging for conventional radars to accurately determine angular position. However, the multi-view interferometer 818 or the multi-view beamformer 820 performs coherent averaging to increase the accuracy of the beamforming data 814. The multi-view interferometer 818 coherently averages two channels to generate phase information that can be used to accurately determine angular information. The multi-view beamformer 820, on the other hand, may coherently average two or more channels using linear or non-linear beamformers, such as Fourier, Capon, multiple signal classification (MUSIC), or minimum variance distortionless response (MVDR). The increased accuracy provided via the multi-view beamformer 820 or the multi-view interferometer 818 enables the radar system 104 to recognize small gestures or to distinguish between multiple portions of the user.
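The two-channel coherent-averaging idea attributed to the multi-view interferometer 818 can be sketched as follows; the half-wavelength element spacing, the noise model, and the function name are assumptions for illustration.

```python
# Sketch of two-channel coherent averaging to extract an angle estimate.
# Assumes elements spaced half a wavelength apart; names are illustrative.

import numpy as np

def interferometric_angle(ch0, ch1):
    """ch0, ch1: complex range-Doppler cells of interest for two receive channels."""
    # Coherent averaging of the cross-spectrum preserves phase; noise averages out.
    cross = np.mean(ch0 * np.conj(ch1))
    phase = np.angle(cross)
    # For half-wavelength spacing: phase difference = pi * sin(theta).
    return np.degrees(np.arcsin(np.clip(phase / np.pi, -1.0, 1.0)))

# Example: a target at ~20 degrees with a little noise on the second channel.
rng = np.random.default_rng(1)
true_phase = np.pi * np.sin(np.radians(20.0))
n = 128
ch0 = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
ch1 = ch0 * np.exp(-1j * true_phase) + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
print(f"estimated angle: {interferometric_angle(ch0, ch1):.1f} deg")
```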
The angle estimator 518 analyzes the beamforming data 814 to estimate one or more angular positions. The angle estimator 518 may utilize signal-processing techniques, pattern-matching techniques, or machine learning. The angle estimator 518 also accounts for angular ambiguities that may result from the design of the radar system 104 or the field of view that the radar system 104 monitors. An exemplary angular ambiguity is shown within an amplitude plot 826 (e.g., an amplitude response).
The amplitude plot 826 depicts amplitude differences that can occur for different angular positions of the target and for different steering angles 710. A first amplitude response 828-1 (shown with a solid line) is shown for a target positioned at a first angular position 830-1. Likewise, a second amplitude response 828-2 (shown with a dashed line) is shown for a target positioned at a second angular position 830-2. In this example, the differences are considered across angles between -180 degrees and 180 degrees.
As shown in the amplitude plot 826, an ambiguous zone exists for the two angular positions 830-1 and 830-2. The first amplitude response 828-1 has a highest peak at the first angular position 830-1 and a smaller peak at the second angular position 830-2. Although the highest peak corresponds to the actual position of the target, the smaller peak causes the first angular position 830-1 to be ambiguous, because it is within some threshold for which conventional radars may be unable to confidently determine whether the target is at the first angular position 830-1 or the second angular position 830-2. In contrast, the second amplitude response 828-2 has a smaller peak at the second angular position 830-2 and a higher peak at the first angular position 830-1. In this case, the smaller peak corresponds to the location of the target.
While conventional radars may be limited to using the highest peak amplitude to determine angular position, the angle estimator 518 instead analyzes subtle differences in the shapes of the amplitude responses 828-1 and 828-2. Characteristics of the shapes may include, for example, roll-offs, peak or null widths, angular locations of the peaks or nulls, heights or depths of the peaks and nulls, shapes of sidelobes, symmetry within the amplitude response 828-1 or 828-2, or a lack of symmetry within the amplitude response 828-1 or 828-2. Similar shape characteristics can be analyzed in a phase response, which can provide additional information for resolving angular ambiguities. The angle estimator 518 therefore maps unique angular signatures or patterns to angular positions.
The angle estimator 518 may include a suite of algorithms or tools that can be selected based on the type of electronic device 102 (e.g., computational capability or power constraints) or a target angular resolution for the gesture training module 106. In some implementations, the angle estimator 518 may include a neural network 832, a convolutional neural network (CNN) 834, or a long short-term memory (LSTM) network 836. The neural network 832 may have various depths or quantities of hidden layers (e.g., three hidden layers, five hidden layers, or ten hidden layers) and may also include different quantities of connections (e.g., the neural network 832 may comprise a fully connected neural network or a partially connected neural network). In some cases, the CNN 834 may be used to increase the computational speed of the angle estimator 518. The LSTM network 836 may be used to enable the angle estimator 518 to track the target. Using machine-learning techniques, the angle estimator 518 employs non-linear functions to analyze the shape of the amplitude response 828-1 or 828-2 and generates angle probability data 838 that indicates a likelihood that the user or a portion of the user is within an angular bin. The angle estimator 518 may provide the angle probability data 838 for a few angular bins, such as two angular bins that provide a probability of the target being to the left or right of the electronic device 102, or for thousands of angular bins (e.g., to provide the angle probability data 838 for a continuous angular measurement).
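As a toy illustration of the estimator's interface, the following sketch maps a few hand-chosen shape features of an amplitude response to a probability per angular bin through a small non-linear network; the weights are untrained random placeholders standing in for learned parameters, and the feature definitions are assumptions.

```python
# Toy sketch: shape features of an amplitude response -> probability per
# angular bin. Weights are random placeholders, not learned parameters.

import numpy as np

rng = np.random.default_rng(2)

def extract_shape_features(amplitude_response):
    # Crude stand-ins for the shape characteristics mentioned above:
    # peak height, peak position, roll-off, and an asymmetry measure.
    peak = amplitude_response.max()
    peak_idx = amplitude_response.argmax() / len(amplitude_response)
    rolloff = np.abs(np.diff(amplitude_response)).mean()
    asym = np.abs(amplitude_response - amplitude_response[::-1]).mean()
    return np.array([peak, peak_idx, rolloff, asym])

n_bins = 64                       # angular bins (could be two or thousands)
W1, b1 = rng.standard_normal((16, 4)), np.zeros(16)
W2, b2 = rng.standard_normal((n_bins, 16)), np.zeros(n_bins)

def angle_probabilities(amplitude_response):
    x = extract_shape_features(amplitude_response)
    h = np.tanh(W1 @ x + b1)
    logits = W2 @ h + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()            # probability the target is in each angular bin

probs = angle_probabilities(rng.random(180))
print(probs.argmax(), probs.max())
```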
Based on the angle probability data 838, a tracker module 840 generates angular position data 842 that identifies an angular position of the target. The tracker module 840 may determine the angular position of the target based on the angular bin that has the highest probability in the angle probability data 838 or based on prediction information (e.g., previously measured angular position information).
A quantizer module 844 obtains the angular position data 842 and quantizes the data to produce quantized angular position data 846. The quantization may be performed based on a target angular resolution for the gesture training module 106. In some cases, fewer quantization levels may be used such that the quantized angular position data 846 indicates whether the target is to the right or to the left of the electronic device 102, or identifies a 90-degree quadrant in which the target is located. This may be sufficient for some radar-based applications, such as user proximity detection. In other cases, more quantization levels may be used such that the quantized angular position data 846 indicates the angular position of the target within an accuracy of a fraction of a degree, one degree, five degrees, and so forth. This resolution may be used for higher-resolution radar-based applications, such as gesture recognition, or for implementing gesture zones, recognition zones, perception modes, engagement modes, or activity modes as described herein. In some embodiments, the digital beamformer 516, the angle estimator 518, the tracker module 840, and the quantizer module 844 are implemented together in a single machine-learning module.
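A compact sketch of the tracker and quantizer stages might look like the following; the bin layout, the prediction blend, and the quantization steps are illustrative assumptions.

```python
# Sketch of the tracker/quantizer stages: pick the most probable angular bin,
# blend with the previous estimate, and quantize to the needed resolution.

import numpy as np

def track_angle(angle_probs, prev_angle=None, alpha=0.7):
    """Pick the most probable angular bin, optionally blended with the previous estimate."""
    bin_centers = np.linspace(-90.0, 90.0, len(angle_probs))
    measured = bin_centers[int(np.argmax(angle_probs))]
    if prev_angle is None:
        return measured
    return alpha * measured + (1 - alpha) * prev_angle  # simple prediction blend

def quantize_angle(angle_deg, step_deg):
    # step_deg = 90 for a left/right-quadrant decision; 1 or less for gesture recognition.
    return round(angle_deg / step_deg) * step_deg

probs = np.zeros(64)
probs[40] = 1.0
angle = track_angle(probs, prev_angle=20.0)
print(quantize_angle(angle, step_deg=90.0), quantize_angle(angle, step_deg=1.0))
```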
These and other capabilities and configurations, as well as the manner in which the entities of fig. 1-8 act and interact, are set forth below. The entities may be further divided, combined, used with other sensors or components, and so forth. In this manner, different implementations of the electronic device 102 having different configurations of the radar system 104 and the non-radar sensor may be used to implement aspects that facilitate a user's proficiency in using radar gestures to interact with the electronic device. The exemplary operating environment 100 of fig. 1 and the detailed illustrations of fig. 2-8 illustrate only a few of the many possible environments and devices in which the techniques can be employed.
Exemplary methods
Figs. 9-22 and 23-33 depict exemplary methods 900 and 2300, which facilitate a user's proficiency in using radar gestures to interact with an electronic device. The methods 900 and 2300 may be performed by an electronic device that includes, or is associated with, a display, a computer processor, and a radar system that can provide a radar field, such as the electronic device 102 (and the radar system 104). The radar system and radar field may provide radar data based on reflections of the radar field from objects in the radar field (e.g., the user 112 or a portion of the user 112, such as a hand). For example, the radar data may be generated by the radar system 104 and/or received through the radar system 104, as described with reference to figs. 1-8. The radar data is used to determine the user's interactions with the electronic device, such as the presence of the user in the radar field and gestures made by the user (e.g., radar gestures). Based on determining the presence, motion, and gestures of the user, the electronic device may enter and exit different functional modes and display different elements on the display, including visual elements, visual game elements, and visual feedback elements.
The visual elements described with reference to method 900 and method 2300 may enable an electronic device to provide training and practice to a user in performing radar gesture interactions with the electronic device. Further, the visual element may provide feedback to the user to indicate success and efficiency of the user's radar gesture interacting with the electronic device. Additional examples of visual elements are described with reference to fig. 10-22 and 24-33.
Method 900 is illustrated as a set of blocks that specify operations performed but are not necessarily limited to the orders or combinations shown for performing the operations by the respective blocks. Further, any of the one or more operations may be repeated, combined, re-organized, or linked to provide a wide variety of additional and/or alternative methods. In portions of the following discussion, reference may be made to the exemplary operating environment 100 of fig. 1 or to entities or processes described in fig. 2-8, which are referenced by way of example only. The techniques are not limited to being performed by one entity or multiple entities operating on one device.
At 902, a visual element and instructions are presented on the display of a radar-gesture-enabled electronic device. The visual element and instructions request the user to perform a gesture proximate to the electronic device. For example, the gesture training module 106 may present the visual element 122 (which may include the visual element 122-1) and instructions on the display 114 of the electronic device 102. The requested gesture may be a radar-based touch-independent radar gesture (as described above), a touch gesture (e.g., on a touchscreen), or another gesture, such as a camera-based touch-independent gesture. The visual element 122 may be any of a variety of suitable elements with which the user 112 may interact using the requested gesture (e.g., a radar gesture). In some cases, for example, the visual element 122 may be a set of objects, such as a ball and a dog, or a mouse in a maze. In other cases, the visual element 122 may be a character or object in a game environment, such as an animated character having a task to perform or a car on a track.
The instructions included with the visual element 122 may take any of a variety of forms (e.g., textual, non-textual, or implicit instructions). For example, the instruction may be text presented on the display 114 separate from the visual element (e.g., a line of text may be presented with the ball and dog shown in fig. 1, reading "Swipe from left to right to throw the ball to the dog"). Non-textual instructions provided with the visual element 122 may be an animation of the visual element 122, an audio instruction (e.g., through a speaker associated with the electronic device 102), or another type of non-textual instruction. For example, the non-textual instruction may be an animation of the dog shown in fig. 1 in which the dog jumps into the air and wags its tail, or an animation in which the ball moves toward the dog and the dog catches the ball. In other cases, the instruction may be implicit in the presentation of the visual element 122 (e.g., a dog and ball presented together may implicitly indicate, without additional instruction, that the user is to attempt to throw the ball to the dog).
At 904, radar data corresponding to motion of a user in a radar field provided by a radar system is received. The radar system may be included in or associated with the electronic device, and the motion is proximate to the electronic device. For example, the radar system 104 described with reference to figs. 1-8 may provide the radar data.
At 906, it is determined whether the user's motion in the radar field includes a gesture that the instruction requests to be performed based on the radar data. For example, gesture training module 106 may determine whether the user's motion in radar field 110 is a radar gesture (e.g., a radar-based touch-independent gesture as described above).
In some implementations, to determine whether the motion of the user in the radar field 110 is a radar gesture, the gesture training module 106 can use the radar data to detect values of a set of parameters associated with the motion of the user in the radar field. For example, the set of parameters may include values representing one or more of a shape of the motion of the user in the radar field 110, a path of the motion, a length of the motion, a speed of the motion, or a distance of the user in the radar field 110 from the electronic device 102. The gesture training module 106 then compares the values of the set of parameters to reference values for the set of parameters. For example, the gesture training module 106 may compare the values of the set of parameters to reference values stored in the gesture library 120, as described above. The reference values may be values of parameters corresponding to the gesture that the instructions request to be performed.
When the values of the set of parameters associated with the user's motion satisfy the criteria defined by the reference values, the gesture training module 106 determines that the user's motion in the radar field is a radar gesture. Similarly, when the values of the set of parameters associated with the user's motion do not satisfy the criteria defined by the reference values, the gesture training module 106 determines that the user's motion in the radar field is not a radar gesture. In some cases, the gesture training module 106 may use a range of reference values (e.g., stored by the gesture library 120), which allows the gesture training module 106 to determine that the user's motion is a radar gesture even when there is some variation in the values of the set of parameters.
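The comparison can be sketched as follows; the parameter names and the reference ranges are hypothetical stand-ins and are not the contents of the gesture library 120.

```python
# Sketch of checking detected motion parameters against stored reference
# ranges for the requested gesture. Parameter names and ranges are
# illustrative placeholders.

from dataclasses import dataclass

@dataclass
class MotionParams:
    path_length_cm: float
    speed_cm_s: float
    distance_from_device_cm: float

# Hypothetical reference ranges for a left-to-right swipe radar gesture.
SWIPE_REFERENCE = {
    "path_length_cm": (8.0, 40.0),
    "speed_cm_s": (20.0, 200.0),
    "distance_from_device_cm": (3.0, 30.0),
}

def is_requested_gesture(motion, reference):
    for name, (lo, hi) in reference.items():
        value = getattr(motion, name)
        if not (lo <= value <= hi):
            return False          # one out-of-range parameter fails the motion
    return True

print(is_requested_gesture(MotionParams(25.0, 80.0, 10.0), SWIPE_REFERENCE))  # True
print(is_requested_gesture(MotionParams(3.0, 80.0, 10.0), SWIPE_REFERENCE))   # False
```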
Further, in some implementations, the electronic device 102 can include machine-learning techniques that can generate adaptive or adjusted reference values associated with the gesture that the instructions request to be performed. The adjusted reference values are generated based on radar data representing multiple attempts by the user to perform the requested gesture. For example, the user may repeatedly attempt to make the requested gesture without success (e.g., the parameter values associated with the user's attempted gestures do not fall within the range of reference values). In this case, the machine-learning techniques may generate a set of adjusted reference values that includes at least some of the values of the parameters associated with the user's unsuccessful gestures.
The gesture training module 106 may then receive radar data corresponding to the user's motion in the radar field (e.g., after a failed gesture attempt) and detect values of another set of parameters associated with that motion. As described above, the gesture training module 106 may then compare the detected values of the other set of parameters to the adjusted reference values and, based on the comparison, determine whether the motion of the user in the radar field is the gesture that the instructions request to be performed. Because the adjusted reference values are based on a machine-learned set of parameters, the user's motion may be determined to be the requested radar gesture even when it would not qualify as the requested gesture based on a comparison with the unadjusted reference values. In this manner, the adjusted reference values allow the electronic device and the gesture training module 106 to learn to accept more variation in how the user makes the radar gesture (e.g., when the variation is consistent). These techniques may also allow the electronic device to recognize a particular user's gestures, for example, if the user is physically unable to make a gesture as defined by the reference parameters.
In some implementations, the visual element 122 and associated instructions may also be used to increase the accuracy of the adjusted reference value and reduce the time it takes to generate the adjusted reference value. For example, the machine learning techniques may instruct the gesture training module 106 to present instructions, such as text or audio, to ask the user whether the user's motion is intended as the requested gesture. The user may reply (e.g., using a radar gesture, touch input, or voice input), and then the gesture training module 106 may ask the user to repeat the requested gesture until the machine learning technique has sufficient data to generate the adjusted baseline value.
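One way such an adjustment could work is sketched below; the widening rule (taking the minimum and maximum of confirmed attempts plus a small margin) is an assumption for illustration, not the machine-learning method described above.

```python
# Illustrative adjustment of a stored reference range: if the user confirms
# that several near-miss motions were meant as the requested gesture, widen
# the range to cover them. The widening rule is a placeholder.

def adjust_reference_range(current_range, confirmed_attempt_values, margin=0.05):
    """Widen a stored (low, high) reference range to cover confirmed attempts."""
    lo, hi = current_range
    lo = min([lo] + list(confirmed_attempt_values))
    hi = max([hi] + list(confirmed_attempt_values))
    span = hi - lo
    return (lo - margin * span, hi + margin * span)

# Example: the user's confirmed swipes keep coming out shorter than the stored range.
print(adjust_reference_range((8.0, 40.0), confirmed_attempt_values=[5.5, 6.0, 6.2]))
```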
Optionally, at 908, a visual feedback element is presented on the display in response to determining that the motion of the user in the radar field is a gesture instructing the request to perform. The visual feedback element indicates that the user's motion in the radar field is a radar gesture instructing the performance of the request. For example, in response to determining that the user's motion is a requested radar gesture, gesture training module 106 may present visual feedback elements 126 (which may include one or both of visual feedback elements 126-1 and 126-2) on display 114.
Consider the example shown in fig. 10, which shows additional examples of the visual elements 122 and visual feedback elements 126, generally at 1000. The detail view 1000-1 shows the exemplary electronic device 102 (in this case, the smartphone 102-1) presenting an exemplary visual element 1002 (including a ball component 1002-1 and a dog component 1002-2). In detail view 1000-1, the ball component 1002-1 is presented at the left edge of the display 114 and the dog component 1002-2 is presented at the right edge of the display 114. The detail view 1000-1 also shows a location 1004 where selectable text instructions (e.g., a text instruction for performing a left-to-right swipe radar gesture) associated with the example visual element 1002 may be displayed.
Another detail view 1000-2 shows an exemplary visual feedback element 1006 that includes a ball component 1006-1 and a dog component 1006-2. In the example of detail view 1000-2, assume that the user 112 successfully performed the requested gesture (e.g., the gesture training module 106 determines, based on the radar data, that the user's motion in the radar field 110 is the requested radar gesture). The gesture may be any of a variety of gestures, including a swipe gesture (e.g., left-to-right or right-to-left) or a direction-independent gesture (e.g., an omnidirectional gesture). In response to the successfully performed radar gesture, the gesture training module 106 presents the visual feedback element 1006 by animating the visual element 1002. In the exemplary animation, the ball component 1006-1 moves from the left edge of the display 114 toward the dog component 1006-2, as indicated by arrow 1008. As the ball component 1006-1 approaches, the dog component 1006-2 moves to pick up the ball component 1006-1. The detail view 1000-2 also illustrates a location 1010 where additional selectable text instructions (e.g., an instruction to perform the requested gesture again or a message confirming successful performance of the requested gesture) associated with the example visual feedback element 1006 may be displayed.
Returning to FIG. 9, optionally, at 910, in response to determining that the user's motion in the radar field is not the gesture that the instructions request to be performed, another visual feedback element is presented on the display. The other visual feedback element indicates that the first motion of the user in the radar field is not or does not include the gesture that the instructions request to be performed. For example, in response to determining that the user's motion is not the requested gesture, the gesture training module 106 may present another visual feedback element on the display 114.
Consider the example shown in fig. 11, which illustrates additional examples of visual elements 122 and visual feedback elements 126 generally at 1100. The detail view 1100-1 illustrates the exemplary electronic device 102 (in this case, the smartphone 102-1) presenting the exemplary visual element 1102 (including the ball component 1102-1 and the dog component 1102-2). In detail view 1100-1, ball component 1102-1 is presented at the left edge of display 114 and dog component 1102-2 is presented at the right edge of display 114. The detail view 1100-1 also illustrates a location 1104 at which selectable text instructions (e.g., text instructions for performing a radar gesture that swipes from left to right) associated with the example visual element 1102 may be displayed.
Another detail view 1100-2 shows an exemplary visual feedback element 1106 that includes a ball component 1106-1 and a dog component 1106-2. In the example of detail view 1100-2, assume that the user 112 did not successfully perform the requested gesture (e.g., the gesture training module 106 determines, based on the radar data, that the user's motion in the radar field 110 is not the requested radar gesture). The gesture may be any of a variety of gestures, including a swipe gesture (e.g., left-to-right or right-to-left) or a direction-independent gesture (e.g., an omnidirectional gesture). In response to the failed radar gesture, the gesture training module 106 presents the visual feedback element 1106 by animating the visual element 1102. In the exemplary animation, the ball component 1106-1 simply bounces up and down, as shown by the motion indicator 1108. While the ball component 1106-1 bounces, the dog component 1106-2 sits down. The detail view 1100-2 also shows a location 1110 at which additional selectable text instructions (e.g., an instruction to perform the requested gesture again or a message confirming successful performance of the requested gesture) associated with the example visual feedback element 1106 may be displayed.
After presenting the visual feedback elements 1106 for a duration of time, the gesture training module 106 may stop presenting the visual feedback elements 1106 and present the visual elements 1102. The duration may be any suitable duration (e.g., about two, four, or six seconds) that allows the user to view the visual feedback element 1106. The duration may be selected and/or adjusted by the user 112. In some cases, the user may attempt another gesture, in which case gesture training module 106 may stop presenting visual feedback elements 1106 and present visual elements 1102, even if the duration has not expired.
After determining that the user's motion in the radar field is not the requested gesture, and while the visual element 1102 is being presented (e.g., after the duration ends or when the gesture training module 106 determines that the user is performing a motion in the radar field), additional radar data may be received. The additional radar data may correspond to another motion of the user 112 in the radar field 110 (e.g., the user 112 may make another attempt after an unsuccessful attempt to perform the requested gesture). Based on the additional radar data, the gesture training module 106 may determine (e.g., using the reference values, as described above) that the other motion of the user 112 is the requested gesture. In response to determining that the other motion of the user 112 is the requested gesture, the gesture training module 106 may present the visual feedback element 1006 to indicate that the other motion of the user 112 is the requested gesture.
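The overall tutorial loop implied by these method steps can be sketched as follows; the function names, prompt strings, and the trivial classifier in the demo are illustrative assumptions.

```python
# Compact sketch of the tutorial loop: present the visual element and
# instruction, classify each detected motion, and show success or failure
# feedback (reverting to the prompt after a short hold). Names are illustrative.

import time

def run_gesture_tutorial(motions, classify_motion, present, feedback_duration_s=0.1):
    """motions: iterable of detected motions (e.g., parameter dicts) from the radar pipeline."""
    prompt = "Swipe from left to right to throw the ball to the dog"
    present("visual_element", prompt)
    for motion in motions:
        if classify_motion(motion):
            present("success_feedback", "Nicely done!")   # e.g., the dog catches the ball
            return True
        present("failure_feedback", "Close, try throwing the ball again")
        time.sleep(feedback_duration_s)                   # hold the failure animation briefly
        present("visual_element", prompt)                 # revert to the practice prompt
    return False

# Tiny demo with stand-in motions and a trivial classifier.
motions = [{"is_swipe": False}, {"is_swipe": True}]
run_gesture_tutorial(motions, classify_motion=lambda m: m["is_swipe"],
                     present=lambda kind, text: print(kind, "-", text))
```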
In some implementations, the gesture training module 106 may present other visual feedback elements in place of the visual feedback elements 1006 and 1106, or in addition to the visual feedback elements 1006 and 1106. For example, for a radar gesture application running on electronic device 102, gesture training module 106 may provide a similar or identical set of system-level visual feedback elements (but different from visual feedback elements 1006 and 1106).
Consider fig. 12, which illustrates an example of a visual feedback element 1202 generally at 1200. The detail view 1200-1 illustrates the exemplary electronic device 102 (in this case, the smartphone 102-1) presenting the exemplary visual feedback element 1202, which is shown as an illumination area (e.g., a light emitting area). As with visual elements 1002 and 1102 and visual feedback elements 1006 and 1106, visual feedback element 1202 may be presented at another location on display 114, presented with a different illumination level (e.g., more or less illumination), or presented as another shape or type of element. The detail view 1200-1 also shows a location 1004 where selectable text instructions associated with the example visual element 1002 may be displayed. As shown, location 1004 is presented in a different location because visual feedback element 1202 is presented at the top of display 114.
Another detail view 1200-2 shows how the example visual feedback element 1202 changes in response to a successful radar gesture. In the example of detail view 1200-2, assume that the requested gesture is a swipe from left to right and that the user 112 successfully performed the requested gesture (e.g., the gesture training module 106 determines, based on the radar data, that the user's motion in the radar field 110 is the requested radar gesture). In response to successfully performing the radar gesture, the gesture training module 106 animates the visual feedback element 1202 by moving it from left to right, bypassing a corner of the display 114, as indicated by arrow 1204. The motion of the visual feedback element 1202 lets the user 112 know that the requested gesture has been successfully performed. As shown in fig. 12, exemplary visual element 1002 and exemplary visual feedback element 1006 (and location 1010) are presented along with visual feedback element 1202. In other cases, the visual feedback element 1202 may be presented without presenting one or both of the visual element 1002 and the visual feedback element 1006. In some cases, the visual feedback element 1202 may be presented with another visual element (not shown).
Fig. 13 illustrates, generally at 1300, an example of another visual feedback element 1302 that may be presented when the motion of the user 112 is not or does not include a requested gesture. The detail view 1300-1 shows an exemplary electronic device 102 (in this case, a smartphone 102-1) that is presenting an exemplary visual feedback element 1302, shown as an illumination area (e.g., a light emitting area). As with visual elements 1002 and 1102 and visual feedback elements 1006, 1106, and 1202, visual feedback element 1302 may be presented at another location on display 114, at a different illumination level (e.g., more or less illumination), or as another shape or type of element. The detail view 1300-1 also illustrates a location 1104 where selectable text instructions associated with the example visual element 1102 may be displayed. As shown, the location 1104 is presented in a different location because the visual feedback element 1302 is being presented at the top of the display 114.
Another detail view 1300-2 shows how an exemplary visual feedback element 1302 changes in response to an attempt to perform a requested radar gesture failing. In the example of detail view 1300-2, assume that the requested gesture is a swipe from left to right and that the user 112 failed to perform the requested gesture (e.g., the gesture training module 106 determines, based on the radar data, that the user's motion in the radar field 110 is not the requested radar gesture). In response to a failed gesture, the gesture training module 106 animates the visual feedback element 1302 by moving it from left to right, as indicated by arrow 1304. In this case, the visual feedback element 1302 does not bypass the corner. Instead, the visual feedback element 1302 stops before reaching the corner and returns to the original position, as shown in the detail view 1300-1 (return not shown). The movement of the visual feedback element 1302 lets the user 112 know that the requested gesture was not successfully performed. As shown in FIG. 13, a visual feedback element 1302 is presented with an exemplary visual element 1102, a position 1110, and an exemplary visual feedback element 1106 (including animations, such as the motion 1108 of the ball component 1106-1). In other cases, the visual feedback element 1302 may be presented without presenting one or both of the visual element 1102 and the visual feedback element 1106. In some cases, the visual feedback element 1302 may be presented with another visual element (not shown).
In other implementations, the visual feedback element 1202 or 1302 may be animated in other manners. For example, consider fig. 14, which shows an additional example of a visual feedback element. The detail view 1400-1 shows an exemplary electronic device 102 (in this case, a smartphone 102-1) that is presenting an exemplary visual feedback element 1402 shown as an illumination area (e.g., a light emitting area). Although shown at the upper edge of the display 114, the visual feedback element 1402 may be presented at another location on the display 114, at a different illumination level (e.g., more or less illumination), or as another shape or type of element. The detail view 1400-1 also shows a location 1010 where selectable text instructions associated with the example visual feedback element 1006 may be displayed. As shown, the location 1010 is presented at a different location because the visual feedback element 1402 is being presented at the top of the display 114.
In the example of detail view 1400-1, assume that the requested gesture is a direction-independent gesture (e.g., an omnidirectional gesture) and that the user 112 successfully performed the requested gesture (e.g., the gesture training module 106 determines, based on the radar data, that the user's motion in the radar field 110 is the requested radar gesture). In response to the successfully performed radar gesture, the gesture training module 106 animates the visual feedback element 1402 by increasing the size and brightness (e.g., luminosity) of the visual feedback element 1402 and adding a bright line 1404 near the edge of the display 114 (as shown in detail view 1400-2). The animation sequence continues in another detail view 1400-3, in which the visual feedback element 1402 begins to decrease in size, as indicated by the double arrow 1406. Another detail view 1400-4 shows the animation continuing as the visual feedback element 1402 further decreases in size, shrinking toward the center of the upper edge of the display 114, as indicated by another double arrow 1408. The animation continues until the visual feedback element 1402 disappears and then returns to the state shown in the detail view 1400-1 (return not shown). The motion of the visual feedback element 1402 makes the user 112 aware that the requested gesture was successfully performed.
As shown in fig. 14, visual feedback elements 1402 are presented with visual feedback elements 1006 (including animations, such as movement 1008 of ball assembly 1006-1). In other cases, visual feedback element 1402 may be presented without presenting visual feedback element 1006, with other content (with or without visual feedback element 1006), with a visual element (e.g., visual element 1002), or in another configuration (not shown).
Similarly, consider fig. 15, which shows an additional example of a visual feedback element that may be presented when the motion of the user 112 is not or does not include the requested gesture. The detail view 1500-1 illustrates an exemplary electronic device 102 (in this case, a smartphone 102-1) that is presenting an exemplary visual feedback element 1502, which is shown as an illumination area (e.g., a light-emitting area). While the visual feedback element 1502 is shown at the upper edge of the display 114, it may be presented at another location on the display 114, at a different illumination level (e.g., more or less illumination), or as another shape or type of element. The detail view 1500-1 also shows a location 1110 at which selectable text instructions associated with the example visual feedback element 1106 may be displayed. As shown, the location 1110 is presented in a different place because the visual feedback element 1502 is presented at the top of the display 114.
In the example of detail view 1500-1, assume that the requested gesture is a direction-independent gesture (e.g., an omnidirectional gesture) and that the user 112 did not successfully perform the requested gesture (e.g., the gesture training module 106 determines, based on the radar data, that the user's motion in the radar field 110 is not the requested radar gesture). In response to the unsuccessfully performed radar gesture, the gesture training module 106 animates the visual feedback element 1502 by reducing the size and brightness (e.g., luminosity) of the visual feedback element 1502, as shown in detail view 1500-2. The animation sequence continues in another detail view 1500-3, in which the visual feedback element 1502 stops shrinking and begins to brighten and expand, as indicated by a double arrow 1506. Another detail view 1500-4 shows the animation continuing as the visual feedback element 1502 returns to the state shown in detail view 1500-1. The motion of the visual feedback element 1502 lets the user 112 know that the requested gesture was not successfully performed.
As shown in FIG. 15, a visual feedback element 1502 is shown along with a visual feedback element 1106 (including animations, such as the motion 1108 of the ball component 1106-1). In other cases, visual feedback element 1502 may be presented without visual feedback element 1106, with other content (with or without visual feedback element 1106), with a visual element (e.g., visual element 1102), or in another configuration (not shown).
In some implementations, the electronic device 102 and the radar system 104 can include a gesture pause mode. For the gesture pause mode, a gesture pause trigger event is detected during a time period in which the radar system is providing the radar field and a radar gesture application is running on the electronic device. In response to detecting the gesture pause trigger event, the electronic device enters the gesture pause mode. While in the gesture pause mode and while the radar gesture application is running on the electronic device, the electronic device provides another visual feedback element indicating that the electronic device is in the gesture pause mode.
The electronic device 102 may detect the gesture pause trigger event through input from the radar system 104 and/or input from other sensors (e.g., a camera or the non-radar sensors 108). A gesture pause trigger event is a condition, a set of conditions, or a state for which radar gestures are paused because the radar gesture application is unable to perform an action associated with a radar gesture. In general, a gesture pause trigger event is a condition that makes it difficult for the electronic device 102 or the radar system 104 to accurately and efficiently determine whether a user's motion is a radar gesture. For example, the gesture pause trigger event may be an oscillating motion of the electronic device 102 that exceeds a threshold frequency, a motion of the electronic device at a speed above a threshold speed, or an oscillating motion of an object in the radar field, such as the user 112 (or a portion of the user 112), that exceeds a threshold frequency.
In response to detecting the gesture pause trigger event, the electronic device 102 enters the gesture pause mode. If a radar gesture application (e.g., an application capable of receiving control inputs corresponding to radar gestures) is running on the electronic device 102 while the electronic device 102 is in the gesture pause mode, the gesture training module 106 provides a visual feedback element on the display 114 of the electronic device 102. In this case, the user 112 may or may not have attempted to make a radar gesture; the gesture training module 106 provides the visual feedback element based on the detection of the gesture pause trigger event, whether or not a radar gesture was attempted. Rather than indicating a failed gesture, the visual feedback element alerts the user that radar gestures are not currently available to control the radar gesture application on the electronic device 102.
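A sketch of this kind of trigger detection from motion-sensor data is shown below; the zero-crossing frequency estimate and the threshold values are illustrative assumptions, not values from the description above.

```python
# Sketch of gesture-pause gating: a trigger fires when the device oscillates
# above a threshold frequency or moves above a threshold speed. Thresholds
# and the zero-crossing frequency estimate are placeholders.

import numpy as np

def oscillation_frequency_hz(accel_samples, sample_rate_hz):
    centered = accel_samples - accel_samples.mean()
    zero_crossings = np.count_nonzero(np.diff(np.sign(centered)) != 0)
    duration_s = len(accel_samples) / sample_rate_hz
    return (zero_crossings / 2) / duration_s      # roughly two crossings per cycle

def gesture_pause_triggered(accel_samples, speed_m_s, sample_rate_hz=100.0,
                            max_osc_hz=3.0, max_speed_m_s=1.5):
    return (oscillation_frequency_hz(accel_samples, sample_rate_hz) > max_osc_hz
            or speed_m_s > max_speed_m_s)

t = np.arange(0, 1, 0.01)
shaky = np.sin(2 * np.pi * 5 * t)                 # ~5 Hz oscillation -> pause gestures
print(gesture_pause_triggered(shaky, speed_m_s=0.2))   # True
```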
Consider the example shown in fig. 16, which shows, generally at 1600, an example of a visual feedback element indicating that the electronic device 102 and/or the radar system 104 is in the gesture pause mode. The detail view 1600-1 shows the exemplary electronic device 102 (in this case, the smartphone 102-1) presenting an exemplary visual feedback element 1602 (shown as a seated dog). In this case, the visual feedback element indicates the gesture pause mode by removing the ball component (e.g., 1002-1 or 1102-1) from the presented visual element (e.g., visual element 1002 or 1102) and animating the dog component (e.g., 1002-2 or 1102-2) to indicate that the gesture training module 106 cannot accept gestures. The detail view 1600-1 also illustrates a location 1010 at which additional selectable text instructions associated with the example visual feedback element 1602 (e.g., an instruction to wait before performing the requested gesture and/or a message stating that the electronic device 102 is in the gesture pause mode) may be displayed.
In some implementations, the gesture training module 106 can present other visual feedback elements instead of or in addition to the visual feedback elements 1602. For example, the gesture training module 106 may provide another visual feedback element 1604, which may be part of the system level set of visual feedback elements described with reference to fig. 12-15. In fig. 16, another visual feedback element 1604 is an illumination area (e.g., a light emitting area) at the upper edge of the display 114. In other cases, the visual feedback element 1604 may be presented at another location on the display 114, at a different illumination level (e.g., more or less illumination), or as an element of other shape or type.
In the example of detail view 1600-1, visual feedback element 1604 is presented in a form that indicates that electronic device 102 can receive and be controlled by radar gestures (e.g., similar to visual feedback elements 1202, 1302, 1402, and 1502). When electronic device 102 enters the gesture pause mode, gesture training module 106 animates visual feedback elements 1604 to alert the user. The gesture training module 106 begins the animation by decreasing the size and brightness (e.g., luminance) of the visual feedback element 1604, as indicated by the double arrow 1606 in the detail view 1600-2. The animation sequence continues in another detail view 1600-3, where the visual feedback element 1604 has stopped zooming out and is displayed near the center of the upper edge of the display 114. A smaller, darker visual feedback element 1604 indicates being in a gesture pause mode. The detail view 1600-4 illustrates the end of the animation (e.g., the end of the gesture pause mode), displaying the visual feedback element 1604 being returned to the state shown in the detail view 1600-1 by increasing in size and brightness, as shown by another double arrow 1608. As shown in fig. 16, visual feedback element 1604 is presented with visual feedback element 1602. In other cases, the visual feedback element 1604 may be presented without the visual feedback element 1602, with other content (with or without the visual feedback element 1602), with a visual element (e.g., visual elements 1002 and/or 1102), or in another configuration (not shown).
Fig. 10-16 also illustrate locations (e.g., locations 1004, 1010, 1104, and 1110) where selectable text instructions associated with exemplary visual elements 1002 and 1102 and exemplary visual feedback elements 1006 and 1106 may be displayed. These text locations may include any suitable instructions, descriptions, or messages related to the requested gesture, the user's performance of the requested gesture, and the like. For example, the text instructions may include an instruction to perform the requested gesture, an instruction to perform the requested gesture again, a message to interpret or relate to the requested gesture, or a message confirming successful performance of the requested gesture.
Consider fig. 17, which shows examples of textual instructions that may be presented. In detail view 1700-1, location 1004 is displaying a text message ("Throw the ball to the dog"), indicating the requested gesture (e.g., a swipe gesture from left to right to move the ball toward the dog). Other variations of the text instruction include "Swipe right to throw the ball to the dog" or "Use a gesture to send the ball to the dog". In another detail view 1700-2, location 1010 is displaying a text message ("Good job!") indicating that the user 112 successfully performed the requested gesture. Other variations of the text instruction include "Success!" or "That was very good". Another detail view 1700-3 illustrates an exemplary instruction ("Close, try throwing the ball again") indicating that the user 112 did not successfully perform the requested gesture. Other variations of the text instructions include "Almost, try again" or "One more try. You've got this".
In some implementations (not shown), the instructions can also include messages or feedback related to the requested gesture. For example, the instruction may be a message informing the user 112 how the visual feedback element 1202 works (e.g., "Make a gesture to throw the ball, and the dog will go pick it up"). In another example, the gesture training module 106 is displaying the other visual feedback elements (e.g., the system-level set of visual feedback elements) described with reference to figs. 12-16. Consider the case in which the gesture training module 106 is displaying the visual elements shown in fig. 12 (e.g., in detail views 1200-1 and 1200-2). In this case, the gesture training module 106 may present a text instruction at location 1004, such as "Watch the glow at the top of the screen move when you try to throw the ball." Similarly, the gesture training module 106 may present a text instruction at location 1010, such as "See how the glow moves around the corner to show that the gesture was made."
After the first or a subsequent attempt successfully performs the requested gesture, the gesture training module 106 may continue to provide training to the user or may stop providing training. If training continues, the gesture training module 106 may present the same visual element (to keep practicing the same gesture) or a different visual element and instructions (e.g., to practice a different gesture or to practice the same gesture in a different environment). Accordingly, a failed attempt by the user to perform the requested gesture may cause the electronic device 102 to repeat the visual element and the request. Alternatively or additionally, successful performance of the requested gesture by the user may cause the electronic device to provide a different visual element, so that the user can receive training for other gestures after successfully performing a previously requested gesture. In some implementations, the gesture training module 106 can present the first visual element and instructions multiple times (e.g., once, three times, five times, or seven times) before presenting the next visual element. The number of times each visual element is presented may be user-selectable. In addition, the gesture training module 106 may use text instructions to ask the user whether training should stop or continue (e.g., "Do you want to throw the ball again?").
Method 900 may also be implemented in other ways. For example, consider figs. 18-22, which illustrate a tutorial practice and training environment (e.g., a user "tips" environment) and additional examples of the visual elements 122 and visual feedback elements 126. For example, FIG. 18 depicts, at 1800, an entry sequence for another exemplary environment (e.g., a tips environment). The detail view 1800-1 illustrates an exemplary display 114 that is presenting a tips detail page (e.g., via the gesture training module 106), including a video that the user 112 may view to learn about using radar gestures. The user can access the video using a control 1802. The tips detail page (as shown in detail view 1800-1) may also include one or more text areas 1804 that may display text describing how to use radar gestures to skip songs, pause (snooze) alarms, or mute a phone that is ringing.
For example, in the text area 1804-1, the gesture training module 106 may present a heading, such as "Tips Details" or "Become an Expert". Similarly, using another text area 1804-2 (shown as a dashed rectangle), the gesture training module 106 may present one or both of the messages "Swipe left or right above the phone to skip songs" or "Swipe in any direction above the phone to snooze an alarm or mute a ringing phone". In some cases, the messages may carry a title, such as "Use Quick Gestures" or "How to Use Radar Gestures".
The gesture training module 106 may also present a control 1806 (e.g., a "Try" icon) that may be used to enter the tips tutorial. For example, if the user 112 uses the control 1806 to enter the tips tutorial, the gesture training module 106 may present a training screen, as shown in another detail view 1800-2, that explains how to perform a radar gesture to skip songs. The training screen in detail view 1800-2 shows an animation of the smartphone and of the hand of the user 112 above the smartphone. The smartphone also displays a visual feedback element 1808 (e.g., visual feedback element 1202, 1302, 1402, 1502, or 1604). The training screen and animation can be presented in response to the user activating the control 1806.
The training screen may also include a text area 1810 (shown as a dashed rectangle) that may display text instructing the user to perform a radar gesture to skip a song (e.g., "Swipe left or right over the phone"). The text area 1810 may also be used to display a title of the tips tutorial (e.g., "Skip songs" or "Music Player"). In some embodiments, audio instructions may also be available (e.g., the user may select text, audio, or both, with a default option such as text only). The tips tutorial may operate in a default mode (user selectable) in which the sound is turned off, as shown by a sound control 1814. The sound control 1814 lets the user know that the sound is off, and the user can turn the sound on and off through the sound control 1814.
In another detail view 1800-3, the animation sequence continues as the user reaches toward the electronic device 102. In the continuing sequence, the animation of the hand of the user 112 disappears, and the smartphone displayed on the training screen continues to display the visual feedback element 1808, which expands as the user 112 reaches toward the electronic device 102. The training screen may still present the text area 1810 with text instructing the user to swipe left or right over the smartphone to skip songs. Another detail view 1800-4 shows the result of a partial gesture (e.g., a radar gesture that was not successfully performed, as described above). In detail view 1800-4, the animation continues by returning the visual feedback element 1808 to its original size and altering the instructional text in the text area 1810 to include more details that help the user perform the radar gesture to skip the song (e.g., from "Swipe left or right over the phone" to "Try a swiping motion that goes past both sides of the phone"). In some cases, audio or tactile (haptic) feedback may be included with the new text instruction (e.g., a sound or haptic sensation indicating a rejected input).
FIG. 19 shows at 1900 additional training screens that may be presented in the tips tutorial. For example, the detail view 1900-1 shows a training screen that appears after the user performs a selectable number of partial gestures without a successful swipe (e.g., one, two, three, or four partial gestures). The training screen of detail view 1900-1 illustrates a different animation of the smartphone and the user's hand above the smartphone, and the instructional text presented in the text area 1810 tells the user to "Try a swiping motion that goes past both sides of the phone" to skip the song.
Another detail view 1900-2 shows the result of a successful swipe gesture (e.g., a successfully performed swipe radar gesture, as described above). In detail view 1900-2, the animation continues by moving the visual feedback element 1808 toward and around a corner of the example smartphone (e.g., as shown in fig. 12). In the detail view 1900-2, the instructional text presented in the text area 1810 changes to "Nicely done!", and the song skips to the next one in the playlist. As noted with reference to fig. 18, the tips environment may operate in a default mode (user selectable) in which the sound is turned off, as shown by the sound control 1814. If available, audio or tactile (haptic) feedback may also be provided (e.g., a sound or haptic sensation indicating a confirmed input). In some cases, the music icon 1812 also moves or pans across the screen in response to the successful swipe (not shown). In addition, the training screen presents a return control 1902 (e.g., a "Got it" icon) that allows the user 112 to exit the skip-songs tutorial.
Another detail view 1900-3 illustrates the training screen that is presented if the user 112 does not activate the return control 1902. In detail view 1900-3, the animation of the smartphone and the visual feedback element 1808 is presented (while the return control 1902 is also presented) in the same state as in detail views 1800-2 and 1800-4. Another detail view 1900-4 illustrates the training screen after the user 112 activates the return control 1902. In detail view 1900-4, the animation ends and a summary page is presented. The summary page includes text in the text area 1810 that informs the user 112 about other training options ("Try Quick Gestures for the actions"). The gesture training module 106 also presents tutorial controls 1904 that allow the user 112 to re-enter the tips tutorial for skipping songs or to enter the other tips tutorials for pausing an alarm and silencing a call. The tutorial controls 1904 include text and icons (e.g., a musical note for "Skip songs", an alarm clock for "Snooze alarms", and a classic telephone handset for "Silence calls"). The tutorial controls 1904 are presented with indicators 1906 (e.g., check-mark icons) that let the user 112 know which tutorials have been completed. In some cases, the tutorial controls 1904 are ordered based on whether they have been completed (e.g., completed tutorials at the top of the list and incomplete tutorials at the bottom). The summary page may also include an exit control 1908 (e.g., a "Finish" icon) that allows the user 112 to exit the tips tutorial environment.
FIG. 20 depicts, at 2000, a series of training screens that may be presented when the user 112 activates the tutorial control 1904 for pausing an alarm. For example, the gesture training module 106 may present a training screen, as shown in detail view 2000-1, that explains how a radar gesture is performed to pause an alarm. The training screen in detail view 2000-1 shows an animation of the smartphone and the user's hand above the smartphone. The smartphone also displays the visual feedback element 1808 and the sound control 1814. The training screen of detail view 2000-1 also presents, in text area 1810, a text instruction explaining to the user how to pause the alarm (e.g., "Swipe in any direction over the phone"). In some implementations, audio instructions may also be available (e.g., the user may select text, audio, or both, and select a default option, such as text only). An alarm icon 2002 (e.g., an alarm clock) may also be presented on the smartphone.
In another detail view 2000-2, the animation continues as the user reaches for the electronic device 102. As the animation continues, the animation of the user's hand disappears, and the smartphone displayed on the training screen shows the visual feedback element 1808, which expands as the user 112 reaches for the electronic device 102. The training screen continues to present a text area 1810 with instructional text for how to pause the alarm. In detail view 2000-2, the sound control 1814 is displayed with "sound on" text to show how the user 112 turns the sound on and off through the sound control 1814. Another detail view 2000-3 shows the result of a partial gesture (e.g., an unsuccessfully performed radar gesture, as described above). In the detail view 2000-3, the animation continues by returning the visual feedback element 1808 to its original size and changing the instructional text presented in the text area 1810 (e.g., from "Swipe in any direction over the phone" to "Try a swiping motion past the phone"). In some cases, audio or tactile (haptic) feedback may be included with the new text instruction (e.g., a sound or haptic sensation indicating a rejected input).
Another detail view 2000-4 shows the result of a successful swipe gesture (e.g., a successfully performed swipe radar gesture, such as the direction-independent or omnidirectional swipe radar gesture described above). In detail view 2000-4, the animation continues by collapsing the visual feedback element 1808 on itself (e.g., as shown in FIG. 14). In the detail view 2000-4, the instructional text presented in the text area 1810 changes to provide feedback to the user 112 that the gesture was successful (e.g., "Nicely done!"). Audio or tactile (haptic) feedback may also be provided, if available (e.g., a sound or haptic sensation indicating a confirmed input). The description of the skill tutorial for pausing the alarm continues in the following description of FIG. 21.
FIG. 21 shows, at 2100, additional training screens that may be presented in the skill tutorial for pausing an alarm. For example, detail view 2100-1 illustrates additional elements of the training screen described in detail view 2000-4 that are presented after the user successfully performs a swipe (e.g., a direction-independent or omnidirectional swipe). The training screen of detail view 2100-1 illustrates the smartphone with a "Nicely done!" text message displayed in text area 1810, a return control 1902 (e.g., a "Got it" icon) that allows the user 112 to exit the pause-alarm tutorial, and the sound control 1814. In some implementations, the training screen may also present, on the smartphone display, one or both of a completion icon 2102 (e.g., a check mark) or a restart control 2104 (e.g., a "Practice again" icon).
Another detail view 2100-2 shows the training screen after the user 112 activates the return control 1902. In detail view 2100-2, the animation ends and a summary page (e.g., the summary page described in detail view 1900-4 of FIG. 19) is presented. The summary page may present text in a text area 1810 that reminds the user 112 of other gesture training options ("Try Quick Gestures for these actions"). The summary page may also present tutorial controls 1904 that allow the user 112 to re-enter the skill tutorial for pausing the alarm or enter the other skill tutorials for skipping songs and muting calls. The summary page also includes an exit control 1908 (e.g., a "Finish" icon) that allows the user 112 to exit the skill tutorial environment. The tutorial controls 1904 may be presented with an indicator 1906 (e.g., a check mark icon) that lets the user 112 know which tutorials have been completed.
FIG. 21 also depicts, in a detail view 2100-3, a series of training screens that may be presented when the user 112 activates the tutorial control 1904 for muting a call. For example, the gesture training module 106 may present a series of training screens that explain how a radar gesture is performed to mute a call. The training screen in detail view 2100-3 illustrates the smartphone displaying the visual feedback element 1808 and the sound control 1814. The text displayed in the text area 1810 explains to the user how to mute the call (e.g., "Swipe in any direction above the phone"). In some implementations, audio instructions may also be available (e.g., the user may select text, audio, or both, and select a default option, such as text only). In some cases, a call icon 2106 (e.g., a classic telephone handset) may also be presented on the animated smartphone display.
Another detail view 2100-4 shows the result of a successful swipe gesture (e.g., a successfully performed swipe radar gesture, such as the direction-independent or omnidirectional swipe radar gesture described above). In detail view 2100-4, the animation continues by collapsing the visual feedback element 1808 on itself (e.g., as shown in FIGS. 14 and 20). In detail view 2100-4, the instructional text displayed in text area 1810 changes to indicate that the gesture was successfully performed (e.g., from "Swipe in any direction above the phone" to "Well done!"). The call icon 2106 and the sound control 1814 may also be displayed. Audio or tactile (haptic) feedback (e.g., a system sound or haptic sensation indicating a confirmed input) may also be provided, if available. The description of the skill tutorial for muting a call continues in the following description of FIG. 22.
FIG. 22 shows, at 2200, additional training screens that may be presented in the skill tutorial for muting a call. For example, detail view 2200-1 shows other elements of the training screen depicted in detail view 2100-4 that are presented after the user successfully performs a swipe (e.g., a direction-independent or omnidirectional swipe). The training screen of detail view 2200-1 illustrates a "Well done!" message presented in the text area 1810, a return control 1902 (e.g., a "Got it" icon) that allows the user 112 to exit the mute-call tutorial, and the sound control 1814. In some implementations, the training screen can also present, on the smartphone display, one or both of a completion icon 2102 (e.g., a check mark) or a restart control 2104 (e.g., a "Practice again" icon).
Another detail view 2200-2 shows the training screen after the user 112 activates the return control 1902. In detail view 2200-2, the animation ends and a summary page (e.g., the summary page described in detail view 1900-4 of FIG. 19) is presented. The summary page may include text in a text area 1810 that reminds the user 112 of other training options (e.g., "Try Quick Gestures for these actions") and tutorial controls 1904 that allow the user 112 to re-enter the skill tutorial for muting calls or enter the other skill tutorials for skipping songs and pausing an alarm. The summary page also includes an exit control 1908 (e.g., a "Finish" icon) that allows the user 112 to exit the skill tutorial environment. The tutorial controls 1904 may be presented with indicators 1906 (e.g., check mark icons) that let the user 112 know which tutorials have been completed. The techniques and examples described with reference to FIGS. 10-22 may enable the electronic device 102 and the radar system 104 to facilitate the user's proficiency, provide feedback to the user, and, in some embodiments, learn the user's preferences and habits (e.g., via the machine learning techniques described with reference to FIG. 9, which may be used with any of the described visual elements and visual feedback elements) to improve the performance, accuracy, and efficiency of the electronic device 102, the radar system 104, and the gesture training module 106.
FIG. 23 illustrates a method 2300, shown as a set of blocks that specify operations that are performed but are not necessarily limited to the order or combination shown for performing the operations of the respective blocks. Further, any of the one or more operations may be repeated, combined, reorganized, or linked to provide a wide variety of additional and/or alternative methods. In portions of the following discussion, reference may be made to the exemplary operating environment 100 of FIG. 1 or to the entities or processes described in detail in FIGS. 2-22, which are referenced by way of example only. The techniques are not limited to performance by one entity or multiple entities operating on one device.
At block 2302, a visual game element is presented on a display of a radar gesture-enabled electronic device. For example, the gesture training module 106 may present the visual game element 124 (which may include the visual game element 124-1) on the display 114 of the electronic device 102. The visual game element 124 may be any of a variety of suitable elements with which a user interacts (e.g., using gestures, such as radar gestures) as part of a game or game environment. In some cases, for example, the visual game element 124 may be a character (e.g., a person, a hero, a creature, or an adventurer) or a vehicle (e.g., a race car or airplane). In other cases, the visual game element 124 may be a set of objects with which the user 112 may interact using gestures, such as a ball and a dog, a basketball and a basketball hoop, or a mouse in a maze. Additionally, the visual game element may include instructions (textual, non-textual, or implicit, as described with reference to method 900) that describe game play, describe gestures that may be used to interact with the visual game element 124, or request that the user perform a particular gesture.
In some cases, the instructions or the visual game elements themselves may include a request for the user to perform a gesture proximate to the electronic device. The requested gesture may be a radar-based touch-independent radar gesture (as described above), a touch gesture (e.g., on a touch screen), or another gesture, such as a camera-based touch-independent gesture.
At 2304, radar data corresponding to motion of a user in a radar field provided by a radar system is received. The radar system may be included with or associated with the electronic device, and the motion may be proximate to the electronic device. For example, the radar system 104 described with reference to FIGS. 1-8 may provide the radar data.
At 2306, based on the radar data, it is determined whether the user's motion in the radar field includes a gesture (e.g., a gesture that the instruction describes or requests to be performed). For example, gesture training module 106 may determine whether the user's motion in radar field 110 includes or is a radar gesture (e.g., a radar-based touch-independent gesture as described above).
In some implementations, as described above, the gesture training module 106 may determine whether the motion of the user in the radar field 110 is a radar gesture by using radar data to detect values of a set of parameters associated with the motion of the user in the radar field. For example, the set of parameters may include values representing one or more of a shape or path of motion of the user, a length or speed of motion, or a distance of the user from the electronic device 102. The gesture training module 106 then compares the values of the set of parameters to the baseline values for the set of parameters. For example, as described above, the gesture training module 106 may compare the values of the set of parameters to reference values stored by the gesture library 120.
When the values of the set of parameters associated with the user's motion satisfy the criteria defined by the reference values, the gesture training module 106 determines that the user's motion in the radar field is or includes a radar gesture. Alternatively, when the values of the set of parameters associated with the user's motion do not satisfy the criteria defined by the reference values, the gesture training module 106 determines that the user's motion in the radar field is not or does not include a radar gesture. As described, the gesture training module 106 may use a range of reference values that allows for some variation in the values of the set of parameters while still determining that the motion of the user is a radar gesture. Additional details regarding techniques for determining whether a user's motion is a gesture are described with reference to FIG. 9.
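For illustration only, the parameter comparison described above can be pictured with a short sketch. The following Python example is a simplified assumption rather than the actual interface of the gesture training module 106 or the gesture library 120; the parameter names (path length, speed, distance from the device) follow the examples given above, and the specific reference ranges are invented for the sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParameterRange:
    """An inclusive range of acceptable values for one motion parameter."""
    low: float
    high: float

    def contains(self, value: float) -> bool:
        return self.low <= value <= self.high

@dataclass
class GestureReference:
    """Hypothetical reference entry for one radar gesture in a gesture library."""
    name: str
    path_length_m: ParameterRange   # length of the hand's path
    speed_mps: ParameterRange       # speed of the motion
    distance_m: ParameterRange      # distance of the hand from the device

def matches_reference(detected: dict, reference: GestureReference) -> bool:
    """True when every detected parameter value falls within its reference range."""
    return (reference.path_length_m.contains(detected["path_length_m"])
            and reference.speed_mps.contains(detected["speed_mps"])
            and reference.distance_m.contains(detected["distance_m"]))

def classify_motion(detected: dict,
                    gesture_library: list[GestureReference]) -> Optional[str]:
    """Return the name of the first gesture whose criteria the motion satisfies, else None."""
    for reference in gesture_library:
        if matches_reference(detected, reference):
            return reference.name
    return None  # the motion is not (and does not include) a known radar gesture

# Example: a left-to-right swipe reference and one detected motion.
swipe = GestureReference("swipe-left-right",
                         path_length_m=ParameterRange(0.10, 0.60),
                         speed_mps=ParameterRange(0.20, 2.00),
                         distance_m=ParameterRange(0.05, 1.00))
motion = {"path_length_m": 0.25, "speed_mps": 0.8, "distance_m": 0.3}
assert classify_motion(motion, [swipe]) == "swipe-left-right"
```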
Further, in some implementations described above, the electronic device 102 may include machine learning techniques that can generate adaptive or adjusted reference values associated with a first gesture that the instructions request to be performed. Then, when the gesture training module 106 receives radar data corresponding to the user's motion in the radar field, the gesture training module 106 may compare the detected values to the adjusted reference values to determine whether the user's motion is a radar gesture. Because the adjusted reference values are based on a set of machine-learned parameters, the user's motion can be determined to be the requested radar gesture even when a comparison to the unadjusted reference values would not identify it as the requested gesture. Thus, the adjusted reference values allow the electronic device and the gesture training module 106 to learn to accept more variation in how the user makes the radar gesture (e.g., when the variation is consistent). Additional details related to using machine learning techniques are described with reference to FIG. 9.
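One way to picture the adjusted reference values is as reference ranges that widen toward the way a particular user consistently performs a gesture. The sketch below, which continues the hypothetical types from the previous sketch, uses a plain min/max-with-margin adjustment as a stand-in for whatever machine learning model is actually employed; it is an illustrative assumption, not the described implementation.

```python
def adjust_reference(reference: GestureReference,
                     accepted_attempts: list[dict],
                     margin: float = 0.15) -> GestureReference:
    """Widen each reference range to cover attempts the user consistently makes
    for this gesture (e.g., attempts accepted during the tutorial)."""
    def widened(range_: ParameterRange, key: str) -> ParameterRange:
        values = [attempt[key] for attempt in accepted_attempts]
        low = min(range_.low, min(values))
        high = max(range_.high, max(values))
        pad = margin * (high - low)
        return ParameterRange(low - pad, high + pad)

    return GestureReference(
        name=reference.name,
        path_length_m=widened(reference.path_length_m, "path_length_m"),
        speed_mps=widened(reference.speed_mps, "speed_mps"),
        distance_m=widened(reference.distance_m, "distance_m"),
    )

# A slow, short swipe that misses the default ranges can match the adjusted ones.
slow_swipe = {"path_length_m": 0.08, "speed_mps": 0.15, "distance_m": 0.3}
adjusted = adjust_reference(swipe, [slow_swipe])
assert classify_motion(slow_swipe, [swipe]) is None
assert classify_motion(slow_swipe, [adjusted]) == "swipe-left-right"
```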
Optionally, at 2308, a successful visual animation of the visual game element is presented on the display in response to determining that the user's motion in the radar field is or includes a radar gesture (e.g., a gesture that the instructions request to be performed). The successful visual animation of the visual game element indicates a successful advancement of game play, a positive result, or other positive feedback (e.g., presenting text such as "Good job!", or presenting a visual character, such as an animal or a Pokémon™ character (e.g., Pikachu™), smiling, jumping, or otherwise behaving in an affirmative manner). Accordingly, a visual feedback element (e.g., the visual animation) is presented on the display. The visual animation or visual feedback element indicates that the user's motion in the radar field is the radar gesture that the instructions request to be performed. For example, in response to determining that the user's motion is the requested radar gesture, the gesture training module 106 may present a successful visual animation of the visual game element 124 on the display 114.
Optionally, at 2310, an unsuccessful visual animation of the visual game element is presented on the display in response to determining that the user's motion in the radar field is not or does not include a radar gesture. The unsuccessful visual animation of the visual game element indicates a failure to advance game play, a negative result, or other negative feedback (e.g., presenting the text "Try again!", or presenting a visual character, such as an animal or a Pokémon™ character (e.g., Pikachu™), waiting, displaying a sad expression, or behaving in a neutral or negative manner, or any non-affirmative response other than the response for a successful gesture). For example, in response to determining that the user's motion is not the requested radar gesture, the gesture training module 106 may present an unsuccessful visual animation of the visual game element 124 on the display 114.
When the user 112 fails to make a successful gesture (e.g., the gesture training module 106 determines that the user's motion is not a radar gesture, as described above) and the gesture training module 106 presents an unsuccessful visual animation of the visual game element, the user 112 may attempt the radar gesture again (e.g., on the user's own initiative or in response to an instruction, such as a text instruction that may be displayed in the text area 1810, as described above). When the user 112 attempts the gesture again, the radar system generates corresponding radar data, and the electronic device 102 (e.g., using the radar system 104 and/or the gesture training module 106) may determine that the user's motion is a radar gesture and present a successful visual animation of the visual game element 124 on the display 114. Further, after the gesture training module 106 determines that the user's motion is a radar gesture (e.g., after a first, second, or subsequent attempt), the gesture training module 106 may present another successful visual animation of the visual game element, thereby advancing game play.
The other successful visual animation of the visual game element may advance game play by presenting another visual game element (e.g., the original visual game element or a new visual game element). That visual game element may include instructions (e.g., textual, non-textual, or implicit, as described with reference to method 900) that describe game play, describe gestures that may be used to interact with the visual game element, or request that the user perform a specific gesture. This process of making gestures and advancing (or failing to advance) the game based on the user's performance may be repeated with different visual game elements and different successful and unsuccessful visual animations of those visual game elements.
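Blocks 2302-2310, together with the retry behavior just described, amount to a simple loop: present a visual game element, evaluate each motion, play the successful or unsuccessful animation, and advance game play on success. The following sketch is again a hypothetical illustration (continuing the earlier `classify_motion` sketch); the callback names and the attempt limit are assumptions, not part of the described gesture training module.

```python
def run_tutorial_step(requested_gesture: str,
                      detected_motions,              # iterable of detected parameter dicts
                      gesture_library: list[GestureReference],
                      present_success_animation,     # e.g., treasure chest opens, song skips
                      present_failure_animation,     # e.g., feedback element shrinks back
                      max_attempts: int = 5) -> bool:
    """Evaluate successive motions until the requested radar gesture is recognized,
    returning True (advance game play) on success, False otherwise."""
    for attempt, detected in enumerate(detected_motions, start=1):
        if classify_motion(detected, gesture_library) == requested_gesture:
            present_success_animation()
            return True
        present_failure_animation()
        if attempt >= max_attempts:
            break
    return False
```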
The different visual game elements and the different successful and unsuccessful visual animations of the visual game elements may be associated with different gestures, such as direction-dependent gestures (e.g., a left-to-right swipe, a right-to-left swipe, a bottom-to-top swipe, or a top-to-bottom swipe) or direction-independent gestures (e.g., the omnidirectional swipe described above). Thus, the electronic device 102 may use the gaming environment to teach the user gestures or to allow the user to practice gestures in a pleasing and efficient manner. By way of example, consider FIGS. 24-33 below, which illustrate various examples of successful and unsuccessful visual animations of the visual game element 124.
FIG. 24 depicts, at 2400, a series of training screens that may be presented when the user 112 enters the gaming environment described with reference to FIG. 23. Consider a simple game in which the user performs a gesture or a series of gestures to open a treasure chest. Alternatively or additionally, the gaming environment may employ other common objects with simple or intuitive operations, such as toggle switches, rotary dials, swipe controls, pages of a book, and so forth. The gesture may be any suitable type of gesture, such as a radar-based touch-independent radar gesture (as described above), a touch gesture (e.g., a touch gesture performed on a touch screen), or another gesture (such as a camera-based touch-independent gesture); for this example, assume that the game gesture is a radar gesture.
For example, the gesture training module 106 may present a training game screen (training screen), as shown in detail view 2400-1, that explains how a gesture is performed to open the treasure chest 2402. The training screen in the detail view 2400-1 shows an animation of the exemplary smartphone 102 and the user's hand 112 in the vicinity of the smartphone. The training screen of the detail view 2400-1 also includes a text area 2404 (shown as a dashed rectangle) that may display text instructing the user to perform a radar gesture to play the game or manipulate a game element (e.g., "Swipe up to open the treasure chest" or "Swipe up above the phone"). The text area 2404 may also be used to display the title of the tutorial or an explanation of what a successful gesture does or what action will result (e.g., "Swipe up to open"). In some implementations, audio instructions may also be available (e.g., the user may select text, audio, or both, and select a default option, such as text only). In some implementations, the gaming environment also includes an exit control 2406 that allows the user to exit the training screen and the gaming environment to return to normal operation of the electronic device 102.
The animation sequence begins in another detail view 2400-2, where the user 112 begins making the requested gesture (a swipe up), as shown by arrow 2408. As the user initiates the gesture, the treasure chest 2402 begins to rise upward, as shown by another arrow 2410. The animation continues in the detail view 2400-3, where the gesture continues as the user's hand reaches the top of the exemplary smartphone 102 (as indicated by arrow 2412) and the treasure chest 2402 continues to rise (as indicated by arrow 2414). The description of the training screens in the treasure-chest game environment continues in the following description of FIG. 25.
FIG. 25 illustrates, at 2500, additional training screens that may be presented in the treasure-chest game environment. In this example, the additional training screens show the result of a successful "swipe up" radar gesture (e.g., a successful animation of a visual game element, as described with reference to FIG. 23). For example, detail view 2500-1 shows the treasure chest 2402 beginning to open (shown by double arrow 2502). The detail view 2500-1 also includes a text area 2404 with text instructions. In another detail view 2500-2, the success animation continues as the text instructions disappear and the treasure chest 2402 bursts open, releasing treasure coins 2504.
In both detail views 2500-1 and 2500-2, the training screen includes the optional exit control 2406. Further, in the training screens presented in FIGS. 24 and 25, audio or tactile (haptic) feedback may also be provided, if available (e.g., a sound or haptic sensation indicating a confirmed input). In the event that the user makes an unsuccessful gesture (not shown), the treasure chest 2402 does not lift or open, and the gesture training module 106 may present another animation that shows the failed gesture attempt, such as the treasure chest 2402 being washed away by the tide or sinking into the ground (e.g., the gesture training module 106 may present an unsuccessful animation of a visual game element, as described with reference to FIG. 23).
FIG. 26 illustrates, at 2600, another sequence of training screens that may be presented when the user 112 enters the gaming environment described with reference to FIG. 23. For example, consider a game in which the user performs a gesture or a series of gestures to call a pet 2602 (e.g., a cat, a dog, or a Pikachu™). The gesture may be any suitable type of gesture, such as a radar-based touch-independent radar gesture (as described above), a touch gesture (e.g., a touch gesture performed on a touch screen), or another gesture (such as a camera-based touch-independent gesture); for this example, assume that the game gesture is a radar gesture.
For example, the gesture training module 106 may present a training game screen (training screen), shown in detail view 2600-1, that explains how a gesture is performed to call the pet 2602 (in this case, a kitten). The training screen in detail view 2600-1 illustrates an animation of the exemplary smartphone 102 and the user's hand 112 in the vicinity of the smartphone. The training screen of detail view 2600-1 also includes a text area 2404 (shown as a dashed rectangle) that may display text instructing the user to perform a radar gesture to play the game (e.g., "Swipe a finger across the screen" or "Swipe left or right above the phone"). The text area 2404 may also be used to display the title of the tutorial or an explanation of what a successful gesture does or what action will result (e.g., "Swipe to say hello"). In some implementations, audio instructions may also be available (e.g., the user may select text, audio, or both, and select a default option, such as text only). In some implementations, the gaming environment also includes an exit control 2406 that allows the user to exit the training screen and the gaming environment to return to normal operation of the electronic device.
In another detail view 2600-2, the user 112 begins to make the requested gesture (a swipe across the screen), as indicated by arrow 2604. The pet 2602 and the exit control 2406 are also presented on the training screen in detail view 2600-2. The animation (e.g., a successful animation of a visual game element, as described with reference to FIG. 23) continues in detail view 2600-3, where the pet 2602 sits up and opens its mouth (e.g., to say "hello"). In the detail views 2600-1 through 2600-3, audio or tactile (haptic) feedback (e.g., a sound or haptic sensation indicating a successful gesture) may also be provided, if available. In the event that the user makes an unsuccessful gesture (not shown), the pet 2602 does not say "hello," and the gesture training module 106 may present another animation that shows the failed gesture attempt, such as the pet 2602 walking away or going to sleep (e.g., the gesture training module 106 may present an unsuccessful animation of a visual game element, as described with reference to FIG. 23).
FIG. 27 shows, at 2700, another sequence of training screens that may be presented when the user 112 enters the gaming environment described with reference to FIG. 23. For example, consider a game in which the user performs a gesture or a series of gestures to pet an animal or pet 2702 (e.g., a cat, a ferret, or a Pikachu™). The gesture may be any suitable type of gesture, such as a radar-based touch-independent radar gesture (as described above), a touch gesture (e.g., a touch gesture performed on a touch screen), or another gesture (such as a camera-based touch-independent gesture); for this example, assume that the game gesture is a radar gesture.
For example, the gesture training module 106 may present a training game screen (training screen), shown in detail view 2700-1, that explains how a gesture is performed to pet the animal or pet 2702 (in this case, a cat 2702). The training screen in detail view 2700-1 shows an animation of the exemplary smartphone 102 and the user's hand 112 in the vicinity of the smartphone. The training screen of the detail view 2700-1 also includes a text area 2404 (shown as a dashed rectangle) that can display text instructing the user to perform a radar gesture to play the game (e.g., "Swipe left and right to pet" or "Swipe left and right above the phone"). The text area 2404 can also be used to display the title of the tutorial or an explanation of what a successful gesture does or what action will result (e.g., "Swipe to pet"). In some embodiments, audio instructions can also be available (e.g., the user can select text, audio, or both, and select a default option, such as text only). The gaming environment also includes an exit control 2406 that allows the user to exit the training screen and the gaming environment to return to normal operation of the electronic device.
In another detail view 2700-2, the user 112 begins making the first portion of the requested gesture (a swipe to the right over the screen), as shown by arrow 2704. The cat 2702 and the exit control 2406 are also presented on the training screen in detail view 2700-2. The animation continues in detail view 2700-3, which shows the user making the second portion of the requested gesture (a swipe to the left over the screen), as shown by another arrow 2706. The training screens in detail views 2700-2 and 2700-3 also show the text area 2404 (with instructions) and the exit control 2406. In detail views 2700-1 through 2700-3, audio or tactile (haptic) feedback (e.g., a sound or haptic sensation indicating a successful gesture) may also be provided, if available. The description of the training screens in the pet-petting game environment continues in the following description of FIG. 28.
FIG. 28 illustrates, at 2800, additional training screens that may be presented in the pet-petting game environment. In this example, the additional training screens show the result of a successful "swipe left and right" gesture (e.g., a successful animation of a visual game element, as described with reference to FIG. 23). For example, detail view 2800-1 shows the cat 2702 closing its eyes and changing its facial expression. Detail view 2800-1 also includes a text area 2404 with text instructions. The user 112 continues to swipe left and right, as indicated by double arrow 2802.
In another detail view 2800-2, the successful animation continues as the cat 2702 opens its eyes and an animated heart 2804 appears above the cat 2702. In detail views 2800-1 and 2800-2, the training screen includes the optional exit control 2406. Additionally, in the training screens presented in FIG. 28, audio or tactile (haptic) feedback (e.g., a sound or haptic sensation indicating a confirmed input) may also be provided, if available. In the event that the user makes an unsuccessful gesture (not shown), the cat 2702 does not close its eyes and the heart does not appear. In addition, the gesture training module 106 may present another animation that shows the failed gesture attempt, such as the cat 2702 walking away or going to sleep (e.g., the gesture training module 106 may present an unsuccessful animation of a visual game element, as described with reference to FIG. 23).
FIG. 29 shows, at 2900, another sequence of training screens that may be presented when the user 112 enters the gaming environment described with reference to FIG. 23. For example, consider a game in which the user performs a gesture or a series of gestures to cause a character or animal 2902 (e.g., a lemur, a cat, or a Pikachu™) to jump. The gesture may be any suitable type of gesture, such as a radar-based touch-independent radar gesture (as described above), a touch gesture (e.g., a touch gesture performed on a touch screen), or another gesture (such as a camera-based touch-independent gesture); for this example, assume that the game gesture is a radar gesture.
For example, the gesture training module 106 may present a training game screen (training screen), shown in detail view 2900-1, that explains how a gesture is performed to make the character or animal 2902 (in this case, a lemur 2902) jump. The training screen in detail view 2900-1 shows an animation of the exemplary smartphone 102 and the user's hand 112 in the vicinity of the smartphone. The training screen of detail view 2900-1 also includes a text area 2404 (shown as a dashed rectangle) that may display text instructing the user to perform a radar gesture to play the game (e.g., "Swipe up to charge" or "Swipe up to spring"). The text area 2404 may also be used to display the title of the tutorial or an explanation of what a successful gesture does or what action will result (e.g., "Swipe to jump"). In some implementations, audio instructions may also be available (e.g., the user may select text, audio, or both, and select a default option, such as text only). In some implementations, the gaming environment also includes an exit control 2406 that allows the user to exit the training screen and the gaming environment to return to normal operation of the electronic device.
In another detail view 2900-2, the user 112 begins making the requested gesture (a swipe up over the screen), as shown by arrow 2904. The lemur 2902 and the exit control 2406 are also presented on the training screen in detail view 2900-2. The animation continues in detail view 2900-3, which shows the user continuing to make the requested gesture, as illustrated by another arrow 2906. The training screens in detail views 2900-2 and 2900-3 also show the text area 2404 (with instructions) and the exit control 2406. The description of the training screens in the swipe-to-jump game environment continues in the following description of FIG. 30.
FIG. 30 shows, at 3000, additional training screens that may be presented in the swipe-to-jump game environment. In this example, the additional training screens show the result of a successful "swipe up" gesture (e.g., a successful animation of a visual game element, as described with reference to FIG. 23). For example, detail view 3000-1 shows the lemur 2902 changing its facial expression and jumping into the air. Detail view 3000-1 also includes a game play indicator 3002, shown as a small flame. The game play indicator 3002 shows the user how many times the lemur must be made to jump to complete the game. In the example of detail view 3000-1, the first jump has been completed, as shown by the leftmost game play indicator 3002, which is shown larger and has a halo 3004.
In another detail view 3000-2, the successful animation continues as the lemur 2902 returns to the ground and returns to its original facial expression. In addition, the halo 3004 is fading away, as shown by the thinner and partially dashed lines. In both detail views 3000-1 and 3000-2, the training screen includes the optional exit control 2406. Further, in the training screens presented in FIGS. 29 and 30, audio or tactile (haptic) feedback (e.g., a sound or haptic sensation indicating a successful gesture) may also be provided, if available. In the event that the user makes an unsuccessful gesture (not shown), the lemur 2902 does not jump, and the gesture training module 106 may present another animation that shows the failed gesture attempt, such as the lemur 2902 shrugging its shoulders or lying down to sleep (e.g., the gesture training module 106 may present an unsuccessful animation of a visual game element, as described with reference to FIG. 23).
FIG. 31 shows, at 3100, another sequence of training screens that may be presented when the user 112 enters the gaming environment described with reference to FIG. 23. For example, consider a game in which the user performs a gesture or a series of gestures to splash water onto a penguin 3102 (or another character, such as a cat, a dog, or a Pikachu™). The gesture may be any suitable type of gesture, such as a radar-based touch-independent radar gesture (as described above), a touch gesture (e.g., a touch gesture performed on a touch screen), or another gesture (such as a camera-based touch-independent gesture); for this example, assume that the game gesture is a radar gesture.
For example, the gesture training module 106 may present a training game screen (training screen), as shown in detail view 3100-1, that explains how a gesture is performed to splash water onto the penguin 3102. The training screen in detail view 3100-1 shows an animation of the exemplary smartphone 102 and the user's hand 112 in the vicinity of the smartphone. The training screen of the detail view 3100-1 also includes a text area 2404 (shown as a dashed rectangle) that can display text instructing the user to perform a radar gesture to play the game (e.g., "Swipe a finger across the screen to splash" or "Swipe left or right above the phone"). The text area 2404 can also be used to display the title of the tutorial or an explanation of what a successful gesture does or what action will result (e.g., "Swipe to splash water").
In this example, the training screen also includes a sun 3104, which may help the user understand that the purpose of the game is to cool the penguin 3102 by splashing water thereon. In some implementations, audio instructions may also be available (e.g., the user may select text, audio, or both, and select a default option, such as text only). The gaming environment also includes an exit control 2406 that allows the user to exit the training screen and gaming environment to return to normal operation of the electronic device.
In another detail view 3100-2, the user 112 begins making the requested gesture (a swipe on or over the screen), as indicated by arrow 3106. The penguin 3102, the sun 3104, and the exit control 2406 are also presented on the training screen of detail view 3100-2. The animation (e.g., a successful animation of a visual game element, as described with reference to FIG. 23) continues in detail view 3100-3, where a splash 3108 of water lands on the penguin 3102. The detail view 3100-3 also includes a game play indicator 3110, shown as a small drop. The game play indicator 3110 shows the user how many times the penguin must be splashed to complete the game. In the example of detail view 3100-3, the first splash has been completed, as shown by the leftmost game play indicator 3110 being larger and having a halo 3112.
In detail views 3100-1 through 3100-3, audio or tactile (haptic) feedback (e.g., a sound or haptic sensation indicating a successful gesture) may also be provided, if available. In the event that the user makes an unsuccessful gesture (not shown), the penguin 3102 may not be splashed, and the gesture training module 106 may present another animation that shows the failed gesture attempt, such as the penguin 3102 walking away, wandering off, or crying (e.g., the gesture training module 106 may present an unsuccessful animation of a visual game element, as described with reference to FIG. 23).
FIG. 32 illustrates, at 3200, another sequence of training screens that may be presented when the user 112 enters the gaming environment described with reference to FIG. 23. For example, consider a game in which the user performs a gesture or a series of gestures to cause a bear 3202 (or another character, such as a cat, a dog, or a mythical or fictional character) to make grass grow using a magic wand 3204. The gesture may be any suitable type of gesture, such as a radar-based touch-independent radar gesture (as described above), a touch gesture (e.g., a touch gesture performed on a touch screen), or another gesture (such as a camera-based touch-independent gesture); for this example, assume that the game gesture is a radar gesture.
For example, the gesture training module 106 may present a training game screen (training screen), as shown in the detail view 3200-1, that explains how a gesture is performed to have the bear 3202 use the magic wand 3204. The training screen in the detail view 3200-1 shows an animation of the exemplary smartphone 102 and the user's hand 112 in the vicinity of the smartphone. The training screen of the detail view 3200-1 also includes a text area 2404 (shown as a dashed rectangle) that may display text instructing the user to perform a radar gesture to play the game (e.g., "Swipe a finger down the screen" or "Swipe down above the phone"). The text area 2404 may also be used to display the title of the tutorial or an explanation of what a successful gesture does or what action will result (e.g., "Swipe down to help the grass grow").
In this example, the training screen also includes a game play indicator 3206, shown as a small circle. The game play indicator 3206 shows the user how many times the bear 3202 must use the wand 3204 to complete the game. In some implementations, audio instructions may also be available (e.g., the user may select text, audio, or both, and select a default option, such as text only). The gaming environment also includes an exit control 2406 that allows the user to exit the training screen and the gaming environment to return to normal operation of the electronic device.
In another detail view 3200-2, the user 112 begins making the requested gesture (a swipe down over the screen), as shown by arrow 3208. The bear 3202, the magic wand 3204, the game play indicator 3206, and the exit control 2406 are also presented on the training screen in the detail view 3200-2. The animation (e.g., a successful animation of a visual game element, as described with reference to FIG. 23) continues in the detail view 3200-3, where the bear 3202 strikes the ground with the magic wand 3204 and the grass 3210 grows (e.g., the user swipes down and the bear 3202 brings the wand down). In the detail view 3200-3, the text in the text area 2404 has changed to let the user 112 know that the attempted gesture was successful (e.g., "Great! Big Bear loves to play"). The detail view 3200-3 also includes the game play indicator 3206 and the exit control 2406. In the example of detail view 3200-3, the first growth of grass has been completed, as shown by the leftmost game play indicator 3206, which is larger than the other game play indicators 3206.
In detail views 3200-1 through 3200-3, audio or tactile (haptic) feedback (e.g., a sound or haptic sensation indicating a successful gesture) may also be provided, if available. In the event that the user makes an unsuccessful gesture (not shown), the bear 3202 does not use the magic wand 3204 to grow the grass 3210, and the gesture training module 106 may present another animation that shows a failed gesture attempt, such as the bear 3202 walking away or sleeping (e.g., the gesture training module 106 may present an unsuccessful animation of the visual game element, as described with reference to fig. 23).
FIG. 33 illustrates, at 3300, another sequence of training screens that may be presented when the user 112 enters the gaming environment described with reference to FIG. 23. For example, consider a game in which the user performs a gesture or a series of gestures to pet or tickle a dog 3302 (or another character, such as a cat or a lizard). The gesture may be any suitable type of gesture, such as a radar-based touch-independent radar gesture (as described above), a touch gesture (e.g., a touch gesture performed on a touch screen), or another gesture (such as a camera-based touch-independent gesture); for this example, assume that the game gesture is a radar gesture.
For example, the gesture training module 106 may present a training game screen (training screen), as shown in detail view 3300-1, that explains how a gesture is performed to pet the dog 3302. The detail view 3300-1 shows an animation of the exemplary smartphone 102 and the user's hand 112 in the vicinity of the smartphone. The training screen of detail view 3300-1 also includes a text area 2404 (shown as a dashed rectangle) that can display text instructing the user to perform a radar gesture to play the game (e.g., "Swipe left and right to pet," "Swipe left and right above the phone," or "Reach in and move your fingers around to scratch"). The text area 2404 may also be used to display the title of the tutorial or an explanation of what a successful gesture does or what action will result, such as "Swipe to pet" or "Wave your fingers to tickle".
In this example, the training screen also includes a game play indicator 3304, shown as a heart. The game play indicator 3304 shows the user how many times the dog 3302 must be petted to complete the game. In some implementations, audio instructions may also be available (e.g., the user may select text, audio, or both, and select a default option, such as text only). The gaming environment also includes an exit control 2406 that allows the user to exit the training screen and the gaming environment to return to normal operation of the electronic device.
In another detail view 3300-2, the user 112 begins making the requested gesture (a swipe left or right on or over the screen), as indicated by arrow 3306. The dog 3302, the game play indicator 3304, and the exit control 2406 are also presented on the training screen in detail view 3300-2. The animation (e.g., a successful animation of a visual game element, as described with reference to FIG. 23) continues in detail view 3300-3, where the dog 3302 changes its posture and facial expression to indicate a successful gesture (e.g., a pet or a tickle). Detail view 3300-3 also includes the exit control 2406 and the game play indicator 3304. In this case, the game play indicator 3304 shows the user that the first pet has been completed, as shown by the leftmost game play indicator 3304, which is shown larger and surrounded by a halo 3308. Additionally, to help the user understand that the radar gesture was successful, the gesture training module 106 presents a heart shape 3310 over the dog 3302. In addition, to let the user know to play again, the text in the text area 2404 changes (e.g., from "Swipe to pet" or "Swipe left and right to pet" to "So cute! Reach out again to make Rover happy" or "Reach in and wiggle your fingers to tickle Rover").
In detail views 3300-1 through 3300-3, audio or tactile (haptic) feedback (e.g., a sound or haptic sensation indicating a successful gesture) may also be provided, if available. In the event that the user makes an unsuccessful gesture (not shown), the dog 3302 does not change its posture or facial expression, and the gesture training module 106 may present another animation that shows the failed gesture attempt, such as the dog 3302 walking away or sleeping (e.g., the gesture training module 106 may present an unsuccessful animation of a visual game element, as described with reference to FIG. 23).
Exemplary computing System
Fig. 34 illustrates various components of an exemplary computing system 3400 that may be implemented as any type of client, server, and/or electronic device as described with reference to fig. 1-33 to implement aspects that facilitate proficiency of users in using radar gestures to interact with electronic devices.
The computing system 3400 includes a communication device 3402 that enables wired and/or wireless communication of device data 3404 (e.g., radar data, authentication data, reference data, received data, data being received, data scheduled for broadcast, and data packets of the data). The device data 3404 or other device content may include configuration settings of the device, media content stored on the device, and/or information associated with a user of the device (e.g., the identity of a person within a radar field or customized gesture data). The media content stored on the computing system 3400 may include any type of radar, biometric, audio, video, and/or image data. The computing system 3400 includes one or more data inputs 3406 via which any type of data, media content, and/or input may be received, such as human speech, interaction with a radar field (e.g., radar gestures), touch input, user-selectable input or interaction (explicit or implicit), messages, music, television media content, recorded video content, and any other type of audio, video, and/or image data received from any content and/or data source.
Computing system 3400 also includes communication interfaces 3408, which may be implemented as any one or more of a serial and/or parallel interface, a wireless interface, any type of network interface, a modem, and as any other type of communication interface. The communication interfaces 3408 provide a connection and/or communication links between the computing system 3400 and a communication network over which other electronic, computing, and communication devices communicate data with the computing system 3400.
The computing system 3400 includes one or more processors 3410 (e.g., any of microprocessors, controllers, or other controllers) that may process various computer-executable instructions to control the operation of the computing system 3400 and to enable, or in which to implement, techniques that facilitate a user's proficiency in using radar gestures to interact with an electronic device. Alternatively or in addition, the computing system 3400 may be implemented with any one or combination of hardware, firmware, or fixed logic circuitry that is implemented in connection with processing and control circuits, which are generally identified at 3412. Although not shown, the computing system 3400 may include a system bus or data transfer system for coupling the various components within the device. A system bus can include any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. Also not shown, the computing system 3400 may include one or more non-radar sensors, such as the non-radar sensor 108.
The computing system 3400 also includes computer-readable media 3414, such as one or more storage devices that enable permanent and/or non-transitory data storage (e.g., as opposed to mere signal transmission), examples of which include Random Access Memory (RAM), non-volatile memory (e.g., any one or more of a read-only memory (ROM), flash memory, EPROM, EEPROM, etc.), and a disk storage device. A disk storage device may be implemented as any type of magnetic or optical storage device, such as a hard disk drive, a recordable and/or rewriteable Compact Disc (CD), any type of a Digital Versatile Disc (DVD), and so forth. The computing system 3400 may also include a mass storage media device (storage media) 3416.
Computer-readable media 3414 provides data storage mechanisms to store the device data 3404, as well as various device applications 3418 and any other types of information and/or data related to operational aspects of the computing system 3400. For example, an operating system 3420 can be maintained as a computer application with the computer-readable media 3414 and executed on processors 3410. The device applications 3418 may include a device manager, such as any form of a control application, software application, signal processing control module, code that is native to a particular device, abstraction module, gesture recognition module, and/or other module. The device applications 3418 may also include system components, engines, modules, or managers to enable facilitating proficiency in using radar gestures by users to interact with electronic devices, such as the radar system 104, the gesture training module 106, the application manager 116, or the gesture library 120. The computing system 3400 may also include or have access to one or more machine learning systems.
Although aspects have been described in language specific to features and/or methods that facilitate proficiency in using radar gestures to interact with electronic devices, the subject of the appended claims is not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as exemplary embodiments facilitating proficiency in user interaction with electronic devices using radar gestures, and other equivalent features and methods are intended to be within the scope of the appended claims. In addition, while various aspects are described, it is to be appreciated that each described aspect can be implemented independently or in combination with one or more other described aspects.

Claims (20)

1. A method performed by a radar gesture enabled electronic device for facilitating user proficiency in using gestures received with radar, the facilitating proceeding through a visual game play, the method comprising:
presenting a first visual game element on a display of the radar gesture-enabled electronic device;
receiving first radar data corresponding to a first motion of a user in a radar field provided by a radar system included in or associated with the radar gesture-enabled electronic device;
determining, based on the first radar data, whether the first motion of the user in the radar field includes a first radar gesture; and
in response to determining that the first motion of the user in the radar field comprises the first radar gesture, presenting a successful visual animation of the first visual game element, the successful visual animation of the first visual game element indicating a successful advancement of the visual game play; or
In response to determining that the first motion of the user in the radar field does not include the first radar gesture, presenting an unsuccessful visual animation of the first visual game element, the unsuccessful visual animation of the first visual game element indicating that the visual game play cannot be advanced.
2. The method of claim 1, further comprising:
in response to determining that the first motion of the user in the radar field does not include the first radar gesture, receiving second radar data corresponding to a second motion of the user in the radar field;
determining, based on the second radar data, whether the second motion of the user in the radar field includes the first radar gesture; and
in response to determining that the second motion of the user in the radar field comprises the first radar gesture, presenting the successful visual animation of the first visual game element.
3. The method of claim 2, wherein determining whether the second motion of the user in the radar field based on the second radar data comprises the first radar gesture further comprises:
using the second radar data to detect values of a set of parameters related to the second motion of the user in the radar field;
comparing the detected values of the set of parameters to a reference value for the set of parameters, the reference value corresponding to the first radar gesture.
4. The method of claim 2, further comprising:
in response to determining that the first motion or the second motion of the user in the radar field comprises the first radar gesture, presenting a second visual game element;
receiving third radar data corresponding to a third motion of the user in the radar field, the third radar data being received after the first radar data and the second radar data;
determining, based on the third radar data, whether the third motion of the user in the radar field includes a second radar gesture; and
in response to determining that the third motion of the user in the radar field comprises the second radar gesture, presenting a successful visual animation of the second visual game element, the successful visual animation of the second visual game element indicating another successful advance of the visual game play.
5. The method of claim 4, wherein determining whether the third motion of the user in the radar field based on the third radar data comprises the second radar gesture further comprises:
using the third radar data to detect values of a set of parameters associated with the third motion of the user in the radar field;
comparing the detected value of the set of parameters to a reference value for the set of parameters, the reference value corresponding to the second radar gesture.
6. The method of claim 4, wherein a field of view in which the first radar gesture or the second radar gesture is determined comprises a volume within one meter of the radar gesture-enabled electronic device and within an angle of greater than 10 degrees measured from a plane of a display of the radar gesture-enabled electronic device.
7. The method of claim 4, further comprising:
generating, with a machine learning model, an adjusted baseline value associated with the first radar gesture or the second radar gesture;
receiving fourth radar data corresponding to a fourth motion of the user in the radar field;
using the fourth radar data to detect values of a set of parameters associated with the fourth motion of the user in the radar field;
comparing the detected values of the set of parameters to the adjusted reference values;
based on the comparison, determining whether the fourth motion of the user in the radar field comprises the first radar gesture or the second radar gesture, and wherein the fourth motion of the user in the radar field is determined to not be the first radar gesture or the second radar gesture based on a comparison of the detected values of the set of parameters to a default reference value.
8. The method of claim 1, wherein determining, based on the first radar data, whether the first motion of the user in the radar field includes the first radar gesture further comprises:
using the first radar data to detect values of a first set of parameters associated with the first motion of the user in the radar field;
comparing the detected values of the first set of parameters to a first reference value for the first set of parameters, the first reference value corresponding to the first radar gesture.
9. The method of claim 1, wherein the first visual game element is presented without textual or non-textual instructions associated with how to perform the first radar gesture.
10. The method of claim 1, wherein the first visual game element is presented with supplemental instructions describing how to perform the first radar gesture.
11. A radar gesture-enabled electronic device, comprising:
a computer processor;
a radar system implemented at least in part in hardware, the radar system configured to:
providing a radar field;
sensing reflections from a user in the radar field;
analyzing the reflections from the user in the radar field; and
providing radar data based on an analysis of the reflections; and
a computer-readable medium having instructions stored thereon that, when executed by the computer processor, implement a gesture training module configured to:
presenting a first visual game element on a display of the radar gesture-enabled electronic device in a context of visual game play;
receiving a first subset of the radar data corresponding to a first motion of the user in the radar field;
determining, based on the first subset of the radar data, whether the first motion of the user in the radar field includes a first radar gesture;
in response to determining that the first motion of the user in the radar field comprises the first radar gesture, presenting a successful visual animation of the first visual game element, the successful visual animation of the first visual game element indicating a successful advancement of the visual game play; or
in response to determining that the first motion of the user in the radar field does not include the first radar gesture, presenting an unsuccessful visual animation of the first visual game element, the unsuccessful visual animation of the first visual game element indicating a failure to advance the visual game play.
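One way to picture the division of labor in claim 11, a radar system that turns reflections into radar data and a gesture training module that drives the visual game play from subsets of that data, is the skeleton below. Every class and method name is a hypothetical placeholder; the patent does not prescribe this decomposition.

```python
# Skeleton of the claim-11 structure; every name here is a hypothetical
# placeholder rather than an API defined by the patent.
from typing import Iterable, Iterator


class RadarSystem:
    """Provides a radar field, senses and analyzes reflections, and yields radar data."""

    def radar_data(self) -> Iterator[dict]:
        # Real hardware would stream analyzed reflection data; this stub yields nothing.
        return iter(())


class GestureTrainingModule:
    """Presents visual game elements and judges the user's radar gestures."""

    def __init__(self, radar: RadarSystem, display) -> None:
        self.radar = radar
        self.display = display

    def train(self, elements: Iterable) -> None:
        data_stream = self.radar.radar_data()
        for element in elements:
            self.display.present(element)                    # visual game element
            motion = next(data_stream, None)                 # subset of the radar data
            success = motion is not None and self.is_gesture(motion, element.expected_gesture)
            self.display.animate(element, success=success)   # success or failure animation

    def is_gesture(self, motion: dict, expected: dict) -> bool:
        # Placeholder comparison; a real module would compare detected parameter
        # values against reference values as in the earlier sketch.
        return all(abs(motion.get(k, float("inf")) - v) <= 0.1 for k, v in expected.items())
```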
12. The radar gesture-enabled electronic device of claim 11, wherein the gesture training module is further configured to:
in response to determining that the first motion of the user in the radar field does not include the first radar gesture, receiving a second subset of the radar data corresponding to a second motion of the user in the radar field;
determining, based on the second subset of the radar data, whether the second motion of the user in the radar field includes the first radar gesture; and
in response to determining that the second motion of the user in the radar field comprises the first radar gesture, presenting the successful visual animation of the first visual game element.
13. The radar gesture-enabled electronic device of claim 12, wherein determining, based on the second subset of the radar data, whether the second motion of the user in the radar field comprises the first radar gesture further comprises:
using the second subset of the radar data to detect values of a set of parameters related to the second motion of the user in the radar field;
comparing the detected values of the set of parameters to reference values for the set of parameters, the reference values corresponding to the first radar gesture.
14. The radar gesture-enabled electronic device of claim 12, wherein the gesture training module is further configured to:
in response to determining that the first motion or the second motion of the user in the radar field comprises the first radar gesture, presenting a second visual game element;
receiving a third subset of the radar data corresponding to a third motion of the user in the radar field, the third subset of the radar data being received after the first subset or the second subset of the radar data;
determining, based on the third subset of the radar data, whether the third motion of the user in the radar field includes a second radar gesture; and
in response to determining that the third motion of the user in the radar field comprises the second radar gesture, presenting a successful visual animation of the second visual game element, the successful visual animation of the second visual game element indicating another successful advance of the visual game play.
15. The radar gesture-enabled electronic device of claim 14, wherein determining, based on the third subset of the radar data, whether the third motion of the user in the radar field comprises the second radar gesture further comprises:
using the third subset of the radar data to detect values of a set of parameters associated with the third motion of the user in the radar field;
comparing the detected values of the set of parameters to reference values for the set of parameters, the reference values corresponding to the second radar gesture.
16. The radar gesture-enabled electronic device of claim 14, wherein the field of view within which the first radar gesture or the second radar gesture is determined comprises a volume within one meter of the radar gesture-enabled electronic device and within an angle greater than 10 degrees measured from a plane of the display of the radar gesture-enabled electronic device.
17. The radar gesture-enabled electronic device of claim 14, wherein the gesture training module is further configured to:
generating, with a machine learning model, adjusted reference values associated with the first radar gesture or the second radar gesture;
receiving a fourth subset of the radar data corresponding to a fourth motion of the user in the radar field;
using the fourth subset of the radar data to detect values of a set of parameters associated with the fourth motion of the user in the radar field;
comparing the detected values of the set of parameters to the adjusted reference values;
based on the comparison, determining whether the fourth motion of the user in the radar field comprises the first radar gesture or the second radar gesture, and wherein the fourth motion of the user in the radar field is determined not to be the first radar gesture or the second radar gesture based on a comparison of the detected values of the set of parameters to default reference values.
18. The radar gesture-enabled electronic device of claim 11, wherein determining, based on the first subset of the radar data, whether the first motion of the user in the radar field comprises the first radar gesture further comprises:
using the first subset of the radar data to detect values of a set of parameters associated with the first motion of the user in the radar field;
comparing the detected values of the set of parameters to reference values for the set of parameters, the reference values corresponding to the first radar gesture.
19. The radar gesture-enabled electronic device of claim 11, wherein the first visual game element is presented without textual instructions or non-textual instructions associated with how to perform the first radar gesture.
20. The radar gesture-enabled electronic device of claim 11, wherein the first visual game element is presented with supplemental instructions describing how to perform the first radar gesture, the supplemental instructions comprising one or both of textual instructions and non-textual instructions.
CN201911194059.8A 2019-10-03 2019-11-28 Facilitating user proficiency in using radar gestures to interact with electronic devices Pending CN110908516A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201962910135P 2019-10-03 2019-10-03
US62/910,135 2019-10-03
US16/601,452 US20210103337A1 (en) 2019-10-03 2019-10-14 Facilitating User-Proficiency in Using Radar Gestures to Interact with an Electronic Device
US16/601,452 2019-10-14

Publications (1)

Publication Number Publication Date
CN110908516A true CN110908516A (en) 2020-03-24

Family

ID=69820340

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911194059.8A Pending CN110908516A (en) 2019-10-03 2019-11-28 Facilitating user proficiency in using radar gestures to interact with electronic devices

Country Status (1)

Country Link
CN (1) CN110908516A (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170243433A1 (en) * 2011-10-20 2017-08-24 Robert A. Luciano, Jr. Gesture based gaming controls for an immersive gaming terminal
US9448634B1 (en) * 2013-03-12 2016-09-20 Kabam, Inc. System and method for providing rewards to a user in a virtual space based on user performance of gestures
US20150346820A1 (en) * 2014-06-03 2015-12-03 Google Inc. Radar-Based Gesture-Recognition through a Wearable Device
CN105278674A (en) * 2014-06-03 2016-01-27 谷歌公司 Radar-Based Gesture-Recognition through a Wearable Device
CN106537173A (en) * 2014-08-07 2017-03-22 谷歌公司 Radar-based gesture recognition
US20160189469A1 (en) * 2014-09-22 2016-06-30 Gtech Canada Ulc Gesture-based navigation on gaming terminal with 3d display
US10088908B1 (en) * 2015-05-27 2018-10-02 Google Llc Gesture detection and interactions
CN107589829A (en) * 2016-07-07 2018-01-16 迪斯尼实业公司 Location-based experience to interactive commodity
CN108287608A (en) * 2017-01-09 2018-07-17 英飞凌科技股份有限公司 The system and method for posture detection for remote equipment
CN108958490A (en) * 2018-07-24 2018-12-07 Oppo(重庆)智能科技有限公司 Electronic device and its gesture identification method, computer readable storage medium
US20200081560A1 (en) * 2018-09-09 2020-03-12 Microsoft Technology Licensing, Llc Changing a mode of operation of a computing device by a pen device
CN109857251A (en) * 2019-01-16 2019-06-07 珠海格力电器股份有限公司 Gesture identification control method, device, storage medium and the equipment of intelligent appliance

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112774181A (en) * 2021-01-11 2021-05-11 浙江星汉云图人工智能科技有限公司 Radar data processing method, processing system and computer storage medium
CN112774181B (en) * 2021-01-11 2023-11-10 北京星汉云图文化科技有限公司 Radar data processing method, radar data processing system and computer storage medium

Similar Documents

Publication Publication Date Title
KR102320754B1 (en) Facilitating user-proficiency in using radar gestures to interact with an electronic device
KR102479012B1 (en) Visual indicator for paused radar gestures
CN112753005B (en) Input method of mobile device
US20210064145A1 (en) Detecting and Processing Unsuccessfully Recognized or Unsuccessfully Utilized Non-Contact Gestures for a Computing System
US20210064144A1 (en) Methods for Reliable Acceptance of User Non-Contact Gesture Inputs for a Mobile Device
US11169615B2 (en) Notification of availability of radar-based input for electronic devices
US20200410072A1 (en) Radar-Based Authentication Status Feedback
US11314312B2 (en) Smartphone-based radar system for determining user intention in a lower-power mode
CN113614676B (en) Mobile device-based radar system for providing a multi-mode interface and method thereof
JP7433397B2 (en) Mobile device-based radar system for applying different power modes to multimode interfaces
CN111812633B (en) Detecting reference system changes in smart device-based radar systems
CN113853567A (en) IMU and radar based mitigation states
CN113906367A (en) Authentication management by IMU and radar
CN110908516A (en) Facilitating user proficiency in using radar gestures to interact with electronic devices
CN110989836A (en) Facilitating user proficiency in using radar gestures to interact with electronic devices
JP2023169225A (en) Context-sensitive control of radar-based gesture-recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200324