US20190332353A1 - Gamifying voice search experience for children - Google Patents


Info

Publication number
US20190332353A1
Authority
US
United States
Prior art keywords
search
audio
user interface
graphical user
prompt
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/504,486
Inventor
Shiva Jaini
Satoe Haile
Aaron Schurman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC filed Critical Google LLC
Priority to US16/504,486
Assigned to GOOGLE INC. reassignment GOOGLE INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HAILE, SATOE, SCHURMAN, AARON, JAINI, SHIVA
Assigned to GOOGLE LLC reassignment GOOGLE LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: GOOGLE INC.
Publication of US20190332353A1
Current legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16: Sound input; Sound output
    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24: Querying
    • G06F 16/248: Presentation of query results
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/953: Querying, e.g. by the use of web search engines
    • G06F 16/9535: Search customisation based on user profiles and personalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482: Interaction with lists of selectable items, e.g. menus
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842: Selection of displayed objects or displayed text elements

Definitions

  • In one implementation, if the audio input defining a search query is not provided within a certain time interval, one or more GUI elements representing one or more audible search prompts may be displayed by the search interface 111. This time interval may be a default time interval. In one implementation, a user is allowed to modify the default setting of the time interval.
  • A GUI element (e.g., a bubble with a question mark) representing an audio search prompt may be activated to cause the audio prompt to be played to a user, thereby aiding the user in searching for content of interest. In one implementation, a GUI element representing an audio search prompt may be activated upon a user selection of the GUI element or upon the expiration of a specific time period from the appearance of the GUI element on the screen, which can result in a visual indication of such a self-activation (e.g., by displaying an automated popping of the corresponding bubble).
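  • As a concrete illustration of this activation model, the following TypeScript sketch shows a prompt bubble that activates either on a user tap or after a self-activation timeout, whichever comes first. All names (PromptBubble, onActivated, etc.) are hypothetical and not taken from the patent.

```typescript
// Minimal sketch of a prompt bubble that can be activated by a user tap
// or by an automatic timeout ("pop"), but at most once.
type ActivationSource = "user" | "timeout";

class PromptBubble {
  private timer: ReturnType<typeof setTimeout> | null = null;
  private activated = false;

  constructor(
    private promptText: string,
    private onActivated: (prompt: string, source: ActivationSource) => void,
    selfActivateAfterMs: number,
  ) {
    // Self-activate (visually "pop") if the user does not tap in time.
    this.timer = setTimeout(() => this.activate("timeout"), selfActivateAfterMs);
  }

  // Called by the UI layer when the user taps the bubble.
  handleTap(): void {
    this.activate("user");
  }

  private activate(source: ActivationSource): void {
    if (this.activated) return; // activate at most once
    this.activated = true;
    if (this.timer !== null) clearTimeout(this.timer);
    this.onActivated(this.promptText, source); // e.g., play the audio prompt
  }
}
```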
  • Example audio search prompts can include, but are not limited to, hint questions such as “What is your favorite animal?” or “What is your favorite sport?”
  • In some implementations, the audio search prompts being played can be specific to the user. Some aspects of the determination and selection of customized audio search prompts for the user are discussed in more detail below.
  • When the user provides an audio response to an audio search prompt, this audio response defining a search query is provided (e.g., as audio data or text data resulting from voice recognition) to content sharing platform 120 and/or server 130 to perform a search. For example, if the audio search prompt “What is your favorite animal?” is played to the user, the user may respond, “Giraffe!” In this case, the audio response “Giraffe” may be provided as the search query. The content sharing platform 120 and/or server 130 may then perform a search for media items relating to giraffes, which can then be returned to the user device 110 and presented to the user by the media viewer 112.
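  • A minimal sketch of this response-to-results flow appears below. The recognizeSpeech and searchMediaItems functions are placeholder stand-ins for the voice recognition and search services described above, not actual platform APIs.

```typescript
// Sketch: turn the child's spoken answer into a search query and fetch
// the resulting media items. Both service functions are stubs.
interface MediaItem {
  id: string;
  title: string;
}

async function recognizeSpeech(audio: Blob): Promise<string> {
  // Stand-in for on-device or server-side voice recognition.
  return "giraffe"; // placeholder transcript
}

async function searchMediaItems(query: string): Promise<MediaItem[]> {
  // Stand-in for the content sharing platform's search service.
  return [{ id: "1", title: `Video about ${query}` }];
}

async function handlePromptResponse(audio: Blob): Promise<MediaItem[]> {
  // Prompt: "What is your favorite animal?" -> response: "Giraffe!"
  const transcript = await recognizeSpeech(audio);
  // The spoken answer itself defines the search query.
  return searchMediaItems(transcript);
}
```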
  • In one implementation, the content sharing platform 120 may be one or more computing devices (such as a rackmount server, a router computer, a server computer, a personal computer, a mainframe computer, a laptop computer, a tablet computer, a network connected television, a desktop computer, etc.), data stores (e.g., hard disks, memories, databases), networks, software components, and/or hardware components that may be used to provide a user with access to media items and/or provide the media items to the user. For example, the content sharing platform 120 may allow a user to consume, upload, search for, approve of (“like”), dislike, and/or comment on media items. The content sharing platform 120 may also include a website (e.g., a webpage) that may be used to provide a user with access to the media items.
  • In implementations of the disclosure, a “user” may be represented as a single individual. However, other implementations of the disclosure encompass a “user” being an entity controlled by a set of users and/or an automated source. For example, a set of individual users federated as a community in a social network may be considered a “user.” In another example, an automated consumer may be an automated ingestion pipeline, such as a topic channel, of the content sharing platform 120.
  • In one implementation, the content sharing platform 120 may include media items 121. Examples of a media item 121 can include, but are not limited to, digital video, digital movies, digital photos, digital music, website content, social media updates, electronic books (e-books), electronic magazines, digital newspapers, digital audio books, electronic journals, web blogs, real simple syndication (RSS) feeds, electronic comic books, software applications, etc. A media item 121 is also referred to as a content item.
  • In one implementation, a media item 121 may be consumed via the Internet and/or via a mobile device application. For brevity and simplicity, an online video (also hereinafter referred to as a video) is used as an example of a media item 121 throughout this document. As used herein, “media,” “media item,” “online media item,” “digital media,” “digital media item,” “content,” and “content item” can include an electronic file that can be executed or loaded using software, firmware or hardware configured to present the digital media item to an entity. In one implementation, the content sharing platform 120 may store the media items 121 using the data store 106. The content sharing platform 120 may also store playlists created by users, third parties or automatically. A playlist may include a list of content items (e.g., videos) that can be played (e.g., streamed) in sequential or shuffled order on the content sharing platform.
  • In one implementation, the server 130 may be one or more computing devices (e.g., a rackmount server, a server computer, etc.). The server 130 may be included in the content sharing platform 120 or be part of a different system. The server 130 may host a voice search system 140. In implementations of the disclosure, the voice search system 140 enables the identification of audio search prompts to help identify, curate, and present content appropriate for children.
  • Content appropriate for children may refer to one or more content items that are safe (e.g., not mature, violent or explicit) and/or relevant (e.g., entertaining or interesting) for children.
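  • As one way to picture such a rating, the sketch below filters search results against a child-appropriateness threshold. The numeric 0-to-1 rating scale and the field names are assumptions made for illustration; the disclosure does not specify a rating format.

```typescript
// Hypothetical filter: keep only media items whose rating marks them
// as appropriate (safe and relevant) for children.
interface RatedMediaItem {
  id: string;
  title: string;
  childAppropriatenessRating: number; // assumed scale: 0 (unsuitable) to 1 (ideal)
}

function filterForChildren(
  items: RatedMediaItem[],
  minRating = 0.8, // assumed threshold
): RatedMediaItem[] {
  return items.filter((item) => item.childAppropriatenessRating >= minRating);
}
```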
  • In implementations of the disclosure, the voice search system 140 may include several components (e.g., modules, sub-modules, applications, etc.) that can be executed by one or more processors of a machine hosting the voice search system 140. These components may include, for example, a search prompt unit 160, an age unit 162, a location unit 164, a time unit 165, and a search history unit 166. More or fewer components can be included in the voice search system 140 to provide the functionality described herein.
  • In one implementation, search prompt unit 160 determines which audio search prompts to send to user device 110 for a gamified voice search. Audio search prompts may be sent to the user device as audio data or text data that can be converted to audio search prompts at the user device 110. Prompts may be specifically determined on a per-user or per-device basis. Advantageously, customized prompts may allow for better retention of users within the content sharing platform and a better game-like voice search experience. Search prompt unit 160 may utilize age unit 162 to determine appropriate prompts to provide to a user, based on the user's age.
  • Similarly, location unit 164 may aid in determining appropriate prompts based on a user's location (or a location of the user device), and time unit 165 may help in determining appropriate prompts based on a calendar time of the search. For example, if a search is performed during the school year, scholastic prompts may be determined to be more appropriate than leisurely prompts. Or, if the search is performed around the holidays, holiday-themed prompts may be determined to be appropriate.
  • Search history unit 166 may aid in the procurement of prompts based on the search history of a user (or the search history associated with the user device 110). For example, based on a user's search history, it may be determined that the user enjoys content about a particular video game. Search history unit 166 may then identify other prompts related to the same video game to provide to the user.
  • In one implementation, the prompts may be compiled and stored in a database (e.g., a database of data store 106), with rankings that represent their relevancy or appropriateness to a user. Alternatively, the prompts may be stored on the user device 110.
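  • One way to model this modular design is as a set of pluggable signal units that the search prompt unit consults when ranking candidate prompts, as in the sketch below. The interfaces are illustrative only and do not mirror any actual implementation.

```typescript
// Hypothetical decomposition mirroring the units described above: each
// unit (age, location, time, search history) contributes a score for a
// candidate prompt, and the prompt unit ranks prompts by total score.
interface UserContext {
  age: number;
  deviceLocation?: string;
  searchDate: Date;
  searchHistoryTopics: string[];
}

interface PromptSignalUnit {
  score(prompt: string, ctx: UserContext): number;
}

class SearchPromptUnit {
  constructor(private units: PromptSignalUnit[]) {}

  rank(candidates: string[], ctx: UserContext): string[] {
    const total = (p: string) =>
      this.units.reduce((sum, unit) => sum + unit.score(p, ctx), 0);
    return [...candidates].sort((a, b) => total(b) - total(a));
  }
}

// Example signal unit in the spirit of time unit 165: favor
// holiday-themed prompts in December.
const timeUnit: PromptSignalUnit = {
  score: (prompt, ctx) =>
    ctx.searchDate.getMonth() === 11 && prompt.toLowerCase().includes("holiday")
      ? 1
      : 0,
};
```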
  • In general, functions described in one implementation as being performed by the content sharing platform 120 can also be performed on the user device 110 in other implementations, if appropriate. In addition, the functionality attributed to a particular component can be performed by different or multiple components operating together. The content sharing platform 120 can also be accessed as a service provided to other systems or devices through appropriate application programming interfaces, and thus is not limited to use in websites.
  • Although implementations of the disclosure are discussed in terms of content sharing platforms and promoting social network sharing of a content item on the content sharing platform, implementations may also be generally applied to any type of social network providing connections between users.
  • In situations in which the systems discussed herein collect personal information about users, or may make use of personal information, the users may be provided with an opportunity to control whether the content sharing platform 120 collects user information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current location), or to control whether and/or how to receive content from the content server that may be more relevant to the user. In addition, certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and used by the content sharing platform 120.
  • FIG. 2 is a flow diagram illustrating a method for gamifying video search, according to an implementation.
  • The method 200 may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof. In one implementation, media viewer 112 on user device 110 performs method 200.
  • At block 201, processing logic receives a user request to perform a search associated with an audio input. The user request may be a result of the user clicking on a GUI element (also referred to as a voice search indicator) in search interface 111 on the user device 110 that represents a voice search. For example, a voice search GUI element may be visually represented as a microphone, a person speaking, a question mark, or any other graphical representation of a voice search.
  • Processing logic then determines whether the audio input defining a search query has been provided during a first time interval. The first time interval may be predefined or customizable by a user. If the audio input defining the search query is received within the first time interval, processing logic may request a search (e.g., by sending the search query to a server tasked with performing the search) based on the search query at block 209, and receive the results of the search from the server. At block 210, processing logic presents the search results including one or more media items on the user device 110.
  • If the audio input has not been provided during the first time interval, processing logic displays, in the search interface 111 on the user device 110, one or more GUI elements representing one or more audio voice prompts pertaining to the voice search at block 204. For example, processing logic may display GUI elements in the form of bubbles floating around the screen of the user device 110. Each bubble may represent a single audio prompt that corresponds to a voice search. For example, one of the bubbles may correspond to the prompt “What is your favorite animal?”, and another bubble may represent the prompt “What is your favorite sport?” In some implementations, the individual GUI elements do not represent predefined audio search prompts, but are merely placeholders for audio search prompts. In this way, audio search prompts may be provided in a particular order regardless of which GUI element the user activates first.
  • Next, processing logic determines whether one of the GUI elements is selected by the user before a second predefined time interval has expired from the appearance of the GUI elements in the search interface 111. It should be noted that once the user request to perform the voice search is received from the user at block 201, processing logic may continue actively listening for audio input of the user for a third time interval. The third time interval may be predefined and/or customizable by a user. By listening for audio input of the user even after the first predefined time interval expires and the GUI elements representing prompts are displayed, processing logic allows the user to provide such audio input at any time, without any further manual interaction (e.g., via a touch input or keyboard input) with the search interface 111 on the user device.
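  • The interplay of the three intervals can be sketched as a small client-side controller, shown below. The interval lengths and hook names are assumptions; the disclosure leaves the intervals as predefined or user-customizable values.

```typescript
// Hypothetical controller for the three intervals described above:
// 1st interval: wait for spontaneous audio before showing bubbles;
// 2nd interval: wait for a bubble tap before auto-activating one;
// 3rd interval: keep listening for audio the whole time.
interface VoiceSearchHooks {
  showPromptBubbles(): void;  // block 204
  autoActivateBubble(): void; // block 207
  stopListening(): void;
}

function runGamifiedVoiceSearch(
  hooks: VoiceSearchHooks,
  firstIntervalMs = 5_000,   // assumed default
  secondIntervalMs = 12_000, // assumed default
  thirdIntervalMs = 30_000,  // assumed default
): { onAudioInput(): void; onBubbleTap(): void } {
  let audioReceived = false;
  let bubbleTapped = false;

  // First interval: if no audio arrives, display the prompt bubbles.
  setTimeout(() => {
    if (audioReceived) return;
    hooks.showPromptBubbles();
    // Second interval starts when the bubbles appear.
    setTimeout(() => {
      if (!audioReceived && !bubbleTapped) hooks.autoActivateBubble();
    }, secondIntervalMs);
  }, firstIntervalMs);

  // Third interval: stop actively listening if nothing is ever said.
  setTimeout(() => {
    if (!audioReceived) hooks.stopListening();
  }, thirdIntervalMs);

  return {
    onAudioInput: () => { audioReceived = true; },
    onBubbleTap: () => { bubbleTapped = true; },
  };
}
```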
  • If one of the GUI elements is selected, processing logic plays an audio search prompt corresponding to the activated GUI element at block 206. As noted above, audio search prompts may be specific to a user. In this manner, the user has a unique and customized gamified voice search experience tailored specifically to assist the user in performing voice search.
  • If the user does not select any of the GUI elements within the second predefined time interval, processing logic automatically (without any user interaction) activates a GUI element at block 207, such as by visually illustrating an automated popping of a corresponding bubble. In one implementation, processing logic may determine which GUI element should be automatically activated in a random (non-deterministic) manner. In another implementation, processing logic may determine which GUI element should be automatically activated based on rankings of the associated audio search prompts. For example, if five GUI elements are displayed, representing five audio search prompts, processing logic may activate the GUI element corresponding to the audio search prompt associated with the topic that is most likely to be of interest to the user. Audio search prompts may be ranked according to any number of attributes, including, but not limited to, predicted interest to the user, frequency of appearance (i.e., how many times the audio search prompt has been provided to the user before), etc.
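  • Both auto-activation strategies can be sketched in a few lines, as below; the rankScore field is a hypothetical stand-in for whatever ranking attributes (predicted interest, prior frequency, etc.) an implementation might use.

```typescript
// Hypothetical selection of which displayed bubble to auto-activate:
// uniformly at random, or the bubble with the highest-ranked prompt.
// Assumes the bubbles array is non-empty.
interface DisplayedBubble {
  promptText: string;
  rankScore: number; // e.g., predicted interest minus prior frequency
}

function pickBubbleToActivate(
  bubbles: DisplayedBubble[],
  strategy: "random" | "ranked",
): DisplayedBubble {
  if (strategy === "random") {
    return bubbles[Math.floor(Math.random() * bubbles.length)];
  }
  return bubbles.reduce((best, b) => (b.rankScore > best.rankScore ? b : best));
}
```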
  • Method 200 then continues to block 206, where the audio voice prompt corresponding to the activated GUI element is played, as discussed above.
  • Next, processing logic receives audio input of the user in response to the audio voice prompt. The audio input provided by the user defines a query for the search. In some cases, the audio input is a direct response to the question posed by the audio search prompt. For example, audio input of “Giraffe!” may be received as an answer to the audio search prompt “What is your favorite animal?” In other cases, the audio input may not be a logical answer to the question posed by the audio search prompt. For example, in response to the same prompt, the audio input may be “Baseball!” In such a case, the audio input “Baseball!” may still be used to define the query for the search, even though it does not directly answer the provided prompt.
  • Processing logic may then request a search (e.g., by sending the search query to a server tasked with performing the search) based on the search query at block 208 and receive the results of the search from the server. At block 210, processing logic presents the search results including one or more media items. The media items may be presented by media viewer 112 on the user device 110.
  • FIG. 3 is a flow diagram illustrating a method for identifying search prompts for a gamified voice search, according to some implementations.
  • The method 300 may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof. In one implementation, voice search system 140 of FIG. 1 performs method 300.
  • Processing logic receives, from a user device (e.g., user device 110 of FIG. 1), an indication that an audio input for a voice search has not been received during a predefined time interval.
  • Next, processing logic determines one or more search prompts specific to a user of the user device. Determining search prompts may involve determining rankings of stored search prompts based on one or more characteristics including, for example, an age of the user, a location of the user (or user device), seasonal timing of the search, the search history of the user, etc. Characteristics like those listed above may be combined (e.g., in a machine learning algorithm) to determine an ordered list of prompts specifically tailored to a particular user. The ranking of prompts may change as the underlying characteristics change. Furthermore, new prompts may be continually determined based on updated user characteristics.
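  • As a rough illustration of combining such characteristics, a server could score each stored prompt against a user profile and sort, as sketched below. The prompt metadata, weights, and linear scoring are all invented for the sketch; the disclosure says only that the characteristics may be combined, for example in a machine learning algorithm.

```typescript
// Hypothetical server-side ranking of stored prompts for a user.
interface StoredPrompt {
  text: string;
  minAge: number;
  maxAge: number;
  seasonal?: "holiday" | "school-year" | "summer";
  topics: string[];
}

interface UserProfile {
  age: number;
  favoriteTopics: string[]; // e.g., inferred from search history
  searchDate: Date;
}

function scorePrompt(p: StoredPrompt, u: UserProfile): number {
  let score = 0;
  if (u.age >= p.minAge && u.age <= p.maxAge) score += 2; // age fit
  const month = u.searchDate.getMonth();
  if (p.seasonal === "holiday" && month === 11) score += 1; // December boost
  if (p.seasonal === "summer" && month >= 5 && month <= 7) score += 1;
  // History fit: prompts on topics the user already enjoys rank higher.
  score += p.topics.filter((t) => u.favoriteTopics.includes(t)).length;
  return score;
}

function rankPrompts(prompts: StoredPrompt[], u: UserProfile): StoredPrompt[] {
  return [...prompts].sort((a, b) => scorePrompt(b, u) - scorePrompt(a, u));
}
```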
  • Processing logic may then provide the one or more search prompts specific to the user for playing to the user of the user device. In some implementations, prompts are provided one at a time, or several at a time, to be played on demand on the user device. In other implementations, prompts may be provided for storage on the user device.
  • Next, processing logic receives, from the user device, a search query based on audio input provided by the user in response to one of the search prompts, and at block 310, processing logic searches for one or more media items based on the search query and returns the search result to the user device.
  • FIG. 4 illustrates an example gamified voice search user interface 400 corresponding to a voice search activation stage, according to an implementation.
  • The gamified voice search interface 400 may be presented on a user device and may include a GUI element 402 that represents voice search initiation. If activated by a user, voice search GUI element 402 may initiate a voice search option. GUI element 402 may depict a microphone, a person speaking, or another graphical representation of a voice search. Once activated, GUI element 402 may be transformed in some way (e.g., it may be animated by pulsing, bouncing, changing colors, etc.) to indicate that a microphone is currently activated and that listening for a voice search query has started.
  • The gamified voice search interface 400 may also include one or more GUI elements 404 that represent audio search prompts. GUI elements 404 may depict a question mark (as shown) or some other graphical representation of a search prompt. In some implementations, GUI elements 404 are animated, floating around the screen of the user device. GUI elements 404 may collide with and bounce off of each other and off of other GUI elements (e.g., GUI element 402).
  • The gamified voice search interface 400 may also include a GUI element 406 that, when activated, allows a user to enter a textual search mode. A user may also activate a GUI element 408 to go back to a previous screen of the application providing the gamified voice search interface 400.
  • FIG. 5 illustrates an example gamified voice search user interface 500 corresponding to a search prompt activation stage, according to implementations of the disclosure.
  • The gamified voice search interface 500 includes a representation 502 of an activated GUI element 404 of FIG. 4 that is animated in response to the activation (caused by user selection or automated activation, as described above). For example, the activated GUI element may “pop” and/or display “waves” extending outward, as shown by representation 502. Interface 500 may also display a search prompt 504 in textual form. The search prompt 504 may also be played as an audio prompt (e.g., for young users who cannot read or otherwise understand the textual prompt). In some implementations, the search prompt 504 is displayed in textual form at substantially the same time as the audio prompt is playing.
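  • The simultaneous textual and audio presentation could look like the short sketch below, which shows the prompt text and starts audio playback together; the function and element names are illustrative.

```typescript
// Hypothetical sync: show the prompt's text at roughly the same time as
// its audio version starts playing (cf. search prompt 504).
function showPromptWithAudio(
  promptEl: HTMLElement,
  text: string,
  audioUrl: string,
): Promise<void> {
  promptEl.textContent = text;       // textual form of the prompt
  const audio = new Audio(audioUrl); // audio form of the same prompt
  return audio.play();               // resolves once playback begins
}
```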
  • FIG. 6 illustrates another example gamified voice search user interface 600 corresponding to a search prompt activation stage, according to implementations of the disclosure.
  • In FIG. 6, the activated GUI element of FIG. 4 and FIG. 5 continues its animation by expanding outward to indicate that it was activated. In addition, the gamified voice search interface 600 displays the full text of search prompt 604. In some implementations, the audio prompt associated with the textual search prompt 604 completes playing when the textual search prompt 604 finishes its progressive, animated display.
  • FIG. 7 illustrates an example gamified voice search user interface 700 corresponding to a search query definition stage, according to implementations of the disclosure.
  • The gamified voice search interface 700 includes a search prompt 702, which can visually indicate that a response to the search prompt 702 has been received. For example, the search prompt 702 may be displayed in a faded or different color when a response is received. The response may be received as audio input and shown in textual form 704 on the gamified search interface 700 (e.g., at substantially the same time as the user is providing the audio response).
  • FIG. 8 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 800 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
  • In alternative implementations, the machine may be connected (e.g., networked) to other machines in a local area network (LAN), an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a network connected television, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. In one implementation, computer system 800 may be representative of a server, such as server 130, executing a voice search system 140, as described with respect to FIGS. 1-7.
  • The exemplary computer system 800 includes a processing device 802, a main memory 804 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 806 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 818, which communicate with each other via a bus 808.
  • Any of the signals provided over various buses described herein may be time multiplexed with other signals and provided over one or more common buses. Additionally, the interconnection between circuit components or blocks may be shown as buses or as single signal lines. Each of the buses may alternatively be one or more single signal lines, and each of the single signal lines may alternatively be buses.
  • Processing device 802 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computer (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 802 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 802 is configured to execute processing logic 826 for performing the operations and steps discussed herein.
  • the computer system 800 may further include a network interface device 822 .
  • the computer system 800 also may include a video display unit 810 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 812 (e.g., a keyboard), a cursor control device 814 (e.g., a mouse), and a signal generation device 820 (e.g., a speaker).
  • The data storage device 818 may include a computer-readable storage medium 824 (also referred to as a machine-readable storage medium), on which is stored one or more sets of instructions 826 (e.g., software) embodying any one or more of the methodologies of functions described herein. The instructions 826 may also reside, completely or at least partially, within the main memory 804 and/or within the processing device 802 during execution thereof by the computer system 800, the main memory 804 and the processing device 802 also constituting machine-readable storage media. The instructions 826 may further be transmitted or received over a network 874 via the network interface device 822.
  • the computer-readable storage medium 824 may also be used to store instructions to perform a method for identifying content appropriate for children algorithmically without human interaction, as described herein. While the computer-readable storage medium 824 is shown in an exemplary implementation to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • A machine-readable medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read-only memory (ROM); random-access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or another type of medium suitable for storing electronic instructions.

Abstract

Implementations disclose gamifying voice search experience. An example method includes receiving, by a processing device, a request to initiate a search; providing for display a plurality of graphical user interface elements that represent a plurality of audio prompts pertaining to the search; in response to determining a user selection of the graphical user interface elements is absent during a predefined time interval, activating one of the graphical user interface elements; playing an audio prompt of the activated graphical user interface element, wherein the audio prompt is provided prior to receiving audio input associated with the search; receiving, by the processing device, audio input in response to the audio prompt, the audio input providing data for the search; and initiating, by the processing device, the search based on the audio input.

Description

    RELATED APPLICATIONS
  • The present application is a continuation of application Ser. No. 15/176,654, filed Jun. 8, 2016, entitled “GAMIFYING VOICE SEARCH EXPERIENCE FOR CHILDREN,” which is incorporated by reference herein.
  • TECHNICAL FIELD
  • This disclosure relates to the field of content searches and, in particular, to gamifying a voice search experience for children.
  • BACKGROUND
  • On the Internet, social networks allow users to connect to and share information with each other. Many social networks include a content sharing aspect that allows users to upload, view, and share content, such as video content, image content, audio content, text content, and so on (which may be collectively referred to as “media items” or “content items”). Such media items may include audio clips, movie clips, TV clips, and music videos, as well as amateur content such as video blogging, short original videos, pictures, photos, other multimedia content, etc. Users may use computing devices (such as smart phones, cellular phones, laptop computers, desktop computers, netbooks, tablet computers) to use, play, and/or consume media items (e.g., watch digital videos, and/or listen to digital music).
  • SUMMARY
  • The following is a simplified summary of the disclosure in order to provide a basic understanding of some aspects of the disclosure. This summary is not an extensive overview of the disclosure. It is intended to neither identify key or critical elements of the disclosure, nor delineate any scope of the particular implementations of the disclosure or any scope of the claims. Its sole purpose is to present some concepts of the disclosure in a simplified form as a prelude to the more detailed description that is presented later.
  • In an aspect of the disclosure, a method includes receiving, by a processing device of a user device, a user request to perform a search associated with an audio input. The method further includes displaying, on a graphical user interface (GUI) of the user device, one or more GUI elements representing one or more audio prompts pertaining to the search upon determining that the audio input for the search has not been provided during a first predefined time interval. The method further includes playing an audio prompt corresponding to the activated GUI element in response to an activation of one of the one or more GUI elements within a second predefined time interval. The method further includes receiving, by the processing device, an audio response to the audio prompt, the audio response indicating a query for the search, and presenting, on the user device, a search result for the indicated query, the search result comprising one or more media items.
  • In some implementations, the method also includes: in response to the activation, displaying a visual prompt corresponding to the activated GUI element, in addition to the playing of the audio prompt. In some implementations, the one or more audio prompts are based on an age of a user of the user device. In some implementations, the one or more audio prompts are based on a location of the user device. In some implementations, the one or more audio prompts are based on the search history of a user of the user device. In some implementations, the one or more audio prompts are based on a calendar time of the search. In some implementations, the one of the one or more GUI elements is activated when a user selects the GUI element.
  • In some implementations, the method also includes: automatically playing the audio prompt corresponding to the activated GUI element responsive to detecting that a user has not activated any of the one or more GUI elements within the second predefined time interval. In some implementations, the method also includes: after playing the audio prompt, allowing the user to provide the audio response within a third time interval without any manual interaction with the GUI. In some implementations, each media item of the one or more media items comprised by the search result has a rating that indicates appropriateness of a respective media item for children.
  • In another aspect of the disclosure, a method includes receiving, from a user device, an indication that an audio input for a search has not been received during a predefined time interval. The method further includes determining, by a processing device, one or more search prompts specific to a user of the user device. The method further includes: providing the one or more search prompts specific to the user for presentation to the user of the user device and receiving, from the user device, an answer to one of the one or more search prompts. The method further includes: searching, by the processing device, for one or more media items based on the answer to the one of the one or more search prompts.
  • In some implementations, determining the one or more search prompts includes: receiving an age of a user of the user device; and determining the one or more search prompts based on the age. In some implementations, determining the one or more search prompts includes: determining at least one of: a location of the user device, a search history of a user of the user device, or a calendar time of the search; and determining the one or more search prompts based on the location, the search history, or the calendar time of the search. In some implementations, each media item of the one or more media items has a rating that indicates appropriateness of a respective media item for children.
  • Computing devices for performing the operations of the above described methods and the various implementations described herein are disclosed. Computer-readable media that store instructions for performing operations associated with the above described methods and the various implementations described herein are also disclosed.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present disclosure is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
  • FIG. 1 is a block diagram illustrating an exemplary network architecture in which implementations of the present disclosure may be implemented.
  • FIG. 2 is a flow diagram illustrating a method for gamifying video search, according to an implementation.
  • FIG. 3 is a flow diagram illustrating a method for identifying search prompts for a gamified voice search, according to some implementations.
  • FIG. 4 illustrates an example gamified voice search user interface corresponding to a voice search activation stage, according to an implementation.
  • FIG. 5 illustrates an example gamified voice search user interface corresponding to a search prompt activation stage, according to implementations of the disclosure.
  • FIG. 6 illustrates another example gamified voice search user interface corresponding to a search prompt activation stage, according to implementations of the disclosure.
  • FIG. 7 illustrates an example gamified voice search user interface corresponding to a search query definition stage, according to implementations of the disclosure.
  • FIG. 8 is a block diagram illustrating one implementation of a computer system, according to an implementation.
  • DETAILED DESCRIPTION
  • Aspects of the disclosure are directed to gamifying a voice search experience for children. In particular, implementations are described for providing a user interface that assists children with content searches using a game-like approach.
  • Existing search solutions rely on textual prompts to assist a user in searching for relevant content. Such textual prompts do not aid young children, who may not yet know how to read, with finding appropriate content. Voice search may be better suited for young children. However, voice search can be ineffective for children who are new to voice search and do not know what to ask for when they try to access or use voice search features. In addition, children often pause when prompted to speak, and sometimes get nervous and stutter, which leads to inaccurate voice recognition and therefore inaccurate search results.
  • Aspects of the present disclosure transform content searching into a game-like experience that teaches young users how to search for interesting content. In particular, aspects of the present disclosure provide a graphical user interface (GUI) that includes a voice search indicator visually illustrating a voice search option. When a user selects the voice search indicator in the GUI presented on the screen of the user device, one or more search prompt indicators can appear in the GUI. These search prompt indicators can be presented, for example, as bubbles floating around the screen of the user device. The bubbles may be shown with a visual search indicator (e.g., a question mark) to demonstrate that they pertain to search queries.
  • If the user selects one of the bubbles (e.g., by clicking on it), a corresponding audio search prompt can be played for the user. A search prompt can be a hint question that identifies a topic for a search. For example, the audio search prompt may be “What is your favorite animal?” If the user provides a voice response to such a prompt (e.g., identifying a specific animal), a search can be performed based on the response, and the resulting content items (e.g., videos related to the specific animal) can be presented to the user. In some implementations, once the user selects the bubble and provides a voice response to the audio search prompt, no other user input is needed to initiate the search.
  • According to some aspects of the present disclosure, if a bubble is not selected within a threshold amount of time, the GUI may display a bubble popping automatically, resulting in an audio prompt being played to the user. For example, one of the bubbles can randomly pop after a 10-15 second interval, and the user can hear a hint question (audio search prompt) suggesting a topic for a search.
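  • A minimal sketch of that automatic pop appears below, assuming a random choice among the displayed bubbles and the 10-15 second window from the example above; the function names are illustrative.

```typescript
// Hypothetical auto-pop: if the user selects no bubble, pop a randomly
// chosen bubble after a random 10-15 second delay. Returns the timer
// handle so the caller can clearTimeout if the user taps a bubble first.
function scheduleRandomPop(
  bubbleCount: number,
  popBubble: (index: number) => void,
): ReturnType<typeof setTimeout> {
  const delayMs = 10_000 + Math.random() * 5_000;        // 10-15 s
  const index = Math.floor(Math.random() * bubbleCount); // random bubble
  return setTimeout(() => popBubble(index), delayMs);
}
```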
  • Audio search prompts played for the user (when selected or popped automatically) can be continuously changed to suggest new topics for content searches, thereby teaching young users about various things for which they can search through this game-like experience. In addition, in some implementations, audio search prompts can be customized for a specific user. For example, an audio search prompt to be played can be selected for a user based on the age, location or search history of the user, the current date/time, the user's demographics, the prior history of prompts already posed to the user, etc. For example, users within a certain age range may be asked questions from a prompt list specific to that age range. Users may be asked specific seasonal questions around the holidays or during other calendar timing events. Users may be asked school related questions during the school year, and “fun” questions during the summer. A prompt selection may be constantly refined for a user as search histories are updated to provide insight into the user's content preference.
  • Accordingly, aspects of the present disclosure provide a gamified voice search experience to assist young users with performing searches specifically tailored to the young users. As a result, children can search for and be presented with age-appropriate content selected from a very large number of content items (e.g., billions of videos).
  • The present disclosure often references videos for simplicity and brevity. However, the teachings of the present disclosure apply to media items generally and can be applied to various types of content or media items, including, for example, video, audio, text, images, program instructions, etc. The media items referred to herein represent viewable and/or shareable media items.
  • FIG. 1 illustrates an example system architecture 100, in accordance with one implementation of the disclosure. The system architecture 100 includes a user device 110, a network 105, a data store 106, a content sharing platform 120, and a server 130. In one implementation, network 105 may include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN) or wide area network (WAN)), a wired network (e.g., Ethernet network), a wireless network (e.g., an 802.11 network or a Wi-Fi network), a cellular network (e.g., a Long Term Evolution (LTE) network), routers, hubs, switches, server computers, and/or a combination thereof. In one implementation, the data store 106 may be a memory (e.g., random access memory), a cache, a drive (e.g., a hard drive), a flash drive, a database system, or another type of component or device capable of storing data. The data store 106 may also include multiple storage components (e.g., multiple drives or multiple databases) that may also span multiple computing devices (e.g., multiple server computers).
  • User devices 110 may include computing devices such as personal computers (PCs), laptops, mobile phones, smart phones, tablet computers, network connected televisions, netbook computers, etc. The user device 110 may include a media viewer 112. In one implementation, the media viewer 112 may be an application that allows a user to view content, such as images, videos, web pages, documents, etc. For example, the media viewer 112 may be a web browser that can access, retrieve, present, and/or navigate content (e.g., web pages such as Hyper Text Markup Language (HTML) pages, digital media items, etc.) served by a web server. The media viewer 112 may render, display, and/or present the content (e.g., a web page, a media viewer) to a user. The media viewer 112 may also display an embedded media player (e.g., a Flash® player or an HTML5 player) that is embedded in a web page (e.g., a web page that may provide information about a product sold by an online merchant). In another example, the media viewer 112 may be a standalone application (a mobile application or “app”) that allows users to search for digital media items (e.g., digital videos, digital images, electronic books, etc.) and can present a media player to play video and audio media items for the user. According to aspects of the present disclosure, the media viewer 112 may be a children-specific application that allows users to view and search for content appropriate for children.
  • The media viewer 112 may be provided to the user device 110 by the server 130 and/or content sharing platform 120. The media viewer 112 may include a search interface 111 that allows a user to search for content hosted by content sharing platform 120. The search interface 111 may include a voice search indicator that can be selected to initiate a voice search option. The search interface 111 may also include other GUI elements that allow a user to interact with the gamified voice search features described herein. For example, search interface 111 may receive a user request to perform a voice search (a search associated with an audio input). A user may activate a voice search indicator (e.g., by clicking on it) in search interface 111 and provide audio input to initiate a voice search. If the audio input is received from the user, the audio input may be sent to content sharing platform 120 and/or server 130 to undergo voice recognition operations to define a search query. Alternatively, voice recognition can be performed on the user device 110, and the resulting search query can be sent to content sharing platform 120 and/or server 130.
  • If the audio input is not received within a certain time interval from the activation of the voice search indicator in the search interface 111, one or more GUI elements representing one or more audible search prompts (e.g., one or more bubbles with question marks inside) may be displayed by the search interface 111. The above time interval may be a default time interval. In one implementation, a user is allowed to modify the default setting of the time interval.
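  • One plausible client-side arrangement of this timeout is sketched below; `startListening` and `showPromptBubbles` are hypothetical hooks into the media viewer, and the default interval is an assumed value.

```typescript
// Sketch: start listening when the voice search indicator is activated; if no
// audio input arrives within the (user-modifiable) default interval, display
// the GUI elements representing audible search prompts.
async function onVoiceSearchActivated(
  startListening: () => Promise<string | null>, // resolves with a query when spoken
  showPromptBubbles: () => void,
  defaultIntervalMs = 5_000, // assumed default; the disclosure leaves this open
): Promise<string | null> {
  const timeout = new Promise<null>((resolve) =>
    setTimeout(() => resolve(null), defaultIntervalMs),
  );
  const query = await Promise.race([startListening(), timeout]);
  if (query === null) {
    showPromptBubbles(); // no input within the interval: show the "?" bubbles
  }
  return query;
}
```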
  • A GUI element (e.g., a bubble with a question mark) representing an audio search prompt may be activated to cause the audio prompt to be played to a user, thereby aiding the user in searching for content of interest. A GUI element representing an audio search prompt may be activated upon a user selection of the GUI element or upon an expiration of a specific time period from the appearance of the GUI element on the screen, which can result in a visual indication of such a self-activation (e.g., by displaying an automated popping of the corresponding bubble). Example audio search prompts can include, but are not limited to, the following (a sketch of playing such a prompt appears after this list):
      • What is your favorite animal?
      • What is your favorite video game?
      • What is your favorite sport?
      • What would you like to search for?
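  • A minimal way to play such a prompt on activation is sketched below, using the browser's speech synthesis API. This is one plausible mechanism chosen for illustration; the disclosure does not specify how the audio is rendered, and a pre-recorded audio clip would work equally well.

```typescript
// Speak an audio search prompt aloud when its GUI element is activated,
// whether by user selection or by the automated pop.
function playAudioPrompt(promptText: string): void {
  const utterance = new SpeechSynthesisUtterance(promptText);
  utterance.rate = 0.9; // slightly slower delivery for young listeners (a choice, not from the disclosure)
  window.speechSynthesis.speak(utterance);
}

// Usage: playAudioPrompt("What is your favorite animal?");
```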
  • The audio search prompts being played can be specific to the user. Some aspects of the determination and selection of customized audio search prompts for the user are discussed in more detail below.
  • In one implementation, when the user provides an audio response to the audio search prompt, this audio response defining a search query is provided (e.g., as audio data or text data resulting from voice recognition) to content sharing platform 120 and/or server 130 to perform a search. For example, after the audio search prompt, “What is your favorite animal?” is played to the user, the user may respond, “Giraffe!” In this case, the audio response “Giraffe” may be provided as the search query. The content sharing platform 120 and/or server 130 may then perform a search for media items relating to giraffes, which can then be returned to the user device 110 and presented to the user by the media viewer 112.
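  • The round trip from spoken response to search results might look like the following sketch. The endpoint path, query parameters, and response shape are assumptions for illustration only.

```typescript
// Forward the recognized audio response (e.g., "Giraffe") as a search query
// and return the matching media items for presentation by the media viewer.
interface MediaItem {
  id: string;
  title: string;
  thumbnailUrl: string;
}

async function searchFromVoiceResponse(responseText: string): Promise<MediaItem[]> {
  const res = await fetch(
    `/api/search?q=${encodeURIComponent(responseText)}&audience=children`, // hypothetical endpoint
  );
  if (!res.ok) throw new Error(`search failed: ${res.status}`);
  return (await res.json()) as MediaItem[];
}
```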
  • In one implementation, the content sharing platform 120 may be one or more computing devices (such as a rackmount server, a router computer, a server computer, a personal computer, a mainframe computer, a laptop computer, a tablet computer, a network connected television, a desktop computer, etc.), data stores (e.g., hard disks, memories, databases), networks, software components, and/or hardware components that may be used to provide a user with access to media items and/or provide the media items to the user. For example, the content sharing platform 120 may allow a user to consume, upload, search for, approve of (“like”), dislike, and/or comment on media items. The content sharing platform 120 may also include a website (e.g., a webpage) that may be used to provide a user with access to the media items.
  • In implementations of the disclosure, a “user” may be represented as a single individual. However, other implementations of the disclosure encompass a “user” being an entity controlled by a set of users and/or an automated source. For example, a set of individual users federated as a community in a social network may be considered a “user”. In another example, an automated consumer may be an automated ingestion pipeline, such as a topic channel, of the content sharing platform 120.
  • The content sharing platform 120 may include media items 121. Examples of a media item 121 can include, but are not limited to, digital video, digital movies, digital photos, digital music, website content, social media updates, electronic books (e-books), electronic magazines, digital newspapers, digital audio books, electronic journals, web blogs, Really Simple Syndication (RSS) feeds, electronic comic books, software applications, etc. In some implementations, media item 121 is also referred to as a content item.
  • A media item 121 may be consumed via the Internet and/or via a mobile device application. For brevity and simplicity, an online video (also hereinafter referred to as a video) is used as an example of a media item 121 throughout this document. As used herein, “media,” “media item,” “online media item,” “digital media,” “digital media item,” “content,” and “content item” can include an electronic file that can be executed or loaded using software, firmware or hardware configured to present the digital media item to an entity. In one implementation, the content sharing platform 120 may store the media items 121 using the data store 106. The content sharing platform 120 may also store playlists created by users, third parties or automatically. A playlist may include a list of content items (e.g., videos) that can be played (e.g., streamed) in sequential or shuffled order on the content sharing platform.
  • In one implementation, the server 130 may be one or more computing devices (e.g., a rackmount server, a server computer, etc.). The server 130 may be included in the content sharing platform 120 or be part of a different system. The server 130 may host a voice search system 140. The voice search system 140 enables the identification of audio search prompts to help identify, curate, and present content appropriate for children, in implementations of the disclosure. Content appropriate for children may refer to one or more content items that are safe (e.g., not mature, violent or explicit) and/or relevant (e.g., entertaining or interesting) for children.
  • The voice search system 140 may include several components (e.g., modules, sub-modules, applications, etc.) that can be executed by one or more processors of a machine hosting the voice search system 140. These components may include, for example, a search prompt unit 160, an age unit 162, a location unit 164, a time unit 165, and a search history unit 166. More or fewer components can be included in the voice search system 140 to provide the functionality described herein.
  • In one implementation, search prompt unit 160 determines which audio search prompts to send to user device 110 for a gamified voice search. Audio search prompts may be sent to the user device as audio data or text data that can be converted to audio search prompts at the user device 110. Prompts may be specifically determined on a per-user or per-device basis. Advantageously, customized prompts for users may allow for better retention of those users within the content sharing platform and a better game-like voice search experience for the user. Search prompt unit 160 may utilize age unit 162 to determine appropriate prompts to provide to a user, based on the user's age. Likewise, location unit 164 may aid in determining appropriate prompts based on a user's location (or a location of the user device), and time unit 165 may help in determining appropriate prompts based on a calendar time of the search. For example, if a search is performed during the school year, scholastic prompts may be determined to be more appropriate than leisurely prompts. Or, if the search is performed around the holidays, holiday-themed prompts may be determined to be appropriate. In one implementation, search history unit 166 may aid in the procurement of prompts based on the search history of a user (or the search history associated with the user device 110). For example, based on a user's search history, it may be determined that the user enjoys content about a particular video game. Search history unit 166 may identify other prompts related to the same video game to provide to the user. The prompts may be compiled and stored in a database (e.g., a database of data store 106), with rankings that represent the relevancy or appropriateness to a user. In one implementation, the prompts may be stored on user device 110.
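  • A skeletal version of such a ranked prompt store appears below. A map stands in for the database (e.g., data store 106), and the ranking field is an assumed per-user relevancy score.

```typescript
// Sketch of the prompt store: prompts are kept with per-user rankings that
// represent relevancy or appropriateness, and the top-ranked ones are
// returned for delivery to the user device.
interface RankedPrompt {
  text: string;
  ranking: number; // higher = more relevant to this user
}

const promptsByUser = new Map<string, RankedPrompt[]>();

function topPromptsForUser(userId: string, count: number): string[] {
  const ranked = promptsByUser.get(userId) ?? [];
  return [...ranked]
    .sort((a, b) => b.ranking - a.ranking)
    .slice(0, count)
    .map((p) => p.text);
}
```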
  • It should be noted that functions described in one implementation as being performed by the content sharing platform 120 can also be performed on the user device 110 in other implementations, if appropriate. In addition, the functionality attributed to a particular component can be performed by different or multiple components operating together. The content sharing platform 120 can also be accessed as a service provided to other systems or devices through appropriate application programming interfaces, and thus is not limited to use in websites.
  • Although implementations of the disclosure are discussed in terms of content sharing platforms and promoting social network sharing of a content item on the content sharing platform, implementations may also be generally applied to any type of social network providing connections between users.
  • In situations in which the systems discussed here collect personal information about users, or may make use of personal information, the users may be provided with an opportunity to control whether the content sharing platform 120 collects user information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current location), or to control whether and/or how to receive content from the content server that may be more relevant to the user. In addition, certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and used by the content sharing platform 120.
  • FIG. 2 is a flow diagram illustrating a method for gamifying voice search, according to an implementation. The method 200 may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof. In one embodiment, media viewer 112 on user device 110 performs method 200.
  • Referring to FIG. 2, at block 201, processing logic receives a user request to perform a search associated with an audio input. The user request may be a result of the user clicking on a GUI element (also referred to as a voice search indicator) in search interface 111 on the user device 110 that represents a voice search. In one embodiment, such a voice search GUI element may be visually represented as a microphone, a person speaking, a question mark, or any other graphical representation of a voice search. From the time that the user activates the voice search GUI element, the user device may be actively listening for an audio input provided by the user for the voice search. At block 202, processing logic determines whether the audio input defining a search query has been provided during a first time interval. The first time interval may be predefined or customizable by a user. If the audio input defining the search query is received within the first time interval, processing logic may request a search (e.g., by sending the search query to a server tasked with performing the search) based on the search query at block 209, and receive the results of the search from the server. At block 210, processing logic presents the search results including one or more media items on the user device 110.
  • Otherwise, if a search query was not received within the predefined time interval as determined at block 202, processing logic displays, in the search interface 111 on the user device 110, one or more GUI elements representing one or more audio voice prompts pertaining to the voice search at block 204. For example, processing logic may display GUI elements in the form of bubbles, floating around the screen of the user device 110. Each bubble may represent a single audio prompt that corresponds to a voice search. For example, one of the bubbles may correspond to the prompt “What is your favorite animal?”, and another bubble may represent the prompt “What is your favorite sport?” In another implementation, the individual GUI elements do not represent predefined audio search prompts, but are merely placeholders for audio search prompts. In this way, audio search prompts may be provided in a particular order regardless of which GUI element the user activates first.
  • At block 205, processing logic determines whether one of the GUI elements is selected by the user before a second predefined time interval has expired from the appearance of the GUI elements in the search interface 111. It should be noted that once the user request to perform the voice search is received from the user at block 201, processing logic may continue actively listening for audio input of the user for a third time interval. The third time interval may be predefined and/or customizable by a user. By listening for audio input of the user even after the first predefined time interval expires and the GUI elements representing prompts are displayed, processing logic allows the user to provide such an audio input at any time, without any further manual interaction (e.g., via a touch input or keyboard input) with the search interface 111 on the user device.
  • If a GUI element is selected by the user before the second predefined time interval expires, processing logic plays an audio search prompt corresponding to the activated GUI element at block 206. As discussed above, audio search prompts may be specific to a user. Thus, the user has a unique and customized gamified voice search experience tailored specifically to assist the user in performing voice search.
  • If none of the GUI elements displayed at block 204 is selected within the second predefined time interval, processing logic automatically (without any user interaction) activates a GUI element at block 207, such as by visually illustrating an automated popping of a corresponding bubble. In one implementation, processing logic may determine which GUI element should be automatically activated in a random (non-deterministic) manner. In another implementation, processing logic may determine which GUI element should be automatically activated based on rankings of associated audio search prompts. For example, if five GUI elements are displayed, representing five audio search prompts, processing logic may activate the GUI element corresponding to the audio search prompt associated with a topic that is most likely to be of interest to the user. Audio search prompts may be ranked according to any number of attributes, including, but not limited to, predicted interest to the user, frequency of appearance (i.e., how many times this audio search prompt has been provided to the user before), etc.
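  • The two activation policies described above (random versus ranking-based) can be contrasted in a short sketch; the interest scores and the frequency penalty weight below are illustrative assumptions.

```typescript
// Choose which displayed GUI element to auto-activate when the second time
// interval expires: either at random, or by ranking the associated prompts.
interface DisplayedPrompt {
  elementId: string;
  predictedInterest: number; // predicted interest to the user
  timesShown: number;        // how often this prompt was provided before
}

function chooseAutoActivation(
  prompts: DisplayedPrompt[],
  policy: "random" | "ranked",
): DisplayedPrompt {
  if (policy === "random") {
    return prompts[Math.floor(Math.random() * prompts.length)];
  }
  // Rank by predicted interest, lightly penalizing frequently shown prompts.
  const score = (p: DisplayedPrompt) => p.predictedInterest - 0.1 * p.timesShown;
  return prompts.reduce((best, p) => (score(p) > score(best) ? p : best));
}
```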
  • Once a GUI element is automatically activated at block 207, method 200 continues to block 206 where the audio voice prompt corresponding to the activated GUI element is played, as discussed above.
  • At block 208, processing logic receives audio input of the user in response to the audio voice prompt. The audio input provided by the user defines a query for the search. In one implementation, the audio input is a direct response to the question posed by the audio search prompt. For example, audio input of “Giraffe!” may be received as an answer to the audio search prompt “What is your favorite animal?” In another embodiment, the audio input may not be a logical answer to the question posed by the audio search prompt. For example, in response to the prompt, “What is your favorite animal?” the audio input may be “Baseball!” In such a case, the audio input “Baseball!” may still be used to define the query for the search, even though it does not directly answer the provided prompt. At block 209, processing logic may request a search (e.g., by sending the search query to a server tasked with performing the search) based on the search query at block 208 and receive the results of the search from the server. At block 210, processing logic presents the search results including one or more media items. The media items may be presented by media viewer 112 on the user device 110.
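  • Where voice recognition happens on the user device, capturing the response might resemble the browser-based sketch below. The Web Speech recognition API is one possible mechanism, used here only for illustration; note that whatever is transcribed becomes the query, even a non-answer such as “Baseball!”

```typescript
// Capture the child's spoken response and pass the transcript on as the
// search query, regardless of whether it answers the prompt.
function captureSpokenQuery(onQuery: (query: string) => void): void {
  const Recognition =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  const recognizer = new Recognition();
  recognizer.lang = "en-US";
  recognizer.onresult = (event: any) => {
    const transcript: string = event.results[0][0].transcript; // best hypothesis
    onQuery(transcript);
  };
  recognizer.start();
}
```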
  • FIG. 3 is a flow diagram illustrating a method for identifying search prompts for a gamified voice search, according to some implementations. The method 300 may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof. In one embodiment, voice search system 140 of FIG. 1 performs method 300.
  • Referring to FIG. 3, at block 302, processing logic receives, from a user device (e.g., user device 110 of FIG. 1), an indication that an audio input for a voice search has not been received during a predefined time interval. At block 304, processing logic determines one or more search prompts specific to a user of the user device. Determining search prompts may involve determining rankings of stored search prompts based on one or more characteristics including, for example, an age of the user, a location of the user (or user device), seasonal timing of the search, the search history of the user, etc. Characteristics like those listed above may be combined (e.g., in a machine learning algorithm) to determine an ordered list of prompts specifically tailored to a particular user. The ranking of prompts may change as the underlying characteristics change. Furthermore, new prompts may be continually determined based on updated user characteristics.
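  • As a stand-in for the machine learning algorithm mentioned above, the sketch below combines per-characteristic match scores with a simple linear model to produce an ordered prompt list; the feature names and weights are invented for illustration.

```typescript
// Rank stored prompts by a weighted combination of characteristic matches
// (age, location, seasonal timing, search history). Re-running this as the
// underlying characteristics change yields the updated ordering.
interface PromptFeatures {
  ageMatch: number; // each feature normalized to [0, 1]
  locationMatch: number;
  seasonMatch: number;
  historyMatch: number;
}

function rankPrompts<T extends { features: PromptFeatures }>(prompts: T[]): T[] {
  const w = { ageMatch: 0.4, locationMatch: 0.1, seasonMatch: 0.2, historyMatch: 0.3 };
  const score = (f: PromptFeatures) =>
    w.ageMatch * f.ageMatch +
    w.locationMatch * f.locationMatch +
    w.seasonMatch * f.seasonMatch +
    w.historyMatch * f.historyMatch;
  return [...prompts].sort((a, b) => score(b.features) - score(a.features));
}
```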
  • At block 306, processing logic may provide the one or more search prompts specific to the user for playing to the user of the user device. In one implementation, prompts are provided one at a time, or several at a time, to be played on-demand on the user device. In other implementations, prompts may be provided for storage on the user device.
  • At block 308, processing logic receives, from the user device, a search query based on audio input provided by the user in response to one of the search prompts, and at block 310, processing logic searches for one or more media items based on the search query, and returns the search result to the user device.
  • FIG. 4 illustrates an example gamified voice search user interface 400 corresponding to a voice search activation stage, according to an implementation. The gamified voice search interface 400 may be presented on a user device and may include a GUI element 402 that represents a voice search initiation. If activated by a user, voice search GUI element 402 may initiate a voice search option. In some implementations, GUI element 402 may depict a microphone, a person speaking, or other graphical representation of a voice search. Once activated, GUI element 402 may be transformed in some way (e.g., it may be animated by pulsing, bouncing, changing colors, etc.) to indicate that a microphone is currently activated and that listening for a voice search query has started.
  • The gamified voice search interface 400 may also include one or more GUI elements 404 that represent audio search prompts. GUI elements 404 may depict a question mark (as shown) or some other graphical representation of a search prompt. In one implementation, GUI elements 404 are animated, floating around on the screen of the user device. GUI elements 404 may collide with and bounce off of each other and off of other GUI elements (e.g., GUI element 402). The gamified voice search interface 400 may also include a GUI element 406 that when activated, allows a user to enter a textual search mode. A user may also activate a GUI element 408 to go back to a previous screen of the application providing the gamified voice search interface 400.
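  • One simple way to realize the floating-and-bouncing animation is a per-frame position update with edge reflection, sketched below. Bubble-to-bubble collisions are omitted for brevity, and nothing here is prescribed by the disclosure.

```typescript
// Advance each bubble by its velocity once per animation frame, reflecting it
// off the screen edges so bubbles appear to float and bounce.
interface Bubble {
  x: number;
  y: number;
  vx: number;
  vy: number;
  r: number; // radius
}

function stepBubbles(bubbles: Bubble[], width: number, height: number): void {
  for (const b of bubbles) {
    b.x += b.vx;
    b.y += b.vy;
    if (b.x - b.r < 0 || b.x + b.r > width) b.vx = -b.vx;  // left/right edges
    if (b.y - b.r < 0 || b.y + b.r > height) b.vy = -b.vy; // top/bottom edges
  }
}
```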
  • FIG. 5 illustrates an example gamified voice search user interface 500 corresponding to a search prompt activation stage, according to implementations of the disclosure. The gamified voice search interface 500 includes a representation 502 of an activated GUI element 404 of FIG. 4 that is animated in response to the activation (caused by user selection or automated activation as described above). For example, an activated GUI element 404 may “pop” and/or display “waves” extending outward, as shown by representation 502. Interface 500 may also display a search prompt 504 in textual form. The search prompt 504 may also be played as an audio prompt (e.g., for young users who cannot read or otherwise understand the textual prompt). The search prompt 504 may be displayed in textual form at substantially the same time as the audio prompt is playing.
  • FIG. 6 illustrates another example gamified voice search user interface 600 corresponding to a search prompt activation stage, according to implementations of the disclosure. As seen in the gamified voice search interface 600, an activated GUI element of FIG. 4 and FIG. 5 is continuing its animation by expanding outwards to indicate that it was activated. The gamified voice search interface 600 displays full text of search prompt 604. In one implementation, the audio prompt associated with the textual search prompt 604 completes its play when the textual search prompt 604 finishes its progressive, animated display.
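  • The synchronized finish of the audio prompt and the animated text can be approximated by pacing the character reveal to the audio duration, as in the sketch below; the DOM hook and the timing source are assumptions.

```typescript
// Reveal the textual search prompt progressively so that it completes at
// roughly the same moment the corresponding audio prompt finishes playing.
function revealPromptText(
  element: HTMLElement,
  text: string,
  audioDurationMs: number,
): void {
  const perCharMs = audioDurationMs / text.length;
  let shown = 0;
  const timer = setInterval(() => {
    element.textContent = text.slice(0, ++shown);
    if (shown >= text.length) clearInterval(timer);
  }, perCharMs);
}
```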
  • FIG. 7 illustrates an example gamified voice search user interface 700 corresponding to a search query definition stage, according to implementations of the disclosure. The gamified voice search interface 700 demonstrates a search prompt 702, which can visually indicate that a response to the search prompt 702 has been received. In one implementation, the search prompt 702 is displayed in a faded color or a different color when a response is received. The response may be received as audio input, and may be shown in textual form 704 on the gamified search interface 700 (e.g., at substantially the same time as a user is providing the audio response).
  • FIG. 8 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 800 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative implementations, the machine may be connected (e.g., networked) to other machines in a local area network (LAN), an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a network connected television, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. In one implementation, computer system 800 may be representative of a server, such as server 130, executing a voice search system 140, as described with respect to FIGS. 1-7.
  • The exemplary computer system 800 includes a processing device 802, a main memory 804 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 806 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 818, which communicate with each other via a bus 808. Any of the signals provided over various buses described herein may be time multiplexed with other signals and provided over one or more common buses. Additionally, the interconnection between circuit components or blocks may be shown as buses or as single signal lines. Each of the buses may alternatively be one or more single signal lines and each of the single signal lines may alternatively be buses.
  • Processing device 802 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computer (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. Processing device 802 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 802 is configured to execute processing logic 826 for performing the operations and steps discussed herein.
  • The computer system 800 may further include a network interface device 822. The computer system 800 also may include a video display unit 810 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 812 (e.g., a keyboard), a cursor control device 814 (e.g., a mouse), and a signal generation device 820 (e.g., a speaker).
  • The data storage device 818 may include a computer-readable storage medium 824 (also referred to as a machine-readable storage medium), on which is stored one or more sets of instructions 826 (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions 826 may also reside, completely or at least partially, within the main memory 804 and/or within the processing device 802 during execution thereof by the computer system 800, the main memory 804 and the processing device 802 also constituting machine-readable storage media. The instructions 826 may further be transmitted or received over a network 874 via the network interface device 822.
  • The computer-readable storage medium 824 may also be used to store instructions to perform a method for identifying content appropriate for children algorithmically without human interaction, as described herein. While the computer-readable storage medium 824 is shown in an exemplary implementation to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. A machine-readable medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read-only memory (ROM); random-access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or another type of medium suitable for storing electronic instructions.
  • The preceding description sets forth numerous specific details such as examples of specific systems, components, methods, and so forth, in order to provide a good understanding of several implementations of the present disclosure. It will be apparent to one skilled in the art, however, that at least some implementations of the present disclosure may be practiced without these specific details. In other instances, well-known components or methods are not described in detail or are presented in simple block diagram format in order to avoid unnecessarily obscuring the present disclosure. Thus, the specific details set forth are merely exemplary. Particular implementations may vary from these exemplary details and still be contemplated to be within the scope of the present disclosure.
  • Reference throughout this specification to “one implementation” or “an implementation” means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation. Thus, the appearances of the phrase “in one implementation” or “in an implementation” in various places throughout this specification are not necessarily all referring to the same implementation. In addition, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.”
  • Although the operations of the methods herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In another implementation, instructions or sub-operations of distinct operations may be performed in an intermittent and/or alternating manner.

Claims (20)

What is claimed is:
1. A method comprising:
receiving, by a processing device, a request to initiate a search;
providing for display a plurality of graphical user interface elements that represent a plurality of audio prompts pertaining to the search;
in response to determining that a user selection of the graphical user interface elements is absent during a predefined time interval, activating one of the graphical user interface elements;
playing an audio prompt of the activated graphical user interface element, wherein the audio prompt is provided prior to receiving audio input associated with the search;
receiving, by the processing device, audio input in response to the audio prompt, the audio input providing data for the search; and
initiating, by the processing device, the search based on the audio input.
2. The method of claim 1, wherein providing for display the plurality of graphical user interface elements is in response to determining that the audio input is absent during a predefined time interval.
3. The method of claim 1, further comprising in response to the activating, displaying a visual prompt corresponding to the activated graphical user interface element and playing the audio prompt, wherein the visual prompt and the audio prompt comprise a question.
4. The method of claim 1, wherein the plurality of audio prompts are based on an age of a user, a location of a user device, a search history of the user, or a calendar time of the search.
5. The method of claim 1, wherein the plurality of graphical user interface elements comprise a second graphical user interface element that is activated when a user selects the second graphical user interface element.
6. The method of claim 1, wherein the activated graphical user interface element is automatically activated in response to detecting an absence of user input during the predefined time interval.
7. The method of claim 1, further comprising:
after playing the audio prompt, enabling a user to provide the audio input within a third time interval without any manual interaction with a graphical user interface; and
providing for presentation a search result for the search based on the audio input, wherein the search result comprises one or more media items.
8. The method of claim 7, wherein each media item of the one or more media items is associated with a rating that indicates appropriateness for children.
9. The method of claim 1, further comprising determining one or more audio prompts that are specific to a user, wherein determining the one or more audio prompts comprises:
receiving an age of the user; and
determining the one or more audio prompts based on the age.
10. The method of claim 1, further comprising determining one or more audio prompts that are specific to a user, wherein determining the one or more audio prompts comprises:
determining at least one of a location of a device of the user, a search history of the user, or a calendar time of the search; and
determining the one or more audio prompts based on the location, the search history, or the calendar time of the search.
11. The method of claim 10, wherein each media item of the one or more media items has a rating that indicates appropriateness of a respective media item for children.
12. A system comprising:
a memory; and
a processing device coupled to the memory, wherein the processing device is to:
receive a request to initiate a search;
provide for display a plurality of graphical user interface elements that represent a plurality of audio prompts pertaining to the search;
responsive to determining that a user selection of the graphical user interface elements is absent during a predefined time interval, activate one of the graphical user interface elements;
play an audio prompt of the activated graphical user interface element, wherein the audio prompt is provided prior to receiving audio input associated with the search;
receive audio input in response to the audio prompt, the audio input providing data for the search; and
initiate the search based on the audio input.
13. The system of claim 12, wherein to provide for display the processing device is to provide for display the plurality of graphical user interface elements in response to determining that the audio input associated with the search is absent during a predefined time interval.
14. The system of claim 12, wherein the plurality of audio prompts are based on an age of a user of a user device, a location of the user device, a search history of a user, or a calendar time of the search.
15. The system of claim 12, wherein the graphical user interface elements comprise a second graphical user interface element that is activated when a user clicks on the second graphical user interface element.
16. A non-transitory machine-readable storage medium storing instructions which, when executed, cause a processing device to:
receive a request to initiate a search;
provide for display a plurality of graphical user interface elements that represent a plurality of audio prompts pertaining to the search;
in response to determining that a user selection of the graphical user interface elements is absent during a predefined time interval, activate one of the graphical user interface elements;
play an audio prompt of the activated graphical user interface element, wherein the audio prompt is provided prior to receiving audio input associated with the search;
receive audio input in response to the audio prompt, the audio input providing data for the search; and
initiate the search based on the audio input.
17. The non-transitory machine-readable storage medium of claim 16, wherein to provide for display the processing device is to provide for display the plurality of graphical user interface elements in response to a determination that the audio input is absent during a predefined time interval.
18. The non-transitory machine-readable storage medium of claim 16, wherein the instructions further cause the processing device to, in response to the activating, display a visual prompt corresponding to the activated graphical user interface element and play the audio prompt.
19. The non-transitory machine-readable storage medium of claim 16, wherein the instructions further cause the processing device to:
after playing the audio prompt, receive the audio input within a third time interval without any manual interaction with a graphical user interface; and
provide for presentation a search result for the search based on the audio input, wherein the search result comprises one or more media items.
20. The non-transitory machine-readable storage medium of claim 19, wherein each media item of the one or more media items is associated with a rating that indicates appropriateness for children.
US16/504,486 2016-06-08 2019-07-08 Gamifying voice search experience for children Abandoned US20190332353A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/504,486 US20190332353A1 (en) 2016-06-08 2019-07-08 Gamifying voice search experience for children

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/176,654 US10346129B1 (en) 2016-06-08 2016-06-08 Gamifying voice search experience for children
US16/504,486 US20190332353A1 (en) 2016-06-08 2019-07-08 Gamifying voice search experience for children

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/176,654 Continuation US10346129B1 (en) 2016-06-08 2016-06-08 Gamifying voice search experience for children

Publications (1)

Publication Number Publication Date
US20190332353A1 true US20190332353A1 (en) 2019-10-31

Family

ID=67106484

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/176,654 Active 2037-01-24 US10346129B1 (en) 2016-06-08 2016-06-08 Gamifying voice search experience for children
US16/504,486 Abandoned US20190332353A1 (en) 2016-06-08 2019-07-08 Gamifying voice search experience for children

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/176,654 Active 2037-01-24 US10346129B1 (en) 2016-06-08 2016-06-08 Gamifying voice search experience for children

Country Status (1)

Country Link
US (2) US10346129B1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019172704A1 (en) * 2018-03-08 2019-09-12 Samsung Electronics Co., Ltd. Method for intent-based interactive response and electronic device thereof
CN111638787B (en) * 2020-05-29 2023-09-01 百度在线网络技术(北京)有限公司 Method and device for displaying information

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060143307A1 (en) * 1999-03-11 2006-06-29 John Codignotto Message publishing system
US20010049688A1 (en) * 2000-03-06 2001-12-06 Raya Fratkina System and method for providing an intelligent multi-step dialog with a user
US7539656B2 (en) * 2000-03-06 2009-05-26 Consona Crm Inc. System and method for providing an intelligent multi-step dialog with a user
US6714222B1 (en) * 2000-06-21 2004-03-30 E2 Home Ab Graphical user interface for communications
US20080232277A1 (en) * 2007-03-23 2008-09-25 Cisco Technology, Inc. Audio sequestering and opt-in sequences for a conference session
US20140040748A1 (en) * 2011-09-30 2014-02-06 Apple Inc. Interface for a Virtual Digital Assistant
US20150006564A1 (en) * 2013-06-27 2015-01-01 Google Inc. Associating a task with a user based on user selection of a query suggestion
US20190246238A1 (en) * 2014-04-11 2019-08-08 Flaregun Inc. Apparatus, systems and methods for visually connecting people
US20190057698A1 (en) * 2015-06-01 2019-02-21 AffectLayer, Inc. In-call virtual assistant
US20170169113A1 (en) * 2015-12-11 2017-06-15 Quixey, Inc. Providing Search Results based on an Estimated Age of a Current User of a Mobile Computing Device
US20170242657A1 (en) * 2016-02-22 2017-08-24 Sonos, Inc. Action based on User ID

Also Published As

Publication number Publication date
US10346129B1 (en) 2019-07-09

Similar Documents

Publication Publication Date Title
JP6564008B2 (en) Suggest search results to the user before receiving a search query from the user
KR101921816B1 (en) User interactions using digital content
US9852648B2 (en) Extraction of knowledge points and relations from learning materials
US11539992B2 (en) Auto-adjust playback speed and contextual information
US20130297599A1 (en) Music management for adaptive distraction reduction
US20140074648A1 (en) Portion recommendation for electronic books
CN112292675A (en) Assisting a computer to interpret native language input with prominence ranking of entities and tasks
US20140279993A1 (en) Clarifying User Intent of Query Terms of a Search Query
US11049029B2 (en) Identifying content appropriate for children algorithmically without human intervention
US9223830B1 (en) Content presentation analysis
US8949874B1 (en) Evaluating media channels
JP2023184563A (en) Comprehensibility-based identification of educational content of multiple content types
US20190332353A1 (en) Gamifying voice search experience for children
US8935299B2 (en) Identifying relevant data for pages in a social networking system
Duong SEO management: Methods and techniques to achieve success
RU2586249C2 (en) Search request processing method and server
EP3458977A1 (en) Facilitating efficient searching using message exchange threads
US20210182700A1 (en) Content item selection for goal achievement
RU2605001C2 (en) Method for processing user's search request and server used therein
US20210124464A1 (en) Online engagement platform for video creators
US11334611B2 (en) Content item summarization with contextual metadata
KR20170123660A (en) Algorithm radio for arbitrary text queries
US20240095273A1 (en) Actionable suggestions for media content
Rowe et al. Communicating Nutrition, Food, and Health Information: New Paradigms Revisited
Scheets It only takes a spark

Legal Events

Date Code Title Description
AS Assignment

Owner name: GOOGLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JAINI, SHIVA;HAILE, SATOE;SCHURMAN, AARON;SIGNING DATES FROM 20160524 TO 20160607;REEL/FRAME:049699/0835

Owner name: GOOGLE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:049706/0219

Effective date: 20170930

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION