US20200026742A1 - Integrating communications into a social graph - Google Patents

Integrating communications into a social graph

Info

Publication number
US20200026742A1
Authority
US
United States
Prior art keywords
trigger
profile
query
sentence
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/442,843
Inventor
David Gerard Ledet
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Open Invention Network LLC
Original Assignee
Open Invention Network LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Open Invention Network LLC filed Critical Open Invention Network LLC
Priority to US16/442,843
Assigned to OPEN INVENTION NETWORK LLC. Assignors: LEDET, DAVID GERARD
Publication of US20200026742A1
Legal status: Abandoned

Classifications

    • G06Q 10/1095 Calendar-based scheduling for persons or groups; meeting or appointment
    • G06F 11/3457 Recording or statistical evaluation of computer activity; performance evaluation by simulation
    • G01C 21/005 Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C 21/20 Instruments for performing navigational calculations
    • G01C 21/206 Instruments specially adapted for indoor navigation
    • G01C 21/3626 Input/output arrangements for on-board computers; details of the output of route guidance instructions
    • G06F 16/23 Updating structured data, e.g. relational data
    • G06F 16/2358 Change logging, detection, and notification
    • G06F 16/24565 Query execution applying rules; triggers; constraints
    • G06F 16/24578 Query processing with adaptation to user needs using ranking
    • G06F 16/29 Geographical information databases
    • G06F 16/687 Retrieval of audio data using metadata with geographical or spatial information, e.g. location
    • G06F 16/93 Document management systems
    • G06F 16/9536 Web search customisation based on social or collaborative filtering
    • G06F 40/166 Text editing, e.g. inserting or deleting
    • G06F 9/547 Interprogram communication; remote procedure calls [RPC]; web services
    • H04L 63/1416 Network security; event detection, e.g. attack signature detection
    • H04L 63/1441 Network security; countermeasures against malicious traffic
    • H04L 67/55 Push-based network services
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G10L 17/00 Speaker identification or verification
    • G10L 25/51 Speech or voice analysis specially adapted for comparison or discrimination
    • H04L 63/1408 Network security; detecting malicious traffic by monitoring network traffic
    • H04L 67/01 Protocols for network services or applications
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network

Definitions

  • This application generally relates to establishing connections, and more specifically to connecting users with other users that may offer needed expertise in an area.
  • An individual and/or a computer program/system may know that there are people out there having a deep understanding of the technological details surrounding a project that has been undertaken, but how does one go about finding these people? Furthermore, what if it were possible to connect with other people having the needed technical expertise without ever specifically reaching out to them?
  • the current application interacts with a user's communication methods, allowing the user to provide a trigger (such as an input stream of characters, a color, a spoken phrase, etc.) that informs the current application to initiate the logic to seek others that may be able to provide technical assistance.
  • An example operation may include a method comprising one or more of receiving, from a device, a first data containing a trigger, obtaining a trigger sentence from the first data, analyzing the trigger sentence to determine a second data including at least one keyword, sending a profile query, including the second data, to an external server, determining, by the external server, a profile result including at least one profile related to the profile query, receiving a response from the external server including the profile result, and presenting the profile result to the device.
  • Another example operation may include a system comprising a device containing a processor and memory, wherein the processor is configured to perform one or more of receive a first data which contains a trigger, obtain a trigger sentence from the first data, analyze the trigger sentence to determine a second data which includes at least one keyword, send a profile query, which includes the second data, to an external server, determine, by the external server, a profile result which includes at least one profile related to the profile query, receive a response from the external server which includes the profile result, and present the profile result to the device.
  • A further example operation may include a non-transitory computer readable medium comprising instructions that, when read by a processor, cause the processor to perform one or more of receiving, from a device, a first data containing a trigger, obtaining a trigger sentence from the first data, analyzing the trigger sentence to determine a second data including at least one keyword, sending a profile query, including the second data, to an external server, determining, by the external server, a profile result including at least one profile related to the profile query, receiving a response from the external server including the profile result, and presenting the profile result to the device.
  • FIG. 1 is a system diagram in one embodiment of the current application.
  • FIG. 2 is a block diagram of a computing system in one embodiment of the current application.
  • FIG. 3 is a flowchart of the speech conversion process in one embodiment of the current application.
  • FIG. 4 is a snapshot of a GUI showing a configuration module in one embodiment of the current application.
  • FIG. 5 is a flowchart of the current functionality in one embodiment of the current application.
  • While the term “message” may have been used in the description of embodiments, the application may be applied to many types of network data, such as packet, frame, datagram, etc.
  • The term “message” also includes packet, frame, datagram, and any equivalents thereof.
  • While certain types of messages and signaling may be depicted in exemplary embodiments, they are not limited to a certain type of message, and the application is not limited to a certain type of signaling.
  • FIG. 1 illustrates a block diagram of one embodiment of the current application 100 in accordance with the present disclosure.
  • the system includes at least one client device 102 .
  • a client device may be at least one of a mobile device (102a, 102b), a tablet, a laptop device, and/or a personal desktop computer.
  • the client device is communicably coupled to the network 104 .
  • other types of devices might be used with the present application.
  • a PDA, an MP3 player, or any other wireless device, a gaming device (such as a hand-held system or home-based system), any computer wearable device, and the like (including a P.C. or other wired device) that may transmit and receive information may be used with the present application.
  • the client device may execute a user browser used to interface with the network 104 , an email application used to send and receive emails, a text application used to send and receive text messages, and many other types of applications. Communication may occur between the client device and the network 104 via applications executing on said device and may be applications downloaded via an application store or may reside on the client device by default. Additionally, communication may occur on the client device wherein the client device's operating system performs the logic to communicate without the use of either an inherent or downloaded application.
  • the system 100 includes a network 104 (e.g., the Internet or Wide Area Network (WAN)).
  • the network may be the Internet or any other suitable network for the transmitting of data from a source to a destination.
  • a server 106 exists in the system 100, communicably coupled to the network 104, and may be implemented as multiple instances wherein the multiple instances may be joined in a redundant network, or may be singular in nature. Furthermore, the server may be connected to database 108 wherein tables in the database are utilized to contain the elements of the stored data in the current application, utilizing Structured Query Language (SQL), for example.
  • the database may reside remotely to the server coupled to the network 104 and may be redundant in nature.
  • FIG. 2 is a block diagram illustrating a computer system 200 upon which embodiments of the current invention may be implemented, for example.
  • the computer system 200 may include a bus 206 or other communication mechanism for communicating information, and a hardware processor 205 coupled with bus 206 for processing information.
  • Hardware processor 205 may be, for example, a general-purpose microprocessor.
  • Computer system 200 may also include main memory 208 , such as a random access memory (RAM) or other dynamic storage device, coupled to bus 206 for storing information and instructions to be executed by a processor 205 .
  • Main memory 208 also may be used for storing temporary variables or other intermediate information during the execution of instructions to be executed by a processor 205 .
  • Such instructions, when stored in the non-transitory storage media accessible to processor 205, may render computer system 200 into a special-purpose machine that is customized to perform the operations specified in the previously stored instructions.
  • Computer system 200 may also include a read only memory (ROM) 207 or other static storage device, which is coupled to bus 206 for storing static information and instructions for processor 205 .
  • a storage device 209 such as a magnetic disk or optical disk, may be provided and coupled to bus 206 , which stores information and instructions.
  • Computer system 200 may also be coupled via bus 206 to a display 212 , such as a cathode ray tube (CRT), a light-emitting diode (LED), etc. for displaying information to a computer user.
  • An input device 211 such as a keyboard, including alphanumeric and other keys, is coupled to bus 206 , which communicates information and command selections to processor 205 .
  • Other user input devices may include cursor control 210, such as a mouse, a trackball, or cursor direction keys, which communicates direction information and command selections to processor 205 and controls cursor movement on display 212.
  • the techniques herein are performed by computer system 200 in response to a processor 205 executing one or more sequences of one or more instructions which may be contained in main memory 208 . These instructions may be read into main memory 208 from another storage medium, such as storage device 209 . Execution of the sequences of instructions contained in main memory 208 may cause processor 205 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry or embedded technology may be used in place of or in combination with software instructions.
  • Non-volatile media may include, for example, optical or magnetic disks, such as storage device 209 .
  • Volatile media may include dynamic memory, such as main memory 208 .
  • Common forms of storage media include, for example, a hard disk, solid state drive, magnetic tape, or other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge.
  • the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer.
  • the remote computer may load the instructions into its dynamic memory and send the instructions over a medium such as the Internet 202 .
  • Computer system 200 may also include a communication interface 204 coupled to bus 206 .
  • the communication interface may provide two-way data communication coupling to a network link, which is connected to a local network 201 .
  • a network link typically provides data communication through one or more networks to other data devices.
  • the network link may provide a connection through local network 201 to data equipment operated by an Internet Service Provider (ISP) 202 .
  • ISP 202 provides data communication services through the worldwide packet data communication network now commonly referred to as the “Internet” 202 .
  • Internet 202 uses electrical, electromagnetic, or optical signals that carry digital data streams.
  • the signals through the various networks and the signals on network link and through communication interface 204 , carrying the digital data to and from computer system 200 are example forms of transmission media.
  • Computer system 200 can send messages and receive data, including program code, through the network(s) 202 , the network link, and the communication interface 204 .
  • a server 203 may transmit a requested code for an application program through Internet 202 , local network 201 , and communication interface 204 .
  • Processor 205 can execute the received code as it is received, and/or stored in storage device 209 , or other non-volatile storage for execution at a later time.
  • the social network provides target member profiles of possible connections via a modifiable query in lieu of a “black box” approach in place today.
  • The search engine evaluates the exponential number of possible generalizations and identifies a subset to present to the searcher. Each of these generalizations should be useful (e.g., adding a significant number of additional results) and plausible (e.g., adding results that have some intuitive similarity to the initial results). If two generalizations are similar to one another, the search engine might present only the “better” generalization.
  • the '238 application seeks to provide a more granular approach wherein a user may “guide” the application to provide intelligent connection suggestions.
  • This granular approach includes at least the following functionality:
  • An individual and/or a computer program/system may know that there are people out there having a deep understanding of the technological details surrounding a project that has been undertaken, but how does one go about finding these people? Furthermore, what if it were possible to connect with other people having the needed technical expertise without ever specifically reaching out to them?
  • the current application interacts with a user's communication methods, allowing the user to provide a trigger (such as an input stream of characters, a color, a spoken phrase, etc.) that informs the current application to initiate the logic to seek others that may be able to provide technical assistance.
  • the said trigger is depicted herein with multiple embodiments, yet one versed in computer programming techniques will easily be able to recognize other types of triggers with other types of communication wherein the current application initiates the logic to connect with others without deviating from the scope of the current application.
  • A trigger informs the current application to begin processing the functionality to connect the user with others. Additionally, when a sentence is determined to be one for which the user is interested in finding other people to offer assistance (henceforth referred to as the trigger-sentence), the system begins to search for someone that is working on that technology and requests assistance.
  • the member's profile is updated with the content in the trigger-sentence.
  • the above table shows a list of some of the possible triggers in the current application in one possible implementation.
  • the first trigger is a text symbol: a series of characters that would normally not be used in normal conversation. Communication applications such as email and messaging may utilize this trigger.
  • the second trigger is a spoken phrase wherein any phrase that is repeated is determined to be a trigger for the current application when the speech is analyzed.
  • the current application captures words spoken into the device, such as in conversations, commands, etc.
  • the capturing of words is performed in the client device 102 using such technology normally used in smartphone functionality today.
  • Voice recognition is common and included in the operating system on many devices.
  • the current application utilizes similar functionality to receive voice input, transferring the voice into text, and finally into understood commands or text, for example.
  • the current application records all speech into the device and stores the recorded audio internally in the device.
  • the recorded audio is sent to a server for processing, such as server 106.
  • the current application records speech in the device when the device is in a voice call to another person.
  • voice-to-text functionality exists on the device, or can be accessed by the device, assuming that the mobile device is a phone with advanced capabilities (smartphone).
  • a smartphone is a mobile device containing many of the same functionalities similarly found on a desktop or laptop computer.
  • smartphones offer more advanced computing ability and connectivity than a contemporary phone, either traditional or mobile.
  • Speech recognition is built into many common mobile device SDKs.
  • In Android SDK 1.5, a library included in that release, called android.speech, allows speech recognition functionality.
  • Speech recognition is done through the RecognizerIntent.
  • the intent starts an activity that can prompt the user for speech and send it through the speech recognizer.
  • the code below starts an activity with an intent and the application of the current invention waits for the result:
  • Intent intent = new Intent("android.speech.action.RECOGNIZE_SPEECH");
    startActivityForResult(intent, 0);
  • The startActivityForResult method launches an Activity and returns the result. While speech recognition is performed, an overlay is displayed over the app, and when done, the results are returned for the activity to handle.
  • the voice recognition application that handles the intent processes the voice input, then passes the recognized string back to the application by calling the onActivityResult() callback.
  • Android supports two language models for speech analysis: free_form for dictation, and web_search for shorter, search-like phrases.
  • the invention of the current application utilizes the free_form model.
  • the code above first verifies that the target mobile device is able to interwork with the speech input, then uses startActivityForResult( ) to broadcast an intent requesting voice recognition, including the extra parameter specifying one of two language models.
  • the voice recognition application that handles the intent processes the voice input, then passes the recognized string back to your application by calling the onActivityResult() callback.
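  • As a concrete illustration, the following is a minimal sketch of this verify-then-recognize pattern inside an Android Activity, assuming the standard android.speech APIs; the class name and the hand-off to the trigger-detection logic are illustrative only:

        import android.app.Activity;
        import android.content.Intent;
        import android.content.pm.ResolveInfo;
        import android.speech.RecognizerIntent;
        import java.util.ArrayList;
        import java.util.List;

        public class VoiceCaptureActivity extends Activity {
            private static final int VOICE_RECOGNITION_REQUEST = 0;

            public void startVoiceRecognition() {
                // Verify that at least one activity on the device can handle speech input.
                List<ResolveInfo> activities = getPackageManager().queryIntentActivities(
                        new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
                if (activities.isEmpty()) {
                    return; // no speech recognizer is available on this device
                }
                Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
                // free_form is the dictation language model described above.
                intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
                startActivityForResult(intent, VOICE_RECOGNITION_REQUEST);
            }

            @Override
            protected void onActivityResult(int requestCode, int resultCode, Intent data) {
                if (requestCode == VOICE_RECOGNITION_REQUEST && resultCode == RESULT_OK) {
                    // Recognized strings, best match first.
                    ArrayList<String> matches =
                            data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                    // matches.get(0) would be handed to the trigger-detection logic.
                }
                super.onActivityResult(requestCode, resultCode, data);
            }
        }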
  • FIG. 3 depicts a flow of the speech conversion process 300 .
  • the application initiates the speech conversion process 302 .
  • the intent requesting voice recognition is registered with the system 304 . This allows the system to begin listening for the voice from the user, capturing the incoming speech and converting it to text.
  • a listener is set on an existing component to receive the response from the voice recognition 306 . This listener can receive the text from the converted speech 308 .
  • the result is received by the application 310 , such as the current application executing on the client device 102 wherein the application is able to process the result.
  • Sentences in the text may be determined by splitting the text into words and grouping the words into sentences at punctuation marks.
  • the user is able to speak a trigger, such as a repeated phrase, such that the current application executing on the client device 102 begins to process the sentence spoken immediately before the trigger.
  • the converted speech to text is sent to a server for processing, such as server 106 .
  • the current application executing on a client device 102 may interact with communication applications, such as an email, chat or messaging application such that the messages are examined.
  • the text is captured by the current application as the characters are entered into the device, as is commonly done in software development environments. Therefore, all text in any application is thereby received by the current application and processed therein.
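  • A minimal sketch of this per-keystroke capture on Android, assuming a TextWatcher attached to an EditText; the messageField and checkForTrigger names are hypothetical:

        messageField.addTextChangedListener(new android.text.TextWatcher() {
            @Override
            public void beforeTextChanged(CharSequence s, int start, int count, int after) { }

            @Override
            public void onTextChanged(CharSequence s, int start, int before, int count) {
                // Forward the current text to the trigger-detection logic on every keystroke.
                checkForTrigger(s.toString());
            }

            @Override
            public void afterTextChanged(android.text.Editable s) { }
        });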
  • There exist many third-party applications that perform text input on devices, called decipher applications.
  • the conversation in an email, chat, or messaging application is obtained via Application Program Interfaces (APIs) of the respective application.
  • email applications may expose an HTTP GET function that allows an application to obtain the full text of an email.
  • the message is converted in the client device 102 into indexable/searchable tokens utilizing an API such as the Java package org.apache.lucene.analysis.
  • This package implements tokenization—namely breaking of input text into small indexing elements or tokens.
  • Some of the other analysis tools included in the package are:
  • Stemming: replacing words by their stems. For instance, with English stemming “bikes” is replaced by “bike”; now the query “bike” can find both documents containing “bike” and those containing “bikes”.
  • Stop-word filtering: common words such as “the”, “and” and “a” rarely add any value to a search. Removing them shrinks the index size and increases performance. It may also reduce some “noise” and actually improve search quality.
  • Text normalization: stripping some or all accents and other character markings can make for better searching.
  • Synonym expansion: adding synonyms at the same token position as the current word can mean more accurate matching when users search with words in the synonym set.
  • For example, using a Lucene analyzer, the following code will tokenize a sentence:
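  • A minimal sketch of such tokenization, assuming a recent Lucene release and its StandardAnalyzer; the field name and sample sentence are illustrative:

        import java.io.StringReader;
        import org.apache.lucene.analysis.Analyzer;
        import org.apache.lucene.analysis.TokenStream;
        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

        public class TokenizeExample {
            public static void main(String[] args) throws Exception {
                try (Analyzer analyzer = new StandardAnalyzer()) {
                    TokenStream stream = analyzer.tokenStream(
                            "body", new StringReader("My iPhone will not sync with the server."));
                    CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
                    stream.reset();
                    while (stream.incrementToken()) {
                        System.out.println(term.toString()); // one indexable token per line
                    }
                    stream.end();
                    stream.close();
                }
            }
        }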
  • Keywords are a noun or group of nouns pertaining to the topic of the message, and may be determined using Natural Language Processing (NLP).
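  • As a simplified stand-in for full NLP, the sketch below filters stop words from the token list and treats the remaining tokens as candidate keywords; a production system would use part-of-speech tagging to keep only nouns and noun groups, and the stop-word list here is illustrative:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Set;

        public class KeywordExtractor {
            // Illustrative stop-word list; a real system would use a fuller one.
            private static final Set<String> STOP_WORDS =
                    Set.of("the", "and", "a", "my", "will", "not", "with", "is", "to");

            public static List<String> extractKeywords(List<String> tokens) {
                List<String> keywords = new ArrayList<>();
                for (String token : tokens) {
                    if (!STOP_WORDS.contains(token.toLowerCase())) {
                        keywords.add(token); // candidate keyword
                    }
                }
                return keywords;
            }
        }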
  • the current application executing on a device such as the client device 102 contains a configuration module wherein searches are configured, allowing for a more granular selection of results.
  • This configuration module is accessible via the current system.
  • the current system executing on the client device 102 contains a menu wherein the navigation of different parts of the application is possible.
  • This navigation element may be implemented by many various Graphical User Interface (GUI) components, such as dropdown components, tabbed components, voice detection, etc.
  • FIG. 4 shows a Graphical User Interface (GUI) of a configuration module in one implementation of the current application 400 executing on the client device 102 .
  • the GUI screenshot depicts the configuration options for the social network queries.
  • Other elements may be configured in a similar manner using various GUI components.
  • the connections configuration component 402 allows for the configuration of the results returned from the social network.
  • Radio buttons 406 and 408 allow each configuration element 404 to be selected. The radio buttons are programmed such that either a prefer radio button 406 or an omit radio button 408 can be selected, but not both.
  • If a prefer button 406 is selected for a component, then the results from that component are assigned a higher priority and are listed at the top. If an omit button 408 is selected for a component, then the results from that component are omitted from the results. If neither button 406 nor 408 is selected for a component, then the results are presented in normal fashion, as is currently performed.
  • a “Submit” button 410 exists on the bottom of the window wherein the selected entries are submitted and stored in the current application.
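  • A minimal sketch of how the stored prefer/omit selections might be applied to returned results; the Profile record, Preference enum, and source() accessor are illustrative, not part of the specification:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Map;

        public class ConnectionConfig {
            enum Preference { PREFER, OMIT, NONE }

            record Profile(String name, String source) { }

            static List<Profile> applyConfig(List<Profile> results, Map<String, Preference> config) {
                List<Profile> preferred = new ArrayList<>();
                List<Profile> normal = new ArrayList<>();
                for (Profile p : results) {
                    switch (config.getOrDefault(p.source(), Preference.NONE)) {
                        case OMIT -> { }                 // omitted components are dropped
                        case PREFER -> preferred.add(p); // preferred components rank first
                        case NONE -> normal.add(p);
                    }
                }
                preferred.addAll(normal);
                return preferred;
            }
        }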
  • the trigger-sentence is the sentence before the trigger. Regardless of the origin of the text (messaging or text converted from speech input), the client device 102 or the server 106 processes the resulting text, splitting the text into tokens that are then grouped into sentences. In another embodiment, the trigger-sentence may follow the trigger.
  • the text is analyzed by the current application such that the trigger characters “!*” are determined to be a trigger, wherein the sentence preceding the trigger characters is determined to be the trigger-sentence.
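  • A minimal sketch of this extraction, assuming the “!*” text trigger and approximating sentence boundaries with punctuation; the class and method names are illustrative:

        public class TriggerSentenceExtractor {
            // Returns the sentence immediately preceding the "!*" trigger,
            // or null if the trigger is absent.
            static String extractTriggerSentence(String text) {
                int at = text.indexOf("!*");
                if (at < 0) {
                    return null;
                }
                String before = text.substring(0, at).trim();
                // Split after sentence-ending punctuation followed by whitespace.
                String[] sentences = before.split("(?<=[.?!])\\s+");
                return sentences.length > 0 ? sentences[sentences.length - 1].trim() : null;
            }
        }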
  • FIG. 5 shows a block flow of the functionality of the current application executing on a device such as client device 102 in one implementation of the current application 500 .
  • the current description presents some of the embodiments of one current implementation of the application, but one versed in software development will be able to design and implement other, similar solutions without deviating from the scope of the current application.
  • Text is obtained via speech-to-text 504 with speech received 502 from a user speaking into the device, or from text typed by the user on the device.
  • the text is split into sentences 508 , which may occur following the tokenization of the text.
  • a trigger-sentence is obtained 510, such that the trigger-sentence is the sentence in the text preceding the trigger in the case of a text message, or the sentence preceding the trigger word in the case of incoming speech, as further disclosed herein.
  • the trigger-sentence is used as input to the analyze sentence functionality 512 .
  • the trigger-sentence is first parsed such that each word in the sentence is made into a token, a process referred to as tokenization 512, further disclosed herein.
  • keywords are obtained from the tokenized words, wherein the keywords are determined via logic such as NLP functionality, and stored either in the client device 102, a remote server 106, or a remote database 108 for further processing.
  • a query is performed with a social networking service using the previously determined keywords to query users with the same or similar expertise as the keywords 514 .
  • the query utilizes normal social network queries wherein users of the social network matching the data in the query are returned.
  • the current application utilizes APIs of social networks to perform the query, in one implementation.
  • the response of the social networking service is received at the client device and presented on said device.
  • the response data may be presented on the client device 102 in a message format, such as a text message where the links to the profile are Uniform Resource Locator (URL) links, an email message containing a summary of the profiles, or via the GUI of the current application wherein the profiles are listed in GUI components and displayed on the client device.
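  • A minimal sketch of such a profile query over HTTP, assuming a hypothetical REST endpoint; the URL, parameter name, and JSON response handling are illustrative, and a real deployment would use the social network's documented API:

        import java.net.URI;
        import java.net.URLEncoder;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.nio.charset.StandardCharsets;
        import java.util.List;

        public class ProfileQuery {
            static String queryProfiles(List<String> keywords) throws Exception {
                String q = URLEncoder.encode(String.join(" ", keywords), StandardCharsets.UTF_8);
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("https://social.example.com/api/people-search?skills=" + q))
                        .header("Accept", "application/json")
                        .GET()
                        .build();
                HttpResponse<String> response = HttpClient.newHttpClient()
                        .send(request, HttpResponse.BodyHandlers.ofString());
                return response.body(); // JSON list of matching member profiles
            }
        }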
  • a user inputs the following text on a client device 102 :
  • the text is received by the current application executing on the client device 102 wherein the text is analyzed.
  • the trigger “!*” is encountered wherein the sentence preceding the trigger is determined to be the trigger-sentence.
  • the trigger sentence is analyzed for keywords.
  • the following keywords are determined:
  • the current application executing on the client device 102 uses these keywords to query social networks to find users of the social network having expertise in the technologies named in the query.
  • the query utilizes normal social network queries wherein users of the social network matching the data in the query are returned.
  • the current application utilizes APIs of social networks to perform the query, in one implementation.
  • Target profile matches are returned to the client device 102 via a response message in response to the query, allowing interaction with the user.
  • a user, while in a conversation using client device 102, says the following sentence:
  • the speech is received by the current application executing on the client device wherein the speech is converted to text using speech-to-text functionality further disclosed herein.
  • the trigger is determined via a repeat of the words: “my iPhone”.
  • the sentence is determined to be the trigger-sentence.
  • the trigger sentence is analyzed for keywords.
  • the following keywords are determined:
  • the current application executing on the client device 102 uses these keywords to query social networks to find users of the social network having expertise in the technologies named in the query.
  • the query utilizes normal social network queries wherein users of the social network matching the data in the query are returned.
  • the current application utilizes APIs of social networks to perform the query, in one implementation.
  • Target profile matches are returned to the client device 102 in the response to the query sent to query the social network(s), allowing interaction with the user.
  • a transport is presented wherein keywords spoken by a user in a transport are received by the current application executing in a device in the transport.
  • the transport may be an automobile, airplane, train, bus, boat, or any type of transport that normally transports people from one place to another.
  • the device contains a processor and memory and may be integrated into a computing device in the transport, providing additional details to sentences received in the transport from audio sources, for example the radio.
  • the current application is executing on a device in the transport, henceforth referred to as the transport computer.
  • the current application is executing on the transport computer, which may be any of the following:
  • the transport computer executing the current application and recording audio receives this audio and analyzes the spoken words to determine that the spoken words are a trigger word/phrase. The analysis of the received audio and the determination of triggers are further depicted herein.
  • the transport computer records the audio in the transport. There is a buffer of time wherein the received audio is stored; the amount of audio stored equals the buffer time, henceforth referred to as the “buffered audio”. For example, if the buffer time is 90 seconds, at any time there will only be 90 seconds of audio stored.
  • the amount of buffer is either hardcoded, such as 90 seconds, or configurable in the configuration module of the transport computer, wherein the buffer amount is entered into a GUI element and used to determine how much recorded audio is stored.
  • the received audio is stored in the transport computer, or remotely in a database, such as database 108 , wherein messaging occurs between the transport computer and database through the network 104 , for example.
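  • A minimal sketch of such a rolling window as an audio ring buffer, assuming 16-bit PCM samples at a fixed sample rate; the class and its parameters are illustrative:

        public class AudioRingBuffer {
            private final short[] samples;
            private int writePos = 0;
            private boolean full = false;

            // e.g., new AudioRingBuffer(90, 16000) keeps the most recent 90 seconds.
            public AudioRingBuffer(int bufferSeconds, int sampleRate) {
                samples = new short[bufferSeconds * sampleRate];
            }

            // Append a chunk of recorded samples, overwriting the oldest audio when full.
            public void write(short[] chunk) {
                for (short s : chunk) {
                    samples[writePos] = s;
                    writePos = (writePos + 1) % samples.length;
                    if (writePos == 0) {
                        full = true;
                    }
                }
            }

            // Return the buffered audio in chronological order for analysis.
            public short[] snapshot() {
                int size = full ? samples.length : writePos;
                short[] out = new short[size];
                int start = full ? writePos : 0;
                for (int i = 0; i < size; i++) {
                    out[i] = samples[(start + i) % samples.length];
                }
                return out;
            }
        }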
  • the transport computer analyzes the received buffered audio to determine that a trigger word/phrase has been received; in this example the phrase “details, details” is the trigger.
  • the sentence before the trigger word/phrase is the sentence utilized for the command.
  • the analysis of the audio will parse the sentence before the trigger word/phrase in the buffered audio.
  • the transport computer extracts the trigger sentence from the buffered audio received before the trigger word/phrase. For example, if the radio was on and an advertisement was being played, the sentence preceding the spoken trigger word/phrase is used as the trigger sentence.
  • the trigger sentence is then parsed in the transport computer wherein keywords are determined as previously depicted herein.
  • the keywords are then used in a search of a separate system, such as a query to a search engine on the Internet.
  • the transport computer uses the determined keywords in the trigger sentence to query the Internet. This may be through normal Internet queries or via access to APIs of search applications on the Internet.
  • the search may interface with other installed applications on a user's client device, such as a mapping program, a restaurant application, shopping applications, and the like through APIs of said applications.
  • the transport computer examines the buffered audio and determines that the received speech “details, details” is a trigger word/phrase, and extracts the previous sentence from the buffered audio, which is thereby determined to be the trigger sentence.
  • a word or phrase following the trigger word/phrase allows for the searching of the buffered audio. Therefore, the user may wish to obtain details regarding a sentence that was heard 30 seconds ago, for example. The user is not required to decide immediately after hearing a sentence whether to obtain details, but can decide later and then say the trigger word/phrase “details details” followed by a search word/phrase, allowing the transport computer to search through the buffered audio to find the correct sentence.
  • the transport computer determines that the phrase “details details” is a trigger word/phrase, and further determines that “Malcom Gladwell book” is a search term requesting the transport computer search through the buffered audio for the term “Malcom Gladwell book”.
  • the transport computer parses the received command, and first determines the trigger word/phrase “details details”, then parses the search phrase “Malcom Gladwell book” to be used to search through the buffered audio.
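  • A minimal sketch of that command parsing and buffered-audio search, assuming the buffered audio has already been converted to a list of transcript sentences via speech-to-text; the names are illustrative:

        import java.util.List;

        public class BufferedAudioSearch {
            private static final String TRIGGER = "details details";

            // Given a spoken command such as "details details Malcom Gladwell book",
            // return the most recent transcript sentence containing the search phrase.
            static String findTriggerSentence(String command, List<String> transcriptSentences) {
                int at = command.toLowerCase().indexOf(TRIGGER);
                if (at < 0) {
                    return null; // no trigger word/phrase in the command
                }
                String searchTerm = command.substring(at + TRIGGER.length()).trim().toLowerCase();
                for (int i = transcriptSentences.size() - 1; i >= 0; i--) {
                    if (transcriptSentences.get(i).toLowerCase().contains(searchTerm)) {
                        return transcriptSentences.get(i);
                    }
                }
                return null; // search phrase not found in the buffered audio
            }
        }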
  • the transport computer utilizes text-to-speech functionality, commonly found in computer applications, to convert the determined trigger sentence to speech, then reads the proposed trigger sentence via the audio system in the transport.
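  • On Android, for instance, this read-back could be a short TextToSpeech call; the context and proposedSentence values are assumed to be supplied by the caller:

        public class TriggerReadBack {
            private final android.speech.tts.TextToSpeech tts;

            public TriggerReadBack(android.content.Context context, String proposedSentence) {
                tts = new android.speech.tts.TextToSpeech(context, status -> {
                    if (status == android.speech.tts.TextToSpeech.SUCCESS) {
                        // Read the proposed trigger sentence over the transport's audio system.
                        tts.speak(proposedSentence, android.speech.tts.TextToSpeech.QUEUE_FLUSH,
                                null, "trigger-confirm");
                    }
                });
            }
        }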
  • the transport computer announces the following over the audio of the transport:
  • the transport then waits for a response to be received from the user in audible form.
  • the asking of questions and waiting for a response is common in transport applications and is similar in functionality to the normal voice applications existing in many transport systems and mobile devices today.
  • the volume of the audio of the transport is lowered to a minimal level or cut off completely while the proposed trigger sentence is read.
  • a spoken command is used as a trigger to stop all transport computer functionality such as: “cancel cancel”. This received trigger command stops all query functionality of the current transport wherein any current determination of trigger sentences is halted, and processing returns to normal functionality.
  • the keywords are used to query the Internet to obtain additional information, which is delivered to the client device 102, in one embodiment.
  • the Internet search query results are delivered to the client device through the current application executing on the client device, via APIs of a browser application on said device.
  • the transport reads the query results aloud using text-to-speech functionality, such that the transport converts the received Internet search results text to speech and announces the results through the transport's audio system.
  • the results are displayed on the transport's display system allowing the user to view the Internet search query results on the display.

Abstract

An example operation may include a method comprising one or more of receiving, from a device, a first data containing a trigger, obtaining a trigger sentence from the first data, analyzing the trigger sentence to determine a second data including at least one keyword, sending a profile query, including the second data, to an external server, determining, by the external server, a profile result including at least one profile related to the profile query, receiving a response from the external server including the profile result, and presenting the profile result to the device.

Description

    TECHNICAL FIELD
  • This application generally relates to establishing connections, and more specifically to connecting users with other users that may offer needed expertise in an area.
  • BACKGROUND
  • While previous functionality seeks to overcome the “black box” approach by providing a more granular solution, there exists a continuing issue with interworking the connection to other profiles into people's daily activities. As people work on projects wherein they may not be proficient in the needed technology, it would be beneficial to connect to other people who have a more concrete understanding of the needed technology.
  • An individual and/or a computer program/system may know that there are people out there having a deep understanding of the technological details surrounding a project that has been undertaken, but how does one go about finding these people? Furthermore, what if it were possible to connect with other people having the needed technical expertise without ever specifically reaching out to them?
  • This is the premise of the current application: in one embodiment, a connection to people that may have a needed level of expertise in the technical area of a project one is working on, without specifically reaching out to them. In another embodiment, an individual and/or a software program/system may reach out to them.
  • In one embodiment, the current application interacts with a user's communication methods, allowing the user to provide a trigger (such as an input stream of characters, a color, a spoken phrase, etc.) that informs the current application to initiate the logic to seek others that may be able to provide technical assistance.
  • SUMMARY
  • An example operation may include a method comprising one or more of receiving, from a device, a first data containing a trigger, obtaining a trigger sentence from the first data, analyzing the trigger sentence to determine a second data including at least one keyword, sending a profile query, including the second data, to an external server, determining, by the external server, a profile result including at least one profile related to the profile query, receiving a response from the external server including the profile result, and presenting the profile result to the device.
  • Another example operation may include a system comprising a device containing a processor and memory, wherein the processor is configured to perform one or more of receive a first data which contains a trigger, obtain a trigger sentence from the first data, analyze the trigger sentence to determine a second data which includes at least one keyword, send a profile query, which includes the second data, to an external server, determine, by the external server, a profile result which includes at least one profile related to the profile query, receive a response from the external server which includes the profile result, and present the profile result to the device.
  • A further example operation may include a non-transitory computer readable medium comprising instructions that, when read by a processor, cause the processor to perform one or more of receiving, from a device, a first data containing a trigger, obtaining a trigger sentence from the first data, analyzing the trigger sentence to determine a second data including at least one keyword, sending a profile query, including the second data, to an external server, determining, by the external server, a profile result including at least one profile related to the profile query, receiving a response from the external server including the profile result, and presenting the profile result to the device.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a system diagram in one embodiment of the current application.
  • FIG. 2 is a block diagram of a computing system in one embodiment of the current application.
  • FIG. 3 is a flowchart of the speech conversion process in one embodiment of the current application.
  • FIG. 4 is a snapshot of a GUI showing a configuration module in one embodiment of the current application.
  • FIG. 5 is a flowchart of the current functionality in one embodiment of the current application.
  • DETAILED DESCRIPTION
  • It will be readily understood that the instant components and/or steps, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of at least one of a method, system, component and non-transitory computer readable medium, as represented in the attached figures, is not intended to limit the scope of the application as claimed but is merely representative of selected embodiments.
  • The instant features, structures, or characteristics as described throughout this specification may be combined in any suitable manner in one or more embodiments. For example, the usage of the phrases “example embodiments”, “some embodiments”, or other similar language, throughout this specification refers to the fact that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment. Thus, appearances of the phrases “example embodiments”, “in some embodiments”, “in other embodiments”, or other similar language, throughout this specification do not necessarily all refer to the same group of embodiments, and the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • In addition, while the term “message” may have been used in the description of embodiments, the application may be applied to many types of network data, such as, packet, frame, datagram, etc. The term “message” also includes packet, frame, datagram, and any equivalents thereof. Furthermore, while certain types of messages and signaling may be depicted in exemplary embodiments they are not limited to a certain type of message, and the application is not limited to a certain type of signaling.
  • FIG. 1 illustrates a block diagram of one embodiment of the current application 100 in accordance with the present disclosure.
  • The system includes at least one client device 102. A client device may be at least one of a mobile device (102a, 102b), a tablet, a laptop device, and/or a personal desktop computer. The client device is communicably coupled to the network 104. It should be noted that other types of devices might be used with the present application. For example, a PDA, an MP3 player, or any other wireless device, a gaming device (such as a hand-held system or home-based system), any computer wearable device, and the like (including a P.C. or other wired device) that may transmit and receive information may be used with the present application. The client device may execute a user browser used to interface with the network 104, an email application used to send and receive emails, a text application used to send and receive text messages, and many other types of applications. Communication may occur between the client device and the network 104 via applications executing on said device, which may be applications downloaded via an application store or may reside on the client device by default. Additionally, communication may occur on the client device wherein the client device's operating system performs the logic to communicate without the use of either an inherent or downloaded application.
  • The system 100 includes a network 104 (e.g., the Internet or Wide Area Network (WAN)). The network may be the Internet or any other suitable network for the transmitting of data from a source to a destination.
  • A server 106 exists in the system 100, communicably coupled to the network 104, and may be implemented as multiple instances wherein the multiple instances may be joined in a redundant network, or may be singular in nature. Furthermore, the server may be connected to database 108 wherein tables in the database are utilized to contain the elements of the stored data in the current application, utilizing Structured Query Language (SQL), for example. The database may reside remotely to the server coupled to the network 104 and may be redundant in nature.
  • FIG. 2 is a block diagram illustrating a computer system 200 upon which embodiments of the current invention may be implemented, for example. The computer system 200 may include a bus 206 or other communication mechanism for communicating information, and a hardware processor 205 coupled with bus 206 for processing information. Hardware processor 205 may be, for example, a general-purpose microprocessor.
  • Computer system 200 may also include main memory 208, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 206 for storing information and instructions to be executed by a processor 205. Main memory 208 also may be used for storing temporary variables or other intermediate information during the execution of instructions to be executed by a processor 205. Such instructions, when stored in the non-transitory storage media accessible to processor 205, may render computer system 200 into a special-purpose machine that is customized to perform the operations specified in the previously stored instructions.
  • Computer system 200 may also include a read only memory (ROM) 207 or other static storage device, which is coupled to bus 206 for storing static information and instructions for processor 205. A storage device 209, such as a magnetic disk or optical disk, may be provided and coupled to bus 206, which stores information and instructions.
  • Computer system 200 may also be coupled via bus 206 to a display 212, such as a cathode ray tube (CRT), a light-emitting diode (LED) display, etc., for displaying information to a computer user. An input device 211, such as a keyboard including alphanumeric and other keys, is coupled to bus 206 and communicates information and command selections to processor 205. Other types of user input devices may be present, including cursor control 210, such as a mouse, a trackball, or cursor direction keys, which communicate direction information and command selections to processor 205 and control cursor movement on display 212.
  • According to one embodiment, the techniques herein are performed by computer system 200 in response to a processor 205 executing one or more sequences of one or more instructions which may be contained in main memory 208. These instructions may be read into main memory 208 from another storage medium, such as storage device 209. Execution of the sequences of instructions contained in main memory 208 may cause processor 205 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry or embedded technology may be used in place of or in combination with software instructions.
  • The term “storage media” as used herein refers to any non-transitory media that may store data and/or instructions causing a machine to operate in a specific fashion. These storage media may comprise non-volatile media and/or volatile media. Non-volatile media may include, for example, optical or magnetic disks, such as storage device 209. Volatile media may include dynamic memory, such as main memory 208. Common forms of storage media include, for example, a hard disk, solid state drive, magnetic tape, or other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge.
  • Various forms of media may be involved in carrying one or more sequences of one or more of the instructions to processor 205 for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer may load the instructions into its dynamic memory and send the instructions over a medium such as the Internet 202.
  • Computer system 200 may also include a communication interface 204 coupled to bus 206. The communication interface may provide two-way data communication coupling to a network link, which is connected to a local network 201.
  • A network link typically provides data communication through one or more networks to other data devices. For example, the network link may provide a connection through local network 201 to data equipment operated by an Internet Service Provider (ISP) 202. ISP 202 provides data communication services through the worldwide packet data communication network now commonly referred to as the “Internet” 202. Local network 201 and Internet 202 both use electrical, electromagnetic, or optical signals that carry digital data streams. The signals through the various networks and the signals on network link and through communication interface 204, carrying the digital data to and from computer system 200, are example forms of transmission media.
  • Computer system 200 can send messages and receive data, including program code, through the network(s) 202, the network link, and the communication interface 204. In the Internet example, a server 203 may transmit a requested code for an application program through Internet 202, local network 201, and communication interface 204.
  • Processor 205 can execute the received code as it is received, and/or store it in storage device 209 or other non-volatile storage for execution at a later time.
  • Every action or step described herein is fully and/or partially performed by at least one of any element depicted and/or described herein.
  • In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, that the present disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present disclosure. Modifiers such as “first”, “second”, and “third” may be used to differentiate elements, but the modifiers do not necessarily indicate any particular order. For example, a first party may be so named although, in reality, it may be a second, third, and/or fourth party.
  • In the US Patent Application 2016/0292238, henceforth referred to as the '238 application, the social network provides target member profiles of possible connections via a modifiable query in lieu of a “black box” approach in place today. As stated in the application:
  • “This approach is appealing and simple in concept. Unfortunately, this approach confronts a searcher with a combinatorial explosion of choices: the number of generalizations the checkboxes offer is literally exponential in the number of facets. Furthermore, many of the generalizations will not help the searcher; in fact, many of the generalizations may not even broaden the search beyond the initial result.
  • A better approach would be for the search engine to constrain and guide the generalization process intelligently. The search engine evaluates the exponential number of possible generalizations and identifies a subset to present to the searcher. Each of these generalizations should be useful (e.g., adding a significant number of additional results) and plausible (e.g., adding results that have some intuitive similarity to the initial results). If two generalizations are similar to one another, the search engine might present only the “better” generalization.”
  • To summarize, the '238 application seeks to provide a more granular approach wherein a user may “guide” the application to provide intelligent connection suggestions. This granular approach includes at least the following functionality:
      • calculation of a similarity score for potential target members
      • associating the similarity score with potential target member profile
      • selecting a set of potential target member profiles with a highest similarity score
  • While the '238 application seeks to overcome the “black box” approach by providing a more granular solution, there exists a continuing issue with interworking the connection to other profiles into people's daily activities. As people work on projects, perhaps projects wherein they are not proficient in the needed technology, it would be beneficial to connect to other people who have a more concrete understanding of that technology.
  • An individual and/or a computer program/system may know that there are people out there having a deep understanding of the technological details surrounding a project that has been undertaken, but how does one go about finding these people? Furthermore, what if it were possible to connect with other people having the needed technical expertise without ever specifically reaching out to them?
  • This is the premise of the current application: in one embodiment, a user is connected to people who may have a needed level of expertise in the technical area of a project the user is working on, without the user specifically reaching out to them. In another embodiment, an individual and/or a software program/system may reach out to them.
  • In one embodiment, the current application interacts with a user's communication methods, allowing the user to provide a trigger (such as an input stream of characters, a color, a spoken phrase, etc.) that informs the current application to initiate the logic to seek others that may be able to provide technical assistance.
  • The trigger is depicted herein in multiple embodiments, yet one versed in computer programming techniques will readily recognize other types of triggers and other types of communication wherein the current application initiates the logic to connect with others, without deviating from the scope of the current application.
  • As previously mentioned, the use of triggers informs the current application to begin processing the functionality to connect the user with others. Additionally, when a sentence is determined to be one for which the user is interested in finding other people to offer assistance (henceforth referred to as the trigger-sentence), the system begins to search for someone working on that technology and requests assistance.
  • In another embodiment, the member's profile is updated with the content in the trigger-sentence.
  • Trigger            Trigger Area/Protocol
    Text Symbol "!*"   Email, Chat, Messaging
    Repeated Phrase    Voice Conversation
  • The above table shows a list of some of the possible triggers in the current application in one possible implementation.
  • The first trigger is a text symbol: a series of characters that would not normally be used in conversation. Communication applications such as email and messaging may utilize this trigger.
  • The second trigger is a spoken phrase wherein any phrase that is repeated is determined to be a trigger for the current application when the speech is analyzed.
  • Other triggers may be implemented by one versed in common programming design without deviating from the scope of the current application.
  • The current application captures words spoken into the device, such as in conversations, commands, etc. The capturing of words is performed in the client device 102 using technology commonly found in smartphones today. Voice recognition is common and included in the operating system on many devices. The current application utilizes similar functionality to receive voice input, converting the voice into text and finally into understood commands, for example.
  • In one embodiment, the current application records all speech into the device and stores the recorded audio internally in the device. In another embodiment, the recorded audio is sent to a server for processing, such as server 106. In another embodiment, the current application records speech in the device when the device is in a voice call with another person.
  • On a client device 102, such as a mobile device, voice-to-text functionality exists on the device, or can be accessed by the device, assuming that the mobile device is a phone with advanced capabilities (smartphone). A smartphone is a mobile device containing many of the same functionalities similarly found on a desktop or laptop computer. Usually, smartphones offer more advanced computing ability and connectivity than a contemporary phone, either traditional or mobile.
  • Speech recognition is built into many common mobile device SDKs. For example, Android SDK 1.5 includes a library called android.speech that provides speech recognition functionality.
  • In Android 1.5, speech recognition is performed through the RecognizerIntent. The intent starts an activity that can prompt the user for speech and route it through the speech recognizer. For example, the code below starts such an activity, and the application of the current invention waits for the result:
  • // Create an intent that asks the platform speech recognizer to start
    Intent intent = new Intent("android.speech.action.RECOGNIZE_SPEECH");
    // Launch the recognition activity; the result arrives in onActivityResult()
    startActivityForResult(intent, 0);
  • The startActivityForResult method launches an Activity and returns the result. While speech recognition is performed, an overlay is displayed over the application; when recognition is done, the results are returned for the activity to handle. The action RECOGNIZE_SPEECH starts an activity to recognize the speech and send the result back to the activity.
  • Android supports two language models for speech analysis: free_form for dictation, and web_search for shorter, search-like phrases. The invention of the current application utilizes the free_form model.
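  • For example, a minimal sketch (assuming the standard RecognizerIntent constants of the Android SDK; the request code is an arbitrary value chosen by the caller) of requesting the free_form model when launching recognition:
    // Request speech recognition using the free-form dictation model
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    // 1234 is an arbitrary request code matched later in onActivityResult()
    startActivityForResult(intent, 1234);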
  • Finally, the example below depicts code to integrate speech-to-text into the application:
  • // Check to see if a speech recognition activity is present on the device
    PackageManager pkgmgr = getPackageManager();
    List<ResolveInfo> activities = pkgmgr.queryIntentActivities(
        new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
    if (!activities.isEmpty()) {
        // A recognizer exists; wire the button to start voice input
        speakButton.setOnClickListener(this);
    } else {
        // No recognizer available; disable the button and explain why
        speakButton.setEnabled(false);
        speakButton.setText("Recognizer is not present");
    }
  • The code above first verifies that the target mobile device is able to interwork with speech input; the application then uses startActivityForResult() to broadcast an intent requesting voice recognition, including an extra parameter specifying one of the two language models. The voice recognition application that handles the intent processes the voice input, then passes the recognized string back to the application by calling the onActivityResult() callback.
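  • A minimal sketch of that callback follows (assuming the standard Android API; handleRecognizedText() is a hypothetical handler named here only for illustration):
    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == 1234 && resultCode == RESULT_OK) {
            // The recognizer returns candidate transcriptions, best match first
            ArrayList<String> matches =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (matches != null && !matches.isEmpty()) {
                handleRecognizedText(matches.get(0)); // hypothetical handler
            }
        }
        super.onActivityResult(requestCode, resultCode, data);
    }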
  • In another embodiment of the current invention, instead of utilizing Speech-to-Text where the speech is converted to text that is then used as input to the application, the application utilizes Natural Language Processing (NLP). NLP functionality is commonly used in software design and development. Through the use of NLP, the speech is received from the user and processed. The resultant outcome from the processing is used as input to the application and a new method or procedure is executed.
  • FIG. 3 depicts a flow of the speech conversion process 300. The application initiates the speech conversion process 302. The intent requesting voice recognition is registered with the system 304. This allows the system to begin listening for the voice from the user, capturing the incoming speech and converting it to text. A listener is set on an existing component to receive the response from the voice recognition 306. This listener receives the text from the converted speech 308. The result is received by the application 310, such as the current application executing on the client device 102, wherein the application is able to process the result. Sentences in the text may be determined by separating the text at punctuation.
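  • One possible sketch of that sentence-separation step, using the standard java.text.BreakIterator (the splitting rule is an illustrative assumption, not mandated by the application):
    import java.text.BreakIterator;
    import java.util.ArrayList;
    import java.util.List;

    // Split recognized text into sentences at locale-aware boundaries
    static List<String> splitSentences(String text) {
        List<String> sentences = new ArrayList<>();
        BreakIterator it = BreakIterator.getSentenceInstance();
        it.setText(text);
        int start = it.first();
        for (int end = it.next(); end != BreakIterator.DONE; start = end, end = it.next()) {
            sentences.add(text.substring(start, end).trim());
        }
        return sentences;
    }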
  • Using speech-to-text functionality, the user is able to speak a trigger, such as a repeated phrase, so that the current application executing on the client device 102 begins to process the sentence spoken immediately before the trigger.
  • In another embodiment, the text converted from speech is sent to a server for processing, such as server 106.
  • The current application executing on a client device 102 may interact with communication applications, such as an email, chat or messaging application such that the messages are examined.
  • In one embodiment, the text is captured by the current application as the characters are entered into the device, as is commonly done in software development environments. Therefore, all text in any application is received by the current application and processed therein. There exist many third-party applications that perform text input on devices, called decipher applications.
  • In another embodiment, the conversation in an email, chat, or messaging application is obtained via Application Program Interfaces (APIs) of the respective application. For example, email applications may provide an HTTP GET function that allows an application to obtain the full text of an email.
  • The message is converted in the client device 102 into indexable/searchable tokens utilizing an API such as the Java package org.apache.lucene.analysis. This package implements tokenization, namely the breaking of input text into small indexing elements, or tokens (a minimal sketch follows the list below). Some of the other analysis tools included in the Java package include:
  • Stemming—Replacing the words by their stems. For instance, with English stemming “bikes” is replaced by “bike”; now query “bike” can find both documents containing “bike” and those containing “bikes”.
  • Stop Words Filtering—Common words such as “the”, “and”, and “a” rarely add any value to a search. Removing these shrinks the index size and increases performance. It may also reduce some “noise” and actually improve search quality.
  • Text Normalization—Stripping some or all accents and other character markings can make for better searching.
  • Synonym Expansion—Adding in synonyms at the same token position as the current word can mean more accurate matching when users search with words in the synonym set.
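  • A minimal sketch of this tokenization with the Lucene analysis package (assuming a recent Lucene release with StandardAnalyzer; exact class locations vary by version):
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    // Break input text into indexable tokens with Lucene's StandardAnalyzer
    static void printTokens(String text) throws java.io.IOException {
        try (Analyzer analyzer = new StandardAnalyzer();
             TokenStream stream = analyzer.tokenStream("body", text)) {
            CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
            stream.reset();
            while (stream.incrementToken()) {
                System.out.println(term.toString());
            }
            stream.end();
        }
    }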
  • Utilizing a tokenizer program, the main words from the temporary text file are stored and analyzed for repetition. The application (alone or together with another application running on at least one of the devices depicted in FIG. 1) analyzes text in a document for various occurrences. For example, using the Java tokenizer, the following code tokenizes a sentence:
  • // Split the sentence into whitespace-delimited tokens
    String speech = "Working on a new project";
    StringTokenizer st = new StringTokenizer(speech);
    while (st.hasMoreTokens()) {
        System.out.println(st.nextToken());
    }
  • The resulting tokenized elements of the above code would be:
      • Working
      • on
      • a
      • new
      • project
  • Once the text is tokenized, the keywords are extracted, allowing further processing at the client device 102. A keyword is a noun or group of nouns pertaining to the topic of the message. Natural Language Processing (NLP) toolkits and/or applications may be utilized to perform this activity, as is commonly done in software development.
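  • Keyword extraction may be implemented many ways; the following is a minimal stop-word-filtering sketch (the stop-word list and the filtering rule are illustrative assumptions, standing in for a full NLP toolkit; Set.of assumes Java 9+):
    import java.util.List;
    import java.util.Locale;
    import java.util.Set;
    import java.util.stream.Collectors;

    // Naive keyword extraction: drop common stop words, keep the rest
    static final Set<String> STOP_WORDS =
            Set.of("a", "an", "the", "and", "on", "to", "with", "of", "in");

    static List<String> extractKeywords(List<String> tokens) {
        return tokens.stream()
                .map(t -> t.toLowerCase(Locale.ROOT))
                .filter(t -> !STOP_WORDS.contains(t))
                .collect(Collectors.toList());
    }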
  • When searching a social network, the current application executing on a device such as the client device 102 contains a configuration module wherein searches are configured, allowing for a more granular selection of results. This configuration module is accessible via the current system. For example, the current system executing on the client device 102 contains a menu wherein different parts of the application may be navigated. This navigation element may be implemented by various Graphical User Interface (GUI) components, such as dropdown components, tabbed components, voice detection, etc.
  • FIG. 4 shows a Graphical User Interface (GUI) of a configuration module in one implementation of the current application 400 executing on the client device 102. The GUI screenshot depicts the configuration options for the social network queries. Other elements may be configured in a similar manner using various GUI components.
  • The connections configuration component 402 allows for the configuration of the results returned from the social network. There exist four components 404, allowing for the preference or omission of these types of responses. Radio buttons 406 and 408 allow each configuration element 404 to be selected. The radio buttons are programmed such that either a prefer radio button 406 or an omit radio button 408 can be selected, but not both.
  • If a prefer button 406 is selected for a component, the results from that component are assigned a higher priority and are listed at the top. If an omit button 408 is selected for a component, the results from that component are omitted from the results. If neither button 406 nor 408 is selected for a component, no preference is determined and the results are presented in the normal fashion, as is currently performed. One possible application of these preferences is sketched below.
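  • As an illustrative sketch of applying these preferences to a result list (the Profile record, its component field, and the preference sets are hypothetical; record syntax assumes Java 16+):
    import java.util.Comparator;
    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    // Hypothetical minimal profile type used only for this illustration
    record Profile(String name, String component) {}

    // Drop omitted components, then float preferred components to the top
    static List<Profile> applyPreferences(
            List<Profile> results, Set<String> prefer, Set<String> omit) {
        return results.stream()
                .filter(p -> !omit.contains(p.component()))
                .sorted(Comparator.comparing((Profile p) -> !prefer.contains(p.component())))
                .collect(Collectors.toList());
    }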
  • A “Submit” button 410 exists at the bottom of the window wherein the selected entries are submitted and stored in the current application.
  • In one embodiment, the trigger-sentence is the sentence before the trigger. Regardless of the origin of the text (messaging or text converted from speech input), the client device 102 or the server 106 processes the resulting text, splitting the text into tokens that are then grouped into sentences. In another embodiment, the trigger-sentence may follow the trigger.
  • As an example, the text is analyzed by the current application such that the trigger characters “!*” are determined to be a trigger, wherein the sentence preceding the trigger characters is determined to be the trigger-sentence.
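  • A minimal sketch of that scan (the trigger string and the sentence-delimiter rule are assumptions made for illustration):
    // Find the "!*" trigger and return the sentence immediately preceding it
    static String findTriggerSentence(String text) {
        int trigger = text.indexOf("!*");
        if (trigger < 0) {
            return null; // no trigger present in this text
        }
        // Split the text before the trigger into sentences and keep the last one
        String[] sentences = text.substring(0, trigger).split("(?<=[.!?])\\s+");
        String last = sentences[sentences.length - 1].trim();
        return last.isEmpty() ? null : last;
    }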
  • FIG. 5 shows a block flow of the functionality of the current application executing on a device such as client device 102 in one implementation of the current application 500. The current description presents some of the embodiments of one current implementation of the application, but one versed in software development will be able to design and implement other, similar solutions without deviating from the scope of the current application.
  • Text is obtained via speech-to-text 504, with speech received 502 from a user speaking into the device, or from a user typing text on the device.
  • The text is split into sentences 508, which may occur following the tokenization of the text. A trigger-sentence is obtained 510, such that the trigger-sentence is the sentence in the text preceding the trigger in the case of a text message, or the sentence containing the trigger word in the case of incoming speech, as further disclosed herein. The trigger-sentence is used as input to the analyze-sentence functionality 512.
  • The trigger-sentence is first parsed such that each word in the sentence is made into a token, a process referred to as tokenization 512, further disclosed herein. Secondly, keywords are obtained from the tokenized words, wherein the keywords are determined via logic such as NLP functionality and stored either in the client device 102 or in a remote server 106 or remote database 108 for further processing.
  • A query is performed with a social networking service using the previously determined keywords to query for users with the same or similar expertise as the keywords 514. The query utilizes normal social network queries wherein users of the social network matching the data in the query are returned. The current application utilizes APIs of social networks to perform the query, in one implementation. The response from the social networking service is received at the client device and presented on said device.
  • The response data may be presented on the client device 102 in a message format, such as a text message where the links to the profile are Uniform Resource Locator (URL) links, an email message containing a summary of the profiles, or via the GUI of the current application wherein the profiles are listed in GUI components and displayed on the client device.
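  • As an illustrative sketch only, such a keyword query might be issued over HTTP using the standard java.net.http client (the endpoint, query parameter, and response format here are hypothetical; an actual deployment would use the specific social network's documented API):
    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.util.List;

    // Query a hypothetical people-search endpoint with the extracted keywords
    static String queryProfiles(List<String> keywords) throws Exception {
        String q = URLEncoder.encode(String.join(" ", keywords), StandardCharsets.UTF_8);
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://social.example.com/api/people-search?q=" + q))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // e.g., JSON describing matching member profiles
    }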
  • As an example, in a messaging scenario, a user inputs the following text on a client device 102:
      • “Working on a new project, python coding to interact with a SQL database. !*”
  • The text is received by the current application executing on the client device 102 wherein the text is analyzed. The trigger “!*” is encountered wherein the sentence preceding the trigger is determined to be the trigger-sentence.
  • The trigger-sentence is analyzed for keywords. The following keywords are determined:
      • project
      • python
      • interact
      • SQL
      • database
  • The current application executing on the client device 102 uses these keywords to query social networks to find users of the social network having expertise in the technologies named in the query. The query utilizes normal social network queries wherein users of the social network matching the data in the query are returned. The current application utilizes APIs of social networks to perform the query, in one implementation.
  • These keywords are used to query a social networking system to identify target member profiles matching the keywords, while adhering to the configuration parameters previously set in the configuration module of the current application. Target profile matches are returned to the client device 102 in a response message to the query, allowing interaction with the user.
  • As an example, in a speech scenario, a user in a conversation on client device 102 says the following sentence:
      • “I've been doing some swift design using a back end database on my iPhone my iPhone.”
  • The speech is received by the current application executing on the client device wherein the speech is converted to text using speech-to-text functionality further disclosed herein. The trigger is determined via a repeat of the words: “my iPhone”. The sentence is determined to be the trigger-sentence.
  • The trigger-sentence is analyzed for keywords. The following keywords are determined:
      • swift
      • design
      • back
      • end
      • database
      • iphone
  • The current application executing on the client device 102 uses these keywords to query social networks to find users of the social network having expertise in the technologies named in the query. The query utilizes normal social network queries wherein users of the social network matching the data in the query are returned. The current application utilizes APIs of social networks to perform the query, in one implementation.
  • These keywords are used to query a social networking system to identify target member profiles matching the keywords, while adhering to the configuration parameters previously set in the configuration module of the current application. Target profile matches are returned to the client device 102 in the response to the social network query, allowing interaction with the user.
  • In another embodiment, a transport is presented wherein keywords spoken by a user in a transport are received by the current application executing in a device in the transport. The transport may be an automobile, airplane, train, bus, boat, or any type of transport that normally transports people from one place to another. The device contains a processor and memory and may be integrated into a computing device in the transport, providing additional details to sentences received in the transport from audio sources, for example the radio.
  • The current application is executing on a device in the transport, henceforth referred to as the transport computer.
  • For example, if a user is listening to a radio in a transport, the current application is executing on the transport computer, which may be any of the following:
      • a client device 102
      • a navigational system in the transport
      • a separate device in the transport
      • any device containing a processor and memory
  • The user says, “details, details”. The transport computer executing the current application and recording audio receives this audio and analyzes the spoken words to determine that the spoken words are a trigger word/phrase. The analysis of the received audio and the determination of triggers are further depicted herein.
  • The transport computer records the audio in the transport. There is a buffer of time wherein the received audio is stored; therefore, the amount of audio stored equals the buffer time, henceforth referred to as the “buffered audio”. For example, if the buffer time is 90 seconds, at any time there will only be 90 seconds of audio stored.
  • The amount of buffer is either hardcoded, such as 90 seconds, or configurable in the configuration module of the transport computer, wherein the amount of buffer is entered into a GUI element and used to determine how much recorded audio is stored. The received audio is stored in the transport computer, or remotely in a database, such as database 108, wherein messaging occurs between the transport computer and the database through the network 104, for example. One possible rolling-buffer implementation is sketched below.
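  • One possible sketch of such a rolling buffer (frame size, frame rate, and the 90-second default are illustrative assumptions):
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Retain only the most recent bufferSeconds of captured audio frames
    class RollingAudioBuffer {
        private final Deque<byte[]> frames = new ArrayDeque<>();
        private final int maxFrames;

        // e.g., 90 seconds of 100 ms frames retains 900 frames
        RollingAudioBuffer(int bufferSeconds, int framesPerSecond) {
            this.maxFrames = bufferSeconds * framesPerSecond;
        }

        synchronized void addFrame(byte[] frame) {
            frames.addLast(frame);
            while (frames.size() > maxFrames) {
                frames.removeFirst(); // discard audio older than the window
            }
        }
    }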
  • Using embodiments previously disclosed herein, the transport computer analyzes the received buffered audio to determine that a trigger word/phrase has been received; in this example, the phrase “details, details” is the trigger.
  • In one embodiment, the sentence before the trigger word/phrase (henceforth referred to as the “trigger sentence”) is the sentence utilized for the command. When the transport computer determines the trigger word/phrase, the analysis of the audio will parse the sentence before the trigger word/phrase in the buffered audio.
  • The transport computer extracts the trigger sentence from the buffered audio preceding the trigger word/phrase. For example, if the radio was on and an advertisement was being played, the sentence preceding the spoken trigger word/phrase is used as the trigger sentence.
  • The trigger sentence is then parsed in the transport computer, wherein keywords are determined as previously depicted herein. The keywords are then used in a search of a separate system, such as a query to an Internet search engine. The transport computer uses the keywords determined from the trigger sentence to query the Internet. This may be through normal Internet queries or via access to APIs of search applications on the Internet.
  • In an additional embodiment, the search may interface with other installed applications on a user's client device, such as a mapping program, a restaurant application, shopping applications, and the like through APIs of said applications.
  • For example, the user is listening to National Public Radio (NPR) on the radio in a transport. The following is heard from the radio:
      • “Malcolm Gladwell's Outliers book is available on iTunes as are all of his books”
  • Immediately following this, the user says: “details, details”
  • The transport computer examines the buffered audio and determines that the received speech “details, details” is a trigger word/phrase, and extracts the preceding sentence from the buffered audio, which is thereby determined to be the trigger sentence.
  • In another embodiment, a word or phrase following the trigger word/phrase allows for searching of the buffered audio. Therefore, the user may wish to obtain details regarding a sentence that was heard 30 seconds ago, for example. The user is not required to decide immediately after hearing a sentence whether to obtain details, but can decide later and then say the trigger word/phrase “details details” followed by a search word/phrase, allowing the transport computer to search through the buffered audio to find the correct sentence.
  • For example, after hearing the phrase, the user waits 30 seconds, then says: “details details Malcolm Gladwell book”
  • The transport computer determines that the phrase “details details” is a trigger word/phrase, and further determines that “Malcolm Gladwell book” is a search term requesting that the transport computer search through the buffered audio for the term “Malcolm Gladwell book”. The transport computer parses the received command, first determining the trigger word/phrase “details details”, then parsing the search phrase “Malcolm Gladwell book” to be used to search through the buffered audio.
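  • A minimal sketch of that lookup (assuming the buffered audio has already been transcribed into a list of sentences, newest last, using the speech-to-text functionality disclosed herein; the all-terms matching rule is an illustrative assumption):
    import java.util.List;
    import java.util.Locale;

    // Return the most recent buffered sentence containing every search term
    static String searchBufferedTranscript(List<String> sentences, String phrase) {
        String[] terms = phrase.toLowerCase(Locale.ROOT).split("\\s+");
        for (int i = sentences.size() - 1; i >= 0; i--) {
            String candidate = sentences.get(i).toLowerCase(Locale.ROOT);
            boolean allMatch = true;
            for (String term : terms) {
                if (!candidate.contains(term)) {
                    allMatch = false;
                    break;
                }
            }
            if (allMatch) {
                return sentences.get(i);
            }
        }
        return null; // nothing in the buffer window matched
    }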
  • In another embodiment, the transport computer utilizes text-to-speech functionality, commonly utilized in computer applications, to convert the determined candidate trigger sentence to speech, then reads the proposed trigger sentence over the transport's audio system.
  • For example, the transport computer announces the following over the audio of the transport:
  • “Is this the correct sentence? Malcolm Gladwell's Outliers book is available on iTunes as are all of his books. Please answer yes or no”.
  • The transport then waits for a response to be received from the user in audible form. The asking of questions and waiting for a response is common in transport applications today and is similar in functionality to the normal voice applications existing in many transport systems and mobile devices today.
  • If the answer received is “Yes”, processing continues wherein the determined trigger sentence is parsed, keywords are extracted, then the Internet is searched using the keywords.
  • In another embodiment, the volume of the audio of the transport is lowered to a minimal level or cut off completely while the proposed trigger sentence is read.
  • In another embodiment, a spoken command is used as a trigger to stop all transport computer functionality such as: “cancel cancel”. This received trigger command stops all query functionality of the current transport wherein any current determination of trigger sentences is halted, and processing returns to normal functionality.
  • The following keywords are extracted from the resultant speech-to-text:
      • Malcolm
      • Gladwell
      • Outliers
      • book
      • itunes
  • The keywords are used to query the Internet to obtain additional information, which is delivered to the client device 102 in one embodiment. The Internet search results are delivered to the client device through the current application executing on the client device, through APIs of a browser application on the client device.
  • In another embodiment, the transport reads through the query results using text-to-speech functionality, such that the transport converts the received Internet search result text to speech and reads the results, announced through the transport's audio system.
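  • A minimal sketch of reading results aloud with the Android TextToSpeech API (assuming API level 21 or later; the utterance id is arbitrary, and engine initialization is abbreviated for illustration):
    import android.content.Context;
    import android.speech.tts.TextToSpeech;

    private TextToSpeech tts;

    // Speak the received search-result text once the engine reports ready
    void speakResults(Context context, String resultsText) {
        tts = new TextToSpeech(context, status -> {
            if (status == TextToSpeech.SUCCESS) {
                // QUEUE_FLUSH replaces anything currently being spoken
                tts.speak(resultsText, TextToSpeech.QUEUE_FLUSH, null, "results-utterance");
            }
        });
    }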
  • In another embodiment, the results are displayed on the transport's display system allowing the user to view the Internet search query results on the display.

Claims (20)

What is claimed is:
1. A method, comprising:
receiving, from a device, a first data containing a trigger;
obtaining a trigger sentence from the first data;
analyzing the trigger sentence to determine a second data including at least one keyword;
sending a profile query, including the second data to an external server;
determining, by the external server, a profile result including at least one profile related to the profile query;
receiving a response from the external server including the profile result; and
presenting the profile result to the device.
2. The method of claim 1, wherein the profile result includes at least one profile considered to be an expert in a same area as the at least one keyword.
3. The method of claim 1, wherein the trigger may be one or more of:
a predetermined text data; and
a repeated spoken word.
4. The method of claim 1, wherein a query preference is configured, by the device, including one or more of:
items to omit, wherein the omitted items are provided a lower priority; and
items to include, wherein the included items are provided a higher priority.
5. The method of claim 4, wherein the external server uses the query preference to determine the profile result.
6. The method of claim 1, wherein one or more of: the trigger sentence may precede the trigger; and the trigger sentence may come after the trigger.
7. The method of claim 1, wherein the profile query utilizes Application Program Interfaces (APIs) related to the external server.
8. A system, comprising:
a device, containing a processor and memory, wherein the processor is configured to perform:
receive a first data which contains a trigger;
obtain a trigger sentence from the first data;
analyze the trigger sentence to determine a second data which includes at least one keyword;
send a profile query, which includes the second data to an external server;
determine, by the external server, a profile result which includes at least one profile related to the profile query;
receive a response from the external server which includes the profile result; and
present the profile result to the device.
9. The system of claim 8, wherein the profile result includes at least one profile considered to be an expert in a same area as the at least one keyword.
10. The system of claim 8, wherein the trigger may be one or more of:
a predetermined text data; and
a repeated spoken word.
11. The system of claim 8, wherein a query preference is configured, by the device, which includes one or more of:
items to omit, wherein the omitted items are provided a lower priority; and
items to include, wherein the included items are provided a higher priority.
12. The system of claim 11, wherein the external server uses the query preference to determine the profile result.
13. The system of claim 8, wherein one or more of: the trigger sentence may precede the trigger; and the trigger sentence may come after the trigger.
14. The system of claim 8, wherein the profile query utilizes Application Program Interfaces (APIs) related to the external server.
15. A non-transitory computer readable medium comprising instructions, that when read by a processor, cause the processor to perform:
receiving, from a device, a first data containing a trigger;
obtaining a trigger sentence from the first data;
analyzing the trigger sentence to determine a second data including at least one keyword;
sending a profile query, including the second data to an external server;
determining, by the external server, a profile result including at least one profile related to the profile query;
receiving a response from the external server including the profile result; and
presenting the profile result to the device.
16. The non-transitory computer readable medium of claim 15, wherein the profile result includes at least one profile considered to be an expert in a same area as the at least one keyword.
17. The non-transitory computer readable medium of claim 15, wherein the trigger may be one or more of:
a predetermined text data; and
a repeated spoken word.
18. The non-transitory computer readable medium of claim 15, wherein a query preference is configured, by the device, including one or more of:
items to omit, wherein the omitted items are provided a lower priority; and
items to include, wherein the included items are provided a higher priority.
19. The non-transitory computer readable medium of claim 18, wherein the external server uses the query preference to determine the profile result.
20. The non-transitory computer readable medium of claim 15, wherein one or more of: the trigger sentence may precede the trigger; and the trigger sentence may come after the trigger.
US16/442,843 2018-05-28 2019-06-17 Integrating communications into a social graph Abandoned US20200026742A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/442,843 US20200026742A1 (en) 2018-05-28 2019-06-17 Integrating communications into a social graph

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862677144P 2018-05-28 2018-05-28
US16/442,843 US20200026742A1 (en) 2018-05-28 2019-06-17 Integrating communications into a social graph

Publications (1)

Publication Number Publication Date
US20200026742A1 true US20200026742A1 (en) 2020-01-23

Family

ID=68614507

Family Applications (6)

Application Number Title Priority Date Filing Date
US16/442,804 Abandoned US20190362318A1 (en) 2018-05-28 2019-06-17 Audio-based notifications
US16/442,901 Active 2040-03-13 US11216354B2 (en) 2018-05-28 2019-06-17 Depicting outcomes of a decision
US16/442,843 Abandoned US20200026742A1 (en) 2018-05-28 2019-06-17 Integrating communications into a social graph
US16/442,778 Abandoned US20190360815A1 (en) 2018-05-28 2019-06-17 Audio aided navigation
US16/442,878 Abandoned US20200026696A1 (en) 2018-05-28 2019-06-17 Content attributes depicted in a social network
US17/536,051 Abandoned US20220083446A1 (en) 2018-05-28 2021-11-28 Depicting Outcomes of a Decision

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US16/442,804 Abandoned US20190362318A1 (en) 2018-05-28 2019-06-17 Audio-based notifications
US16/442,901 Active 2040-03-13 US11216354B2 (en) 2018-05-28 2019-06-17 Depicting outcomes of a decision

Family Applications After (3)

Application Number Title Priority Date Filing Date
US16/442,778 Abandoned US20190360815A1 (en) 2018-05-28 2019-06-17 Audio aided navigation
US16/442,878 Abandoned US20200026696A1 (en) 2018-05-28 2019-06-17 Content attributes depicted in a social network
US17/536,051 Abandoned US20220083446A1 (en) 2018-05-28 2021-11-28 Depicting Outcomes of a Decision

Country Status (1)

Country Link
US (6) US20190362318A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10565403B1 (en) 2018-09-12 2020-02-18 Atlassian Pty Ltd Indicating sentiment of text within a graphical user interface
US11644330B2 (en) * 2020-07-08 2023-05-09 Rivian Ip Holdings, Llc Setting destinations in vehicle navigation systems based on image metadata from portable electronic devices and from captured images using zero click navigation
US11848655B1 (en) * 2021-09-15 2023-12-19 Amazon Technologies, Inc. Multi-channel volume level equalization based on user preferences
WO2023133172A1 (en) * 2022-01-05 2023-07-13 Apple Inc. User tracking headrest audio control

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100027896A1 (en) * 2006-06-28 2010-02-04 Amir Geva Automated application interaction using a virtual operator
US8743109B2 (en) * 2006-08-31 2014-06-03 Kent State University System and methods for multi-dimensional rendering and display of full volumetric data sets
US8091065B2 (en) * 2007-09-25 2012-01-03 Microsoft Corporation Threat analysis and modeling during a software development lifecycle of a software application
US8955109B1 (en) * 2010-04-30 2015-02-10 Symantec Corporation Educating computer users concerning social engineering security threats
US9558677B2 (en) * 2011-04-08 2017-01-31 Wombat Security Technologies, Inc. Mock attack cybersecurity training system and methods
US9824609B2 (en) * 2011-04-08 2017-11-21 Wombat Security Technologies, Inc. Mock attack cybersecurity training system and methods
US10749887B2 (en) * 2011-04-08 2020-08-18 Proofpoint, Inc. Assessing security risks of users in a computing network
US9245254B2 (en) * 2011-12-01 2016-01-26 Elwha Llc Enhanced voice conferencing with history, language translation and identification
US8811638B2 (en) * 2011-12-01 2014-08-19 Elwha Llc Audible assistance
WO2014066500A1 (en) * 2012-10-23 2014-05-01 Hassell Suzanne P Cyber analysis modeling evaluation for operations (cameo) simulation system
US9398029B2 (en) * 2014-08-01 2016-07-19 Wombat Security Technologies, Inc. Cybersecurity training system with automated application of branded content
US11537777B2 (en) * 2014-09-25 2022-12-27 Huawei Technologies Co., Ltd. Server for providing a graphical user interface to a client and a client
US9473522B1 (en) * 2015-04-20 2016-10-18 SafeBreach Ltd. System and method for securing a computer system against malicious actions by utilizing virtualized elements
US10044749B2 (en) * 2015-07-31 2018-08-07 Siemens Corporation System and method for cyber-physical security
CN107113319B (en) * 2016-07-14 2020-09-25 华为技术有限公司 Method, device and system for responding in virtual network computing authentication and proxy server
US10839703B2 (en) * 2016-12-30 2020-11-17 Fortinet, Inc. Proactive network security assessment based on benign variants of known threats
US10467510B2 (en) * 2017-02-14 2019-11-05 Microsoft Technology Licensing, Llc Intelligent assistant
US20190155916A1 (en) * 2017-11-22 2019-05-23 Facebook, Inc. Retrieving Content Objects Through Real-time Query-Post Association Analysis on Online Social Networks
US10839083B2 (en) * 2017-12-01 2020-11-17 KnowBe4, Inc. Systems and methods for AIDA campaign controller intelligent records
US11152006B2 (en) * 2018-05-07 2021-10-19 Microsoft Technology Licensing, Llc Voice identification enrollment
US11195131B2 (en) * 2018-05-09 2021-12-07 Microsoft Technology Licensing, Llc Increasing usage for a software service through automated workflows

Also Published As

Publication number Publication date
US20190361785A1 (en) 2019-11-28
US20190360815A1 (en) 2019-11-28
US20190362318A1 (en) 2019-11-28
US20200026696A1 (en) 2020-01-23
US11216354B2 (en) 2022-01-04
US20220083446A1 (en) 2022-03-17

Similar Documents

Publication Publication Date Title
US11755666B2 (en) In-conversation search
US11423888B2 (en) Predicting and learning carrier phrases for speech input
US20210043212A1 (en) Mixed model speech recognition
US20200026742A1 (en) Integrating communications into a social graph
US10521189B1 (en) Voice assistant with user data context
US9542944B2 (en) Hosted voice recognition system for wireless devices
US9583107B2 (en) Continuous speech transcription performance indication
US8676577B2 (en) Use of metadata to post process speech recognition output
US8352261B2 (en) Use of intermediate speech transcription results in editing final speech transcription results
CN107430616A (en) The interactive mode of speech polling re-forms
JP2019003319A (en) Interactive business support system and interactive business support program
KR20130108173A (en) Question answering system using speech recognition by radio wire communication and its application method thereof
US11895269B2 (en) Determination and visual display of spoken menus for calls
WO2023027833A1 (en) Determination and visual display of spoken menus for calls
KR20240046508A (en) Decision and visual display of voice menu for calls
KR20240042964A (en) Selection and Transmission Method of Related Video Data through Keyword Analysis of Voice Commands
CN117882365A (en) Verbal menu for determining and visually displaying calls
CN116127101A (en) Text retrieval method, text retrieval device, electronic equipment and storage medium
TW202006563A (en) Dialogic type search display method performing an interactive search and obtain a corresponding search result by means of natural speech or natural sentence expression

Legal Events

Date Code Title Description
AS Assignment

Owner name: OPEN INVENTION NETWORK LLC, NORTH CAROLINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LEDET, DAVID GERARD;REEL/FRAME:049484/0970

Effective date: 20190602

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION