CN114764363B - Prompting method, prompting device and computer storage medium - Google Patents

Info

Publication number
CN114764363B
CN114764363B (application CN202110296429.XA)
Authority
CN
China
Prior art keywords
preset
group
video
target operation
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110296429.XA
Other languages
Chinese (zh)
Other versions
CN114764363A
Inventor
时红仁
应臻恺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qwik Smart Technology Co Ltd
Original Assignee
Shanghai Qwik Smart Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Qwik Smart Technology Co Ltd filed Critical Shanghai Qwik Smart Technology Co Ltd
Publication of CN114764363A publication Critical patent/CN114764363A/en
Application granted granted Critical
Publication of CN114764363B publication Critical patent/CN114764363B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/453 Help systems (arrangements for executing specific programs; execution arrangements for user interfaces)
    • G06F16/23 Updating (information retrieval; database structures therefor; file system structures therefor)
    • G06F16/583 Retrieval of still image data characterised by metadata automatically derived from the content
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Abstract

The embodiments of the present application disclose a prompting method, a prompting device, and a computer storage medium. The prompting method comprises: acquiring input operation information, wherein the operation information comprises at least one of touch operation information, voice operation information, and captured image information of a target operation object; determining the target operation object based on the operation information; and presenting a target operation prompt related to the target operation object in the case that the operation information satisfies a preset condition. With the prompting method, the prompting device, and the computer storage medium, once the input operation information is determined to satisfy the preset condition, a prompt message for operating the operation object is presented automatically, so that the user can quickly obtain operating instructions for the operation object. The operation is simple and convenient, and the user experience is improved.

Description

Prompting method, prompting device and computer storage medium
The present application claims priority from Chinese patent application No. "cn20201637673. X", entitled "Prompting method, apparatus and computer storage medium", filed on December 31, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of information processing technologies, and in particular, to a prompting method, a prompting device, and a computer storage medium.
Background
With the rapid development of terminal technology, the functions of vehicle-mounted systems have become increasingly diversified, and the configuration options for those functions have multiplied accordingly. When using a vehicle-mounted system, a user who is confused about its operation or cannot get a response typically searches online or looks up the system's help documents. This approach is cumbersome, and because the needed help is hard to find, it seriously degrades the user experience.
The foregoing description is provided for general background information and does not necessarily constitute prior art.
Disclosure of Invention
An object of the present application is to provide a prompting method, apparatus, and computer storage medium that, upon determining from the acquired input operation information that a user wants help with an operation object, automatically present prompt content for operating that object, so that the user quickly obtains the operation prompt. The operation is simple and convenient, and the user experience is improved.
Another object of the present application is to provide a prompting method, apparatus, and computer storage medium in which operation information can be input in a variety of different ways.
Another object of the present application is to provide a prompting method, apparatus, and computer storage medium in which an operation prompt for an operation object can be presented in the form of a help video.
Another object of the present application is to provide a prompting method, apparatus, and computer storage medium that narrow the range in which the target operation object is sought: by preferentially matching among the operation objects contained in the currently displayed interface, the target operation object is determined more quickly and the corresponding operation prompt is obtained, and once the target operation object is found in the current display interface, the related operation prompt can be presented directly on that interface.
Another object of the present application is to provide a prompting method, apparatus, and computer storage medium that, after determining a target operation object, can switch to a display interface containing it and present the corresponding operation prompt, so that the user can perform the related operation on the target operation object directly after obtaining the prompt.
Another object of the present application is to provide a prompting method, apparatus, and computer storage medium that can collect statistics on the user's learning and usage of all operation prompts, helping the user understand how well they have mastered each operation object.
In order to achieve the above object, in a first aspect, an embodiment of the present application provides a prompting method, including:
acquiring input operation information, wherein the operation information comprises at least one item of touch operation information, voice operation information and shot image information of a target operation object;
determining the target operation object based on the operation information; and
presenting a target operation prompt related to the target operation object in the case that the operation information satisfies a preset condition.
It can be seen that the target operation object can be determined from the acquired input operation information, and the operation prompt for that object is then presented, helping the user obtain the target operation prompt quickly. The operation information can be input in several different ways, giving the user multiple choices.
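The three claimed steps can be sketched as follows. This is a minimal illustration in Python; all concrete names (`bluetooth_icon`, `press_ms`, and so on) are invented for the example rather than taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OperationInfo:
    # At least one of the three input channels named in the claim.
    touch: Optional[dict] = None   # e.g. {"target": "bluetooth_icon", "press_ms": 4000}
    voice: Optional[str] = None    # recognized utterance text
    image: Optional[bytes] = None  # captured image of the target control

def determine_target(info: OperationInfo) -> Optional[str]:
    """Resolve the target operation object from whichever channel is present."""
    if info.touch is not None:
        return info.touch.get("target")
    if info.voice is not None:
        # Toy keyword lookup; a real system would run ASR plus identifier matching.
        for obj in ("bluetooth_icon", "navigation_icon"):
            if obj.split("_")[0] in info.voice.lower():
                return obj
    return None  # image recognition is omitted from this sketch

def meets_preset_condition(info: OperationInfo) -> bool:
    """One example condition from the text: a press longer than 3 seconds."""
    return bool(info.touch and info.touch.get("press_ms", 0) > 3000)

def prompt_for(info: OperationInfo) -> Optional[str]:
    """Acquire, determine, and present only if the preset condition is met."""
    target = determine_target(info)
    if target is not None and meets_preset_condition(info):
        return f"help_video:{target}"  # identifier of the help video to present
    return None
```

A long press on an object yields its help-video identifier; a short tap yields nothing, matching the claim's gating on the preset condition.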
In some possible implementations, the target operational prompt includes a help video with a video identification.
It can be seen that presenting the operation prompts in the form of help videos can help users to quickly understand the specific content of the operation prompts more intuitively.
In some possible implementations, the determining the target operation object based on the operation information includes the steps of:
acquiring keywords contained in the voice operation information;
acquiring a first group of preset operation objects contained in a first display interface which is currently displayed and a first group of video identifications carried by a first group of help videos associated with the first group of preset operation objects;
matching keywords contained in the voice operation information with the first group of video identifications respectively; and
in the case that the keyword successfully matches a first video identifier in the first group of video identifiers, determining the first preset operation object associated with the help video carrying the first video identifier as the target operation object.
It can be seen that by narrowing the range of determining the target operation object, it is possible to preferentially determine whether or not the target operation object exists among the operation objects included in the first display interface currently displayed.
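A hypothetical sketch of this interface-first matching in Python; the keyword-extraction rule and all identifiers here are invented for illustration:

```python
def extract_keywords(utterance):
    """Hypothetical keyword extraction: keep words longer than 3 characters."""
    return [w for w in utterance.lower().split() if len(w) > 3]

def match_on_current_interface(keywords, interface_videos):
    """interface_videos maps each preset operation object on the currently
    displayed (first) interface to the video identifier carried by its
    associated help video. Returns the matched object, or None."""
    for obj, video_id in interface_videos.items():
        if any(kw in video_id for kw in keywords):
            return obj
    return None
```

A None result corresponds to the fallback case below, where the second group of preset operation objects is searched.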
In some possible implementations, the presenting a target operation prompt related to the target operation object includes the steps of:
determining the help video carrying the first video identifier as the target operation prompt; and
displaying the target operation prompt in a preset area of the first display interface.
In some possible implementations, in the event that the keyword does not successfully match a first video identifier of the first set of video identifiers, the method further includes the steps of:
acquiring a second group of preset operation objects which are not included in the first display interface and a second group of video identifications carried by a second group of help videos associated with the second group of preset operation objects;
matching keywords contained in the voice operation information with the second group of video identifications; and
in the case that the keyword successfully matches a second video identifier in the second group of video identifiers, determining the second preset operation object associated with the help video carrying the second video identifier as the target operation object.
In some possible implementations, the presenting a target operation prompt related to the target operation object includes the steps of:
determining the help video carrying the second video identifier as the target operation prompt, wherein the second preset operation object associated with that help video is contained in a second display interface which is not currently displayed; and
switching from the first display interface to the second display interface, so as to display the target operation prompt in a preset area of the second display interface.
It can be seen that when the target operation object resides in an interface that is not currently displayed, the system can switch from the current display interface to the one containing the target operation object and present the corresponding operation prompt, so that the user can perform the related operation on the target operation object promptly after obtaining the prompt.
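The two-tier fallback — current interface first, then objects on interfaces not currently shown, with an interface switch on a second-group hit — might look like this (all names are illustrative):

```python
def match_group(keywords, group):
    """group maps preset operation objects to their help-video identifiers."""
    for obj, video_id in group.items():
        if any(kw in video_id for kw in keywords):
            return obj
    return None

def resolve(keywords, first_group, second_group, second_interface="interface_2"):
    """Search the currently displayed interface first; fall back to the
    second group, signalling that an interface switch is needed."""
    obj = match_group(keywords, first_group)
    if obj is not None:
        return {"object": obj, "switch_interface": None}
    obj = match_group(keywords, second_group)
    if obj is not None:
        return {"object": obj, "switch_interface": second_interface}
    return None
```

The `switch_interface` field stands in for the display-interface switch described above; how the real terminal performs that switch is not specified by the text.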
In some possible implementations, the target operation prompt includes prompt information of a target operation for the target operation object, and after the presenting of the target operation prompt related to the target operation object, the method further includes the steps of:
in response to detecting that the target operation for the target operation object is performed within a preset time, updating the learning state of the target operation prompt; and
counting the learning state data of all target operation prompts, so that the learning completion degree of the target operation prompts can be calculated based on the learning state data.
It can be seen that the user is helped to know the learning degree of the user on the operation object by counting the learning use conditions of the user corresponding to all operation prompts.
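A minimal sketch of the completion-degree statistic, assuming (purely for illustration) that a learning state is a boolean per help video — True once the target operation was performed within the preset time after its prompt was shown:

```python
def learning_completion(learning_states):
    """learning_states: help-video identifier -> learned-or-not flag.
    Returns the fraction of operation prompts whose target operation
    the user has completed."""
    if not learning_states:
        return 0.0
    learned = sum(1 for done in learning_states.values() if done)
    return learned / len(learning_states)
```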
In a second aspect, an embodiment of the present application provides a prompting device, including:
a receiving device configured to acquire input operation information, the operation information comprising at least one of touch operation information, voice operation information, and captured image information of a target operation object;
a processor configured to determine the target operation object based on the operation information provided by the receiving apparatus, and acquire a target operation prompt regarding the target operation object; and
and a display device configured to present the target operation prompt, provided by the processor, in relation to the target operation object in the case that the operation information provided by the receiving device satisfies a preset condition.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a bus, where the processor and the memory are connected by the bus, where the memory is configured to store a set of program codes, and the processor is configured to invoke the program codes stored in the memory to perform the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer storage medium having instructions stored therein which, when run on a computer, implement a method as described in the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
According to the embodiments of the present application, after the operation information input by the user is acquired, the target operation object is determined according to that information, and the target operation prompt related to the target operation object is acquired and shown on the display device, helping the user understand the related content and its usage and improving the user experience.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present application; a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a prompting method according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for determining a target operation object according to an embodiment of the present application;
fig. 3 is a first application scenario schematic diagram of the prompting method provided by an embodiment of the present application;
fig. 4 is a second application scenario schematic diagram of the prompting method provided by the embodiment of the present application;
fig. 5 is a third application scenario diagram of the prompting method provided by the embodiment of the present application;
fig. 6 is a fourth application scenario schematic diagram of the prompting method provided by an embodiment of the present application;
FIG. 7 is a flowchart of another method for determining a target operation object according to an embodiment of the present application;
fig. 8 is a fifth application scenario schematic diagram of the prompting method provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of a prompting device according to an embodiment of the present application;
fig. 10 is a schematic diagram of another prompting device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
The terms "comprising" and "having", and any variations thereof, in the description and claims of the application and in the foregoing drawings are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include other steps or elements not listed or inherent to such a process, method, system, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
With the development of vehicle technology, the continuous expansion of vehicle functions is a clear direction for the future, especially in vehicle-mounted terminal systems, whose functions grow more and more diversified. To help users become familiar with how the various functions are operated, a vehicle manufacturer traditionally supplies a paper or electronic instruction manual with the vehicle, but such manuals are generally long-winded, and few users care to read them carefully.
At present, explanations of how to use the various functions are usually added on the vehicle terminal itself, which can provide usage instructions to the user. However, this still requires the user to actively search for the explanation of the corresponding function, a time-consuming process that hurts the user experience. How to help users obtain targeted, rapid help with the various in-vehicle functions is therefore a problem to be solved.
To at least partially address one or more of the above problems, as well as other potential problems, embodiments of the present application provide a prompting method. The method comprises the steps of obtaining input operation information, wherein the operation information comprises at least one item of touch operation information, voice operation information and shot image information of a target operation object; determining the target operation object based on the operation information; and presenting a target operation prompt related to the target operation object under the condition that the operation information meets the preset condition. The prompting method can be specifically implemented in the vehicle-mounted terminal, and the embodiment is described by taking the application of the prompting method to the vehicle-mounted terminal as an example.
Embodiments of the present application are described in detail below.
Referring to fig. 1, a flow chart of a prompting method provided by an embodiment of the present application may include the following steps:
s101, acquiring input operation information, wherein the operation information comprises at least one item of touch operation information, voice operation information and shot image information of a target operation object.
The operation objects include the various buttons, icons, check boxes, and other controls displayed on the display interface of the vehicle-mounted terminal system. The operation information can be obtained in several ways. For example, the user may perform a touch operation on the touch display interface of the vehicle-mounted terminal, and the terminal then obtains the user's touch operation information over the CAN bus. The user may issue a voice command to the vehicle-mounted terminal, which collects it through a voice acquisition device such as a microphone and thereby acquires the voice operation information. In addition, the user can photograph the target operation object through a mobile terminal connected to the vehicle-mounted terminal, or take a screenshot of the target operation object on the operation interface, to obtain an image of the target operation object; the vehicle-mounted terminal then acquires the captured image information by collecting and processing that image.
Optionally, the touch operation information includes the touch duration of the touch operation and/or the number of touches of the touch operation and/or the touch track of the touch operation. The user can long-press, click repeatedly, or draw a graphic on the touch display screen of the vehicle-mounted terminal with a finger, and the terminal generates the corresponding touch operation information in response. For example, when the user performs a sustained press on the target operation object, the vehicle-mounted terminal may take the pressing duration as the touch duration of the touch operation; when the user clicks the target operation object repeatedly, the terminal may take the number of consecutive clicks as the number of touches; and when the user draws a graphic on the touch display screen, the terminal records the drawn trajectory as the touch track of the touch operation.
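One way such touch operation information could be derived from a raw event stream; the event tuple format `(t_ms, x, y, phase)` is an assumption of this sketch, not something the patent specifies:

```python
def touch_summary(events):
    """events: chronological (t_ms, x, y, phase) tuples with phase in
    {"down", "move", "up"}. Derives the three kinds of touch operation
    information named in the text."""
    downs = [e for e in events if e[3] == "down"]
    ups = [e for e in events if e[3] == "up"]
    duration = ups[-1][0] - downs[0][0] if downs and ups else 0
    return {
        "press_ms": duration,                          # touch duration
        "tap_count": len(downs),                       # number of touches
        "track": [(x, y) for (_, x, y, _) in events],  # touch track
    }
```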
S102, determining the target operation object based on the operation information.
Optionally, for the touch operation information: when it includes the touch duration and/or the number of touches of the touch operation, the vehicle-mounted terminal may determine the target operation object directly from the touch operation information, namely the operation object the user touched on the touch display screen. When the touch operation information includes a touch track, the vehicle-mounted terminal matches the track against the preset tracks in a track database, obtains the operation object associated with the preset track that matches successfully, and determines that object as the target operation object.
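The track-database matching could, for instance, compare coarse direction signatures. The signature scheme below is purely illustrative — the patent does not say how preset tracks are represented or matched:

```python
def track_signature(track):
    """Reduce a touch track (list of (x, y) points, with y growing downward)
    to a coarse direction string over its segments."""
    sig = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) >= abs(dy):
            sig.append("R" if dx >= 0 else "L")
        else:
            sig.append("D" if dy >= 0 else "U")
    return "".join(sig)

def match_track(track, track_db):
    """track_db: preset track signature -> associated operation object."""
    return track_db.get(track_signature(track))
```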
Optionally, for the captured image information, the vehicle-mounted terminal extracts key information through image recognition, matches it against the preset information in an image database, obtains the operation object associated with the preset information that matches successfully, and determines that object as the target operation object. The key information may include graphic information and/or text information.
Optionally, for the voice operation information, after the vehicle-mounted terminal collects the voice operation information of the user, the target operation object is confirmed by extracting keywords in the voice operation information and matching the keywords.
S103, presenting a target operation prompt related to the target operation object under the condition that the operation information meets the preset condition.
The target operation prompt may include a help video carrying a video identifier; by playing the help video on the display interface of the vehicle-mounted terminal, the user can learn the correct way to use the target operation object or obtain an explanation of its function.
Optionally, the preset condition includes at least one of the following conditions: the touch duration of the touch operation meets a first duration preset condition; the touch times of the touch operation meet a first time number preset condition; the number of times of continuous recognition failure of the input voice operation information meets a second time preset condition; and recognizing that the voice operation information contains preset keywords.
It can be appreciated that the touch duration meeting the first duration preset condition may mean that the duration for which the user presses the target operation object exceeds a first preset duration, which may be, for example, 3 seconds or 5 seconds. The number of touches meeting the first number preset condition may mean that the number of consecutive clicks the user makes on the target operation object exceeds a first preset number, which may be, for example, two or three.
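Those two touch-based preset conditions reduce to simple threshold checks. The numeric values below are the examples the text gives ("3 seconds", "two"), not fixed requirements:

```python
# Example thresholds; the actual values are left to the implementer.
FIRST_PRESET_DURATION_MS = 3000
FIRST_PRESET_COUNT = 2

def touch_preset_condition(press_ms=0, tap_count=0):
    """True when either touch-based preset condition is exceeded."""
    return press_ms > FIRST_PRESET_DURATION_MS or tap_count > FIRST_PRESET_COUNT
```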
When the user wishes to operate the target operation object by voice, the vehicle-mounted terminal may fail to recognize the user's voice operation information several times in a row; at that point it may be considered necessary to present a target operation prompt for the target operation object. The second number preset condition may be that the number of consecutive recognition failures of the input voice operation information exceeds a second preset number.
As for recognizing that the voice operation information contains a preset keyword: the preset keywords may be query words such as "what", as well as specific words of related semantics such as "use", "help", or "explain". When the voice operation information contains a preset keyword, it indicates that the user wants to obtain prompt information for the target operation object directly, to help them understand it. In a specific implementation, preset keywords may also be set according to the user's common phrasings, such as "learn … …", "teach me to use … …", or "demonstrate … …".
The preset condition of recognizing that the voice operation information contains a preset keyword may further include: detecting that the interval between the input of the touch operation information and the input of the voice operation information is smaller than a preset duration threshold, and recognizing that the voice operation information contains a preset keyword. For example, in a common scenario, a user touches a target operation object and then has a question about how to use it; the user can quickly request the related operation prompt by voice. In this case the preset condition is met, and the target operation object can be determined quickly from the touch operation information so as to present the target operation prompt related to it.
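The combined touch-then-voice condition can be sketched as follows; the interval threshold and keyword list are assumed example values, not values given by the text:

```python
PRESET_INTERVAL_MS = 5000  # assumed; the text only calls it a threshold
HELP_KEYWORDS = ("help", "how", "use", "teach", "demonstrate")  # examples

def combined_condition(touch_t_ms, voice_t_ms, utterance):
    """Touch followed closely by a voice query containing a help keyword."""
    within = 0 <= voice_t_ms - touch_t_ms < PRESET_INTERVAL_MS
    has_keyword = any(kw in utterance.lower() for kw in HELP_KEYWORDS)
    return within and has_keyword
```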
It should be noted that the first duration preset condition, the first number preset condition, the second number preset condition, and the preset keywords may all be set by a technician based on actual conditions; the embodiments of the present application place no limitation on them.
Optionally, an association exists between the target operation object and the target operation prompt. After the target operation object is determined based on the operation information, the operation prompt associated with it may be directly determined as the target operation prompt and displayed in a preset area of the vehicle-mounted terminal's display interface.
The preset area lies within the display interface containing the target operation object. The vehicle-mounted terminal may generate a floating window in the preset area according to initial window parameters, and present the target operation prompt through the floating window.
Optionally, in response to a full-screen operation by the user on the floating window, the window may be displayed full screen on the current display interface; or, in response to a zoom operation by the user, the floating window may be proportionally enlarged or reduced within the first interface.
Optionally, after responding to the full-screen operation or responding to the scaling operation, the restoration operation of the floating window by the user may be responded, and the floating window may be restored to a preset size and position.
Optionally, in response to a movement operation of the user on the floating window, a position of the floating window on a current display interface may be adjusted, the floating window is displayed based on an initial transparency parameter before responding to the movement operation, and is displayed based on a preset transparency parameter when the movement operation is performed, where the preset transparency parameter is higher than the initial transparency parameter.
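The transparency behavior during a move operation can be sketched as follows (a minimal illustrative model in Python; the class and attribute names are assumptions, not part of the embodiment, and an alpha of 1.0 denotes a fully opaque window, so the higher preset transparency corresponds to a lower alpha value):

```python
class FloatingWindow:
    """Minimal model of the floating window's transparency while being moved."""

    def __init__(self, initial_alpha=1.0, moving_alpha=0.5):
        # initial_alpha: transparency used while the window is at rest
        # moving_alpha: the higher preset transparency applied during a move
        self.initial_alpha = initial_alpha
        self.moving_alpha = moving_alpha
        self.alpha = initial_alpha
        self.position = (0, 0)

    def begin_move(self):
        # Switch to the preset (more transparent) parameter for the move.
        self.alpha = self.moving_alpha

    def move_to(self, x, y):
        # Adjust the window's position on the current display interface.
        self.position = (x, y)

    def end_move(self):
        # Restore the initial transparency once the move completes.
        self.alpha = self.initial_alpha
```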
Optionally, the target operation prompt includes prompt information of a target operation for the target operation object. After the target operation prompt related to the target operation object is presented, whether a prompt message asking if manual assistance is needed should be presented in the current display interface can be further determined by detecting whether a manual assistance condition is currently met. The manual assistance condition includes at least one of the following conditions: the presentation duration of the target operation prompt meets a second duration preset condition; the number of presentations of the target operation prompt meets a third count preset condition; and the target operation for the target operation object is detected as not completed within a preset time after the target operation prompt is closed.
It should be noted that the second duration preset condition may be that the display time of the target operation prompt in the current display interface exceeds a second preset duration, and the third count preset condition may be that the number of times the target operation prompt is reopened exceeds a third preset count. Both the second preset duration and the third preset count may be set by a technician based on actual conditions, which is not limited in the embodiments of the present application.
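A minimal sketch of the manual-assistance check described above; the function name and the concrete threshold values are illustrative assumptions, since the embodiment leaves the second preset duration and the third preset count to the technician:

```python
def needs_manual_assistance(presented_seconds, open_count,
                            closed_at, operation_done, now,
                            max_seconds=60, max_opens=3, grace_seconds=30):
    """Return True if any manual-assistance condition is met."""
    # Second duration preset condition: prompt displayed for too long.
    if presented_seconds > max_seconds:
        return True
    # Third count preset condition: prompt reopened too many times.
    if open_count > max_opens:
        return True
    # Target operation not completed within the preset time after closing.
    if closed_at is not None and not operation_done \
            and now - closed_at >= grace_seconds:
        return True
    return False
```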
Optionally, the target operation prompt includes prompt information of a target operation for the target operation object. After the target operation prompt related to the target operation object is presented, the vehicle-mounted terminal may further update the learning state of the target operation prompt in response to detecting that the target operation for the target operation object is performed within a preset time. That is, after the target operation prompt is displayed on the display interface, if it is detected that the user performs, within the preset time, the target operation for the target operation object related to the prompt, the learning state of the target operation prompt may be updated to a learned state. In another possible implementation, if the target operation object has multiple executable target operations, the current learning state may be updated correspondingly in the form of a learning progress percentage according to the user's progress in performing those target operations.
For example, a sentinel mode control in a settings interface is determined as the target operation object. Through the operation prompt related to the sentinel mode, the user learns that the sentinel mode can be started by touching the switch button control of the sentinel mode on the settings interface, which is the target operation. Within the preset time after the display of the target operation prompt ends, if the user starts the sentinel mode by touching the switch button, it can be determined that the target operation for the target operation object has been performed, and the operation prompt related to the sentinel mode is updated to the learned state.
After the learning state of a target operation prompt is updated, the learning state data of all target operation prompts in the vehicle-mounted terminal can be counted, so that the learning completion degree of all target operation prompts is calculated based on the learning state data. Based on the learning completion degree, the actual learning and usage of the various in-vehicle functions are displayed to the user, helping the user better understand and learn the rich and diverse functions of the vehicle.
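The learning-completion statistic can be sketched as follows, representing each prompt's learning state as a fraction in [0, 1] (1.0 for a learned prompt, or a progress percentage when an object has several target operations); this representation is an illustrative assumption:

```python
def learning_completion(states):
    """Compute the overall learning completion degree from per-prompt states.

    states maps a prompt name to a float in [0, 1]: 1.0 for a fully learned
    prompt, or a progress fraction for partially learned ones.
    """
    if not states:
        return 0.0
    return sum(states.values()) / len(states)
```

For instance, one learned prompt and one half-learned prompt yield a completion degree of 0.75, which could be shown to the user as 75%.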
In one possible implementation, the target operation prompt includes a help video carrying a video identifier. By playing the help video, the user can quickly and conveniently learn information about the related target operation object, including its functions or usage. When the operation information is voice operation information, the target operation object may first be sought among the operation objects in the current display interface, which narrows the search range; in actual use, a user usually develops a usage question about an operation object in an interface only after opening that interface, at which point a related operation prompt needs to be provided. The following further describes, with reference to FIGS. 2-8, a method for quickly determining the target operation object in the case where the operation information is voice operation information and the target operation prompt includes a help video carrying a video identifier.
Referring to FIG. 2, a flowchart of a method for determining a target operation object according to an embodiment of the present application includes the following steps:
S201, acquire keywords contained in the voice operation information.
After the vehicle-mounted terminal acquires the input voice operation information, the keywords it contains can be determined through voice recognition. For example, when the user says "what is sentinel mode", the vehicle-mounted terminal may extract keywords such as "sentinel" and "mode" from the voice input, and further determine the target operation object based on these keywords.
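Assuming the speech recognizer has already produced a transcript, the keyword extraction step might be sketched as a simple vocabulary lookup (the vocabulary and function name are illustrative assumptions; a production system could use a proper tokenizer):

```python
def extract_keywords(recognized_text, vocabulary):
    """Pick known keywords out of a speech-recognition transcript.

    A real system would run ASR first; here the transcript is given, and
    the vocabulary of candidate keywords is assumed for illustration.
    """
    text = recognized_text.lower()
    return [kw for kw in vocabulary if kw.lower() in text]
```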
S202, obtain a first group of preset operation objects contained in the currently displayed first display interface, and a first group of video identifiers carried by a first group of help videos associated with the first group of preset operation objects.
The first display interface is the interface presented on the display device of the vehicle-mounted terminal at the moment the terminal acquires the user's voice operation information. A preset operation object is an operation object existing in the first display interface, which the user observes in the form of a user interface control. Most user interface controls can execute a function, or trigger code to run through an event and complete a response; that is, the user can execute the related function by clicking or otherwise operating the control. In addition, a help video associated with each preset operation object is stored in the vehicle-mounted terminal.
It may be appreciated that the first display interface may contain a plurality of preset operation objects. All preset operation objects in the first display interface form the first group of preset operation objects, and all help videos associated with them form the first group of help videos. Different help videos carry different video identifiers, and one help video may carry one or more video identifiers. The first group of video identifiers includes the video identifiers carried by each help video in the first group of help videos. The first group of preset operation objects and the first group of help videos may have a one-to-one correspondence.
For example, referring to FIG. 3, FIG. 3 shows the interface currently presented by the display device, that is, the first display interface. In this interface, the first group of preset operation objects includes a sentinel mode control 305. A help video associated with the sentinel mode control 305 is stored in the vehicle-mounted terminal; when the user has a question about the sentinel mode, the help video can be presented to the user to explain the related functions and how to operate them. The help video carries corresponding video identifiers, which may be "sentinel", "sentinel mode", and the like. In addition to the sentinel mode control 305, controls 301, 302, 303, and 304 in FIG. 3 may also be included as preset operation objects in the first group of preset operation objects. It should be noted that all operable or inoperable display controls in the display interface may serve as preset operation objects; the specific choice of preset operation objects is set by the designer based on actual conditions and is not limited here.
S203, match the keywords contained in the voice operation information with the first group of video identifiers respectively.
The keywords contained in the voice operation information are matched against the video identifiers carried by the different help videos in the first group of help videos, a matching degree between each keyword and each video identifier is obtained, and it is determined whether the highest matching degree exceeds a preset threshold. When the matching degree exceeds the preset threshold, the keyword is successfully matched with the video identifier; if it does not, the keyword fails to match the first group of video identifiers. The preset threshold is set by the designer based on actual conditions, and the embodiments of the present application do not limit it in any way.
For example, the first group of help videos includes a first help video carrying a first video identifier. When the matching degree between the keyword and the first video identifier is the highest and exceeds the preset threshold, it is determined that the keyword is successfully matched with the first video identifier in the first group of video identifiers. When the matching degree between the keyword and the first video identifier is the highest but does not exceed the preset threshold, it is determined that the keyword fails to match the first video identifier, and that no other video identifier in the first group can be successfully matched with the keyword either.
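One way to sketch the matching-degree computation with a preset threshold is through a string-similarity measure; `difflib.SequenceMatcher` is an illustrative choice here, as the embodiment does not mandate any particular measure or threshold value:

```python
from difflib import SequenceMatcher

def best_video_match(keyword, video_ids, threshold=0.6):
    """Match a keyword against a group of video identifiers.

    Returns the identifier with the highest matching degree if that degree
    exceeds the preset threshold, otherwise None (matching failed for the
    whole group).
    """
    best_id, best_score = None, 0.0
    for vid in video_ids:
        # Matching degree between the keyword and this video identifier.
        score = SequenceMatcher(None, keyword.lower(), vid.lower()).ratio()
        if score > best_score:
            best_id, best_score = vid, score
    return best_id if best_score > threshold else None
```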
S204, in a case where the keyword is successfully matched with a first video identifier in the first group of video identifiers, determine a first preset operation object associated with the help video carrying the first video identifier as the target operation object.
For example, referring to FIG. 4, the first group of preset operation objects in the currently displayed first display interface includes a sentinel mode control 305. When the user says "what is sentinel mode", the vehicle-mounted terminal may extract the keyword "sentinel" from the voice operation information. The help video associated with the sentinel mode control 305 carries video identifiers such as "sentinel" and "sentinel mode", so the matching degree between the keyword and those identifiers is the highest and exceeds the preset threshold, and the match is determined to be successful. Here, the help video associated with the sentinel mode control 305 is the first help video, and the video identifier it carries is the first video identifier. After the successful match, the sentinel mode control 305 can be determined as the target operation object.
S205, determine the help video carrying the first video identifier as the target operation prompt.
Following the above example, since the video identifier carried by the help video associated with the sentinel mode control 305 is successfully matched with the keyword, the help video associated with the sentinel mode control 305 may be determined as the target operation prompt.
S206, display the target operation prompt in a preset area of the first display interface.
The preset area, as shown in FIG. 5, may be the lower right corner of the current display interface; the size of the initial area is set by the designer based on actual conditions and is not limited here. A floating window is generated in the preset area, and the help video carrying the first video identifier is played in the floating window. The help video may include a functional introduction to the sentinel mode, its usage, and the like.
Optionally, the floating window may respond to related user operations, such as full-screen display, zooming, and drag-to-move.
Referring to FIG. 6, suppose the main page currently displayed in FIG. 6 is the first display interface. Since the first group of preset operation objects in this interface does not include the sentinel mode control, when the user inputs the voice operation information "what is sentinel mode" and it is processed through steps S201-S203, the target operation object and the target operation prompt cannot be determined in the currently displayed first display interface. That is, if the keyword is not successfully matched with a first video identifier in the first group of video identifiers, the target operation object and the target operation information cannot be determined in the first display interface.
In the case where the keyword is not successfully matched with a first video identifier in the first group of video identifiers, the keyword can be matched against the video identifiers carried by the other help videos stored in the vehicle-mounted terminal to find a video identifier that can be successfully matched, so that the target operation object and the target operation prompt can be determined. An embodiment for this case is explained below.
Referring to fig. 7, a flowchart of another method for determining a target operation object according to an embodiment of the present application includes the following steps:
S301, obtain a second group of preset operation objects that are not included in the first display interface, and a second group of video identifiers carried by a second group of help videos associated with the second group of preset operation objects.
The second group of preset operation objects are the preset operation objects other than the first group of preset operation objects; they do not exist in the first display interface currently shown on the display device.
S302, match the keywords contained in the voice operation information with the second group of video identifiers.
S303, in a case where the keyword is successfully matched with a second video identifier in the second group of video identifiers, determine a second preset operation object associated with the help video carrying the second video identifier as the target operation object.
S304, determine the help video carrying the second video identifier as the target operation prompt.
S305, switch from the first display interface to a second display interface, so as to display the target operation prompt in a preset area of the second display interface.
The second preset operation object associated with the help video carrying the second video identifier is contained in the second display interface, which is not currently displayed.
It should be noted that, with the currently displayed main page shown in FIG. 6, the sentinel mode control is confirmed through steps S301-S305 to be the target operation object and to exist in the second display interface. As shown in FIG. 8, the interface in FIG. 8 is the second display interface containing the sentinel mode control 801. The currently displayed interface may therefore be switched from the first display interface to the second display interface, that is, from the first display interface in FIG. 6 to the second display interface in FIG. 8, and the target operation prompt is displayed in the preset area of the second display interface. The preset area may be the lower right corner of the current display interface; a floating window may be generated in the preset area according to initial window parameters, and the target operation information is displayed in the floating window.
When multiple display interfaces contain a sentinel mode control, the designer may preset one of them as the designated second display interface based on actual conditions, and the terminal switches to that designated interface when the switch is needed. In another possible implementation, by acquiring operation data of other users across the multiple display interfaces containing the sentinel mode control, the interface in which the help prompt related to the sentinel mode control is used most frequently can be determined and used as the second display interface.
For the content related to the floating window's response to operations, refer to the descriptions of the floating window in the foregoing method or in other embodiments, which are not repeated herein. By automatically switching the display interface, the specific location of the target operation object the user wants to learn about can be located rapidly, making it convenient for the user to perform related operations on the target operation object after the target operation prompt is displayed.
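Putting steps S201-S203 and S301-S305 together, the first-group lookup with a second-group fallback and interface switch can be sketched as follows (the data layout, function names, and the exact-substring matcher in the test are illustrative assumptions):

```python
def resolve_target(keyword, interfaces, current, matcher):
    """Resolve the target operation object across display interfaces.

    interfaces maps an interface name to a dict of
    {preset operation object: video identifier}; matcher(keyword, video_ids)
    returns the matched identifier or None. Returns (object, interface) —
    if the interface differs from `current`, a switch is required.
    """
    # First group: objects on the currently displayed interface (S201-S203).
    first = interfaces[current]
    vid = matcher(keyword, list(first.values()))
    if vid is not None:
        obj = next(o for o, v in first.items() if v == vid)
        return obj, current  # no interface switch needed
    # Second group: objects on interfaces not currently displayed (S301-S305).
    for name, objs in interfaces.items():
        if name == current:
            continue
        vid = matcher(keyword, list(objs.values()))
        if vid is not None:
            obj = next(o for o, v in objs.items() if v == vid)
            return obj, name  # switch from `current` to `name`
    return None, current
```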
Referring to FIG. 9, a schematic diagram of a prompting device provided in an embodiment of the present application, a prompting device 900 may include:
a receiving device 910 configured to acquire input operation information, the operation information including at least one of touch operation information, voice operation information, and photographed image information about a target operation object;
a processor 920 configured to determine the target operation object based on the operation information provided by the receiving device, and to acquire a target operation prompt regarding the target operation object; and
a display device 930 configured to present the target operation prompt, provided by the processor with respect to the target operation object, in a case where the operation information provided by the receiving device satisfies a preset condition.
The receiving device 910 is configured to acquire the operation information input by the user through the vehicle-mounted terminal; in the embodiments of the present application, it includes but is not limited to a microphone and the like. The display device 930 is configured to present the operation prompt to the user; in the embodiments of the present application, it may be a vehicle-mounted touch display screen.
In one possible implementation, the target operational prompt includes a help video with a video identification.
In one possible implementation, the processor 920 is further configured to:
Acquiring keywords contained in the voice operation information;
acquiring a first group of preset operation objects contained in a first display interface which is currently displayed and a first group of video identifications carried by a first group of help videos associated with the first group of preset operation objects;
matching keywords contained in the voice operation information with the first group of video identifications respectively; and
and under the condition that the keyword is successfully matched with a first video identifier in the first group of video identifiers, determining a first preset operation object associated with the help video carrying the first video identifier as the target operation object.
In a possible implementation manner, the processor 920 is further configured to determine that the help video carrying the first video identifier is the target operation prompt, and control the display device 930 to display the target operation prompt in a preset area in the first display interface.
In one possible implementation, the processor 920 is further configured to:
acquiring a second group of preset operation objects which are not included in the first display interface and a second group of video identifications carried by a second group of help videos associated with the second group of preset operation objects;
Matching keywords contained in the voice operation information with the second group of video identifications; and
and under the condition that the keyword is successfully matched with a second video identifier in the second group of video identifiers, determining a second preset operation object associated with the help video carrying the second video identifier as the target operation object.
In a possible implementation, the processor 920 is further configured to determine that a help video carrying the second video identifier is the target operation prompt; the second preset operation object associated with the help video carrying the second video identifier is included in a second display interface that is not currently displayed, and the display device 930 is controlled to switch from the first display interface to the second display interface, so that the target operation prompt is displayed in a preset area in the second display interface.
In one possible implementation, the preset condition includes at least one of the following conditions:
the touch duration of the touch operation meets a first duration preset condition;
the number of touches of the touch operation meets a first count preset condition;
the number of consecutive recognition failures of the input voice operation information meets a second count preset condition; and
the voice operation information is recognized as containing a preset keyword.
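The preset-condition check above can be sketched as a single predicate; the threshold values are illustrative assumptions, since the embodiment leaves them to the technician:

```python
def meets_preset_condition(touch_seconds=0.0, touch_count=0,
                           failed_recognitions=0, has_keyword=False,
                           max_touch_seconds=3.0, max_touch_count=3,
                           max_failures=2):
    """Return True if the operation information meets any preset condition."""
    return (touch_seconds >= max_touch_seconds       # first duration condition
            or touch_count >= max_touch_count        # first count condition
            or failed_recognitions >= max_failures   # second count condition
            or has_keyword)                          # preset keyword recognized
```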
In one possible implementation, the target operation prompt includes prompt information of a target operation for the target operation object, and after the target operation prompt related to the target operation object is presented, the processor 920 is further configured to:
present, in the current display interface of the display device 930, a prompt message asking whether manual assistance is needed, in a case where the manual assistance condition is detected to be met;
wherein the manual assistance condition includes at least one of the following conditions:
the presentation duration of the target operation prompt meets a second duration preset condition;
the number of presentations of the target operation prompt meets a third count preset condition; and
the target operation for the target operation object is detected as not completed within a preset time after the target operation prompt is closed.
In another possible implementation, the target operation prompt includes prompt information of a target operation for the target operation object, and after the target operation prompt related to the target operation object is presented, the processor 920 is further configured to:
update the learning state of the target operation prompt in response to detecting that the target operation for the target operation object is performed within a preset time; and
count the learning state data of all target operation prompts, so that the learning completion degree of all target operation prompts is calculated based on the learning state data.
For the concepts involved in the technical solutions provided by the embodiments of the present application, their explanations and detailed descriptions, and the other steps, refer to the foregoing methods or to the descriptions of the method steps performed by the device in other embodiments, which are not repeated herein.
Referring to FIG. 10, a schematic diagram of another prompting device according to an embodiment of the present application; the device may include:
a processor 1010, a memory 1020, and a communication interface 1030. The processor 1010, the memory 1020, and the communication interface 1030 are connected by a bus 1040; the memory 1020 is configured to store instructions, and the processor 1010 is configured to execute the instructions stored in the memory 1020 to implement the method steps corresponding to FIGS. 1-2 and FIG. 7.
The processor 1010 executes the instructions stored in the memory 1020 to control the communication interface 1030 to receive and transmit signals, thereby completing the steps of the above method. The memory 1020 may be integrated into the processor 1010 or provided separately from it.
In one possible implementation, the functions of the communication interface 1030 may be considered to be implemented by a transceiver circuit or a dedicated chip for transceiving. The processor 1010 may be considered to be implemented by a dedicated processing chip, a processing circuit, a processor, or a general-purpose chip.
In another possible implementation manner, a manner of using a general purpose computer may be considered to implement the apparatus provided by the embodiments of the present application. I.e. program code implementing the functions of the processor 1010, the communication interface 1030, is stored in the memory 1020. The general purpose processor implements the functions of the processor 1010, the communication interface 1030, by executing code in the memory 1020.
For the concepts involved in the technical solutions provided by the embodiments of the present application, their explanations and detailed descriptions, and the other steps, refer to the foregoing methods or to the descriptions of the method steps performed by the device in other embodiments, which are not repeated herein.
As another implementation of this embodiment, a computer-readable storage medium is provided for storing a computer program. The computer-readable storage medium stores instructions that, when run on a computer, cause the method in the above method embodiments to be executed.
As another implementation of this embodiment, a computer program product containing instructions is provided; when the instructions are executed, the method in the above method embodiments is executed.
Those skilled in the art will appreciate that only one memory and processor is shown in fig. 10 for ease of illustration. In an actual terminal or server, there may be multiple processors and memories. The memory may also be referred to as a storage medium or storage device, etc., and embodiments of the present application are not limited in this respect.
It should be appreciated that in embodiments of the present application, the processor may be a central processing unit (Central Processing Unit, CPU for short), other general purpose processor, digital signal processor (Digital Signal Processing, DSP for short), application specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), off-the-shelf programmable gate array (Field-Programmable Gate Array, FPGA for short) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like.
It should also be understood that the memory referred to in embodiments of the present application may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable ROM (Electrically EPROM, EEPROM), or a flash Memory. The volatile memory may be a random access memory (Random Access Memory, RAM for short) which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (Double Data Rate SDRAM), enhanced SDRAM (ESDRAM), synchronous DRAM (SLDRAM), and direct memory bus RAM (Direct Rambus RAM, DR RAM).
Note that when the processor is a general-purpose processor, DSP, ASIC, FPGA or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, the memory (storage module) is integrated into the processor.
It should be noted that the memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The bus may include a power bus, a control bus, a status signal bus, and the like in addition to the data bus. However, for clarity of illustration, the various buses are all labeled as the bus in the figures.
It should also be understood that the terms "first", "second", "third", "fourth", and the various numerals referred to herein are merely for convenience of description and are not intended to limit the scope of the application.
It should be understood that the term "and/or" is merely an association relationship describing the associated object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method. To avoid repetition, a detailed description is not provided herein.
In various embodiments of the present application, the sequence number of each process does not mean the sequence of execution, and the execution sequence of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative logical blocks and steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware or in combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of the modules is merely a logical function division, and there may be other divisions in actual implementation; for instance, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed between components may be an indirect coupling or communication connection via some interfaces, devices, or modules, and may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk).
The foregoing is merely a specific embodiment of the present application, and the present application is not limited thereto; any variation or substitution that a person skilled in the art can readily conceive shall fall within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A method of prompting, the method comprising the steps of:
acquiring input operation information, wherein the operation information comprises voice operation information of a target operation object;
determining the target operation object based on the operation information, comprising: acquiring keywords contained in the voice operation information; acquiring a first group of preset operation objects contained in a currently displayed first display interface and a first group of video identifiers carried by a first group of help videos associated with the first group of preset operation objects, wherein the first display interface contains a plurality of preset operation objects, all preset operation objects in the first display interface constitute the first group of preset operation objects, and all help videos associated with those preset operation objects constitute the first group of help videos; matching the keywords contained in the voice operation information respectively with the first group of video identifiers; in a case where a keyword is successfully matched with a first video identifier in the first group of video identifiers, determining a first preset operation object associated with the help video carrying the first video identifier as the target operation object; in a case where the keyword is not successfully matched with any first video identifier in the first group of video identifiers, acquiring a second group of preset operation objects not contained in the first display interface and a second group of video identifiers carried by a second group of help videos associated with the second group of preset operation objects, wherein the second group of preset operation objects are preset operation objects other than the first group of preset operation objects, and the second group of preset operation objects do not exist in the first display interface currently displayed on the display device; matching the keywords contained in the voice operation information with the second group of video identifiers; and in a case where a keyword is successfully matched with a second video identifier in the second group of video identifiers, determining a second preset operation object associated with the help video carrying the second video identifier as the target operation object; and
presenting a target operation prompt related to the target operation object in a case where the operation information satisfies a preset condition.
2. The method of claim 1, wherein the target operation prompt comprises a help video carrying a video identifier.
3. The method of claim 1, wherein presenting the target operation prompt related to the target operation object comprises the steps of:
in a case where the keyword is successfully matched with a first video identifier in the first group of video identifiers, determining the help video carrying the first video identifier as the target operation prompt; and
displaying the target operation prompt in a preset area of the first display interface.
4. The method of claim 1, wherein presenting the target operation prompt related to the target operation object comprises the steps of:
in a case where the keyword is successfully matched with a second video identifier in the second group of video identifiers, determining the help video carrying the second video identifier as the target operation prompt, wherein the second preset operation object associated with the help video carrying the second video identifier is contained in a second display interface that is not currently displayed; and
switching from the first display interface to the second display interface, so as to display the target operation prompt in a preset area of the second display interface.
5. The method of claim 1, wherein the preset condition comprises at least one of:
the number of consecutive recognition failures of the input voice operation information satisfies a second count preset condition; and
the voice operation information is recognized to contain a preset keyword.
6. The method according to any one of claims 1-5, wherein the target operation prompt comprises prompt information for a target operation of the target operation object, and the method further comprises, after presenting the target operation prompt related to the target operation object, the steps of:
in a case where a manual-assistance condition is detected to be satisfied, presenting, in the current display interface, a prompt message asking whether manual assistance is needed;
wherein the manual-assistance condition comprises at least one of the following:
the presentation duration of the target operation prompt satisfies a second duration preset condition;
the number of presentations of the target operation prompt satisfies a third count preset condition; and
after the target operation prompt is closed, it is detected that the target operation for the target operation object is not completed within a preset time.
7. The method according to any one of claims 1-5, wherein the target operation prompt comprises prompt information for a target operation of the target operation object, and the method further comprises, after presenting the target operation prompt related to the target operation object:
in response to detecting that the target operation for the target operation object is performed within a preset time, updating a learning state of the target operation prompt.
8. The method of claim 7, wherein after updating the learning state of the target operation prompt, the method further comprises the steps of:
collecting statistics on the learning state data of all target operation prompts, so that the learning completion degree of each target operation prompt is calculated based on the learning state data.
9. A reminder device, the device comprising:
a receiving device configured to acquire input operation information including voice operation information on a target operation object;
a processor configured to determine the target operation object based on the operation information provided by the receiving device and to acquire a target operation prompt related to the target operation object, including: acquiring keywords contained in the voice operation information; acquiring a first group of preset operation objects contained in a currently displayed first display interface and a first group of video identifiers carried by a first group of help videos associated with the first group of preset operation objects, wherein the first display interface contains a plurality of preset operation objects, all preset operation objects in the first display interface constitute the first group of preset operation objects, and all help videos associated with those preset operation objects constitute the first group of help videos; matching the keywords contained in the voice operation information respectively with the first group of video identifiers; in a case where a keyword is successfully matched with a first video identifier in the first group of video identifiers, determining a first preset operation object associated with the help video carrying the first video identifier as the target operation object; in a case where the keyword is not successfully matched with any first video identifier in the first group of video identifiers, acquiring a second group of preset operation objects not contained in the first display interface and a second group of video identifiers carried by a second group of help videos associated with the second group of preset operation objects, wherein the second group of preset operation objects are preset operation objects other than the first group of preset operation objects, and the second group of preset operation objects do not exist in the first display interface currently displayed on the display device; matching the keywords contained in the voice operation information with the second group of video identifiers; and in a case where a keyword is successfully matched with a second video identifier in the second group of video identifiers, determining a second preset operation object associated with the help video carrying the second video identifier as the target operation object; and
a display device configured to present the target operation prompt, provided by the processor, related to the target operation object in a case where the operation information provided by the receiving device satisfies a preset condition.
10. An electronic device comprising a processor, a memory and a bus, the processor and the memory being connected by the bus, wherein the memory is configured to store a set of program code, the processor being configured to invoke the program code stored in the memory to perform the method of any of claims 1-8.
11. A computer storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-8.
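The two-stage matching recited in claims 1 and 9 — try the video identifiers of the help videos for objects on the currently displayed interface first, and fall back to objects on other interfaces only when that fails — can be sketched as follows. This is a minimal illustration under assumptions, not the patented implementation: the group contents, the names `FIRST_GROUP`, `SECOND_GROUP`, and `find_target`, and the substring matching rule are all hypothetical stand-ins for whatever matching the actual system uses.

```python
from typing import Optional

# Hypothetical data model: each preset operation object is associated with
# one help video, and each help video carries a video identifier.
FIRST_GROUP = {          # objects contained in the currently displayed interface
    "bluetooth": "vid_bluetooth_pairing",
    "navigation": "vid_navigation_setup",
}
SECOND_GROUP = {         # preset objects NOT on the current interface
    "carplay": "vid_carplay_connect",
    "air_conditioner": "vid_ac_control",
}

def find_target(keyword: str) -> Optional[tuple[str, str]]:
    """Return (target operation object, matched video identifier), or None.

    Stage 1: match the keyword against the video identifiers of the first
    group (objects on the current display interface).
    Stage 2: only if stage 1 fails, match against the second group.
    """
    for group in (FIRST_GROUP, SECOND_GROUP):
        for obj, video_id in group.items():
            if keyword in video_id:   # simplistic stand-in matching rule
                return obj, video_id
    return None

print(find_target("carplay"))   # falls through to the second group
```

The fallback order mirrors the claim: the second group is consulted only after every identifier in the first group has failed to match, which is what lets the device switch to a second display interface (claim 4) when the match comes from the second group.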
CN202110296429.XA 2020-12-31 2021-03-19 Prompting method, prompting device and computer storage medium Active CN114764363B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011637673X 2020-12-31
CN202011637673 2020-12-31

Publications (2)

Publication Number Publication Date
CN114764363A CN114764363A (en) 2022-07-19
CN114764363B true CN114764363B (en) 2023-11-24

Family

ID=82135578

Family Applications (4)

Application Number Title Priority Date Filing Date
CN202110117883.4A Active CN114690992B (en) 2020-12-31 2021-01-28 Prompting method, prompting device and computer storage medium
CN202110123503.8A Pending CN114691261A (en) 2020-12-31 2021-01-28 Prompting method, prompting device, electronic equipment and computer storage medium
CN202310756068.1A Pending CN116680033A (en) 2020-12-31 2021-01-28 Prompting method, prompting device and computer storage medium
CN202110296429.XA Active CN114764363B (en) 2020-12-31 2021-03-19 Prompting method, prompting device and computer storage medium

Family Applications Before (3)

Application Number Title Priority Date Filing Date
CN202110117883.4A Active CN114690992B (en) 2020-12-31 2021-01-28 Prompting method, prompting device and computer storage medium
CN202110123503.8A Pending CN114691261A (en) 2020-12-31 2021-01-28 Prompting method, prompting device, electronic equipment and computer storage medium
CN202310756068.1A Pending CN116680033A (en) 2020-12-31 2021-01-28 Prompting method, prompting device and computer storage medium

Country Status (1)

Country Link
CN (4) CN114690992B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116185190A (en) * 2023-02-09 2023-05-30 江苏泽景汽车电子股份有限公司 Information display control method and device and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108037885A (en) * 2017-11-27 2018-05-15 维沃移动通信有限公司 A kind of operation indicating method and mobile terminal
CN109656512A (en) * 2018-12-20 2019-04-19 Oppo广东移动通信有限公司 Exchange method, device, storage medium and terminal based on voice assistant
CN110072140A (en) * 2019-03-22 2019-07-30 厦门理工学院 A kind of video information reminding method, device, equipment and storage medium
CN111506245A (en) * 2020-04-27 2020-08-07 北京小米松果电子有限公司 Terminal control method and device
CN111580724A (en) * 2020-06-28 2020-08-25 腾讯科技(深圳)有限公司 Information interaction method, equipment and storage medium
CN112017646A (en) * 2020-08-21 2020-12-01 博泰车联网(南京)有限公司 Voice processing method and device and computer storage medium
CN112148408A (en) * 2020-09-27 2020-12-29 深圳壹账通智能科技有限公司 Barrier-free mode implementation method and device based on image processing and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101159703A (en) * 2007-10-09 2008-04-09 施侃晟 Bidirectional interdynamic search method for generating instant communication effect
TWI438675B (en) * 2010-04-30 2014-05-21 Ibm Method, device and computer program product for providing a context-aware help content
CN106020597A (en) * 2016-05-12 2016-10-12 北京金山安全软件有限公司 Method and device for displaying information and electronic equipment
CN107426426A (en) * 2017-07-26 2017-12-01 维沃移动通信有限公司 A kind of reminding method and mobile terminal missed of sending a telegram here
CN110544473B (en) * 2018-05-28 2022-11-08 百度在线网络技术(北京)有限公司 Voice interaction method and device
CN111664861B (en) * 2020-06-02 2023-02-28 阿波罗智联(北京)科技有限公司 Navigation prompting method, device, equipment and readable storage medium


Also Published As

Publication number Publication date
CN114764363A (en) 2022-07-19
CN114690992A (en) 2022-07-01
CN116680033A (en) 2023-09-01
CN114691261A (en) 2022-07-01
CN114690992B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
US11127398B2 (en) Method for voice controlling, terminal device, cloud server and system
US9817798B2 (en) Method for displaying internet page and mobile terminal using the same
RU2632160C2 (en) Method, device and terminal for displaying application messages
US9294611B2 (en) Mobile terminal, electronic system and method of transmitting and receiving data using the same
EP3285185A1 (en) Information retrieval method and apparatus, electronic device and server, computer program and recording medium
CN109165292A (en) Data processing method, device and mobile terminal
CN109032491A (en) Data processing method, device and mobile terminal
EP3644177A1 (en) Input method, device, apparatus, and storage medium
US9336242B2 (en) Mobile terminal and displaying method thereof
CN114764363B (en) Prompting method, prompting device and computer storage medium
CN104063424B (en) Web page picture shows method and demonstration device
CN111857466B (en) Message display method and device
CN105487746A (en) Search result displaying method and device
CN105630948A (en) Web page display method and apparatus
EP4220364A1 (en) Method for presenting interface information, and electronic device
CN108182020A (en) screen display processing method, device and storage medium
US10482151B2 (en) Method for providing alternative service and electronic device thereof
CN111400729B (en) Control method and electronic equipment
CN106844717A (en) Webpage search display methods and device
CN114090738A (en) Method, device and equipment for determining scene data information and storage medium
CN104239244B (en) The method and apparatus that data to be visited are carried out with display management
CN110851624A (en) Information query method and related device
KR101292050B1 (en) Mobile terminal and method of controlling operation thereof
CN107728909B (en) Information processing method and device
CN110020244B (en) Method and device for correcting website information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant