CN114764363A - Prompting method, prompting device and computer storage medium - Google Patents

Prompting method, prompting device and computer storage medium

Info

Publication number
CN114764363A
CN114764363A
Authority
CN
China
Prior art keywords
target operation
video
preset
target
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110296429.XA
Other languages
Chinese (zh)
Other versions
CN114764363B
Inventor
时红仁
应臻恺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qwik Smart Technology Co Ltd
Original Assignee
Shanghai Qwik Smart Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Qwik Smart Technology Co Ltd
Publication of CN114764363A
Application granted
Publication of CN114764363B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces
    • G06F9/453 Help systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23 Updating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Abstract

The embodiment of the application discloses a prompting method, a prompting device and a computer storage medium. The prompting method comprises the following steps: acquiring input operation information, wherein the operation information comprises at least one of touch operation information, voice operation information and shot image information of a target operation object; determining the target operation object based on the operation information; and presenting a target operation prompt about the target operation object when the operation information satisfies a preset condition. According to the prompting method, the prompting device and the computer storage medium, when the input operation information is determined to satisfy the preset condition, a prompt message for operating the operation object is presented automatically, so that the user can quickly obtain operation instructions for the operation object; the operation is simple and convenient, and the user experience is improved.

Description

Prompting method, prompting device and computer storage medium
The present application claims priority to Chinese patent application No. CN202011637673.X, entitled "Prompting method, apparatus and computer storage medium", filed on December 31, 2020, which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a method and an apparatus for prompting and a computer storage medium.
Background
With the rapid development of terminal technology, the functions of vehicle-mounted systems have become increasingly diverse, and the number of function controls configured in such systems has grown accordingly. When using a vehicle-mounted system, a user who is confused about how to operate it, or who gets no response from it, often has to search online or consult the system's help documentation. This is not only cumbersome, but it is also often difficult to find the help actually needed, which seriously degrades the user experience.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
An object of the present application is to provide a prompting method, apparatus and computer storage medium which, upon acquiring input operation information and determining that the user wishes to obtain a help prompt for an operation object, automatically present the prompt content for operating that object, so that the user quickly obtains the operation prompt; the operation is simple and convenient, and the user experience is improved.
Another object of the present invention is to provide a prompting method, apparatus and computer storage medium, which are advantageous in that operation information can be input in a plurality of different ways.
Another object of the present invention is to provide a prompting method, apparatus and computer storage medium, which are advantageous in that operation prompts for an operation object can be presented in the form of help videos.
Another object of the present invention is to provide a prompting method, apparatus and computer storage medium which narrow the range within which the target operation object is determined: by preferentially matching the target operation object among the operation objects contained in the current display interface, the target operation object can be determined and the corresponding operation prompt obtained more quickly, and once the target operation object is determined in the current display interface, the relevant operation prompt can be presented directly on that interface.
Another object of the present invention is to provide a prompting method, an apparatus and a computer storage medium, which are advantageous in that after a target operation object is determined, a display interface containing the target operation object may be switched to and a corresponding operation prompt may be presented, so that a user may directly perform a relevant operation on the target operation object after obtaining the operation prompt.
Another object of the present invention is to provide a prompting method, apparatus and computer storage medium which can count a user's learning and use of all the operation prompts, helping the user understand how well each operation object has been learned.
In order to achieve the above object, in a first aspect, an embodiment of the present application provides a prompting method, including:
acquiring input operation information, wherein the operation information comprises at least one of touch operation information, voice operation information and shot image information of a target operation object;
determining the target operation object based on the operation information; and
presenting a target operation prompt related to the target operation object when the operation information satisfies a preset condition.
Therefore, the target operation object can be determined from the acquired input operation information, and the operation prompt for the target operation object is presented, helping the user quickly obtain the target operation prompt. The operation information can be input in several different ways, giving the user a variety of choices.
In some possible implementations, the target operation prompt includes a help video with a video identification.
It can be seen that presenting the operation prompt in the form of a help video helps the user grasp the specific content of the operation prompt more intuitively and quickly.
In some possible implementations, the determining the target operation object based on the operation information includes:
acquiring a keyword contained in the voice operation information;
acquiring a first group of preset operation objects contained in a currently displayed first display interface and a first group of video identifications carried by a first group of help videos associated with the first group of preset operation objects;
matching keywords contained in the voice operation information with the first group of video identifications respectively; and
under the condition that the keyword is successfully matched with a first video identifier in the first group of video identifiers, determining a first preset operation object associated with the help video carrying the first video identifier as the target operation object.
It can be seen that, by narrowing down the range of determining the target operation object, it is possible to preferentially determine whether the target operation object exists in the operation objects included in the currently displayed first display interface.
In some possible implementations, the presenting a target operation hint regarding the target operation object includes:
determining a help video carrying the first video identifier as the target operation prompt; and
displaying the target operation prompt in a preset area in the first display interface.
In some possible implementations, in a case that the keyword is not successfully matched with a first video identifier in the first group of video identifiers, the method further includes:
acquiring a second group of preset operation objects which are not contained in the first display interface and a second group of video identifiers carried by a second group of help videos associated with the second group of preset operation objects;
matching the keywords contained in the voice operation information with the second group of video identifiers; and
under the condition that the keyword is successfully matched with a second video identifier in the second group of video identifiers, determining a second preset operation object associated with the help video carrying the second video identifier as the target operation object.
In some possible implementations, the presenting a target operation hint regarding the target operation object includes:
determining that the help video carrying the second video identifier is the target operation prompt; the second preset operation object associated with the help video carrying the second video identifier is contained in a second display interface which is not displayed currently; and
switching from the first display interface to the second display interface, so as to display the target operation prompt in a preset area in the second display interface.
It can be seen that, when a target operation object exists in an interface which is not currently displayed, the current display interface can be switched to the display interface containing the target operation object, and a corresponding operation prompt is presented, so that a user can execute relevant operations on the target operation object in time after obtaining the operation prompt.
In some possible implementations, the target operation hint includes hint information for a target operation of the target operation object, and after the target operation hint regarding the target operation object is presented, the method further includes the steps of:
updating a learning state of the target operation prompt in response to detecting that the target operation for the target operation object is performed within a preset time; and
counting the learning state data of all the target operation prompts, so as to calculate a learning completion degree of all the target operation prompts based on the learning state data.
It can be seen that counting the user's learning and use of all the operation prompts makes it easy to know how well the user has learned each operation object.
In a second aspect, an embodiment of the present application provides a prompting device, including:
a receiving device configured to acquire input operation information including at least one of touch operation information, voice operation information, and captured image information on a target operation object;
a processor configured to determine the target operation object based on the operation information provided by the receiving device, and obtain a target operation prompt related to the target operation object; and
a display device configured to present the target operation prompt provided by the processor and related to the target operation object if the operation information provided by the receiving device satisfies a preset condition.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a bus, where the processor and the memory are connected through the bus, where the memory is configured to store a set of program codes, and the processor is configured to call the program codes stored in the memory to perform the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer storage medium having instructions stored therein, which when executed on a computer implement the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps as described in the first aspect of embodiments of the present application. The computer program product may be a software installation package.
According to the method and the device, after the operation information input by the user is acquired, the target operation object is determined from the operation information, and the target operation prompt related to the target operation object is acquired and displayed on the display device, helping the user understand the relevant content and the method of use and improving the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a prompting method provided in an embodiment of the present application;
Fig. 2 is a flowchart illustrating a method for determining a target operation object according to an embodiment of the present application;
Fig. 3 is a schematic view of a first application scenario of the prompting method provided in an embodiment of the present application;
Fig. 4 is a schematic view of a second application scenario of the prompting method provided in an embodiment of the present application;
Fig. 5 is a schematic view of a third application scenario of the prompting method provided in an embodiment of the present application;
Fig. 6 is a schematic view of a fourth application scenario of the prompting method provided in an embodiment of the present application;
Fig. 7 is a flowchart illustrating another method for determining a target operation object according to an embodiment of the present application;
Fig. 8 is a schematic view of a fifth application scenario of the prompting method provided in an embodiment of the present application;
Fig. 9 is a schematic composition diagram of a prompting device according to an embodiment of the present application;
Fig. 10 is a schematic composition diagram of another prompting device provided in an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the drawings in the embodiments of the present application.
The terms "including" and "having," and any variations thereof, in the description and claims of this application and the drawings described above, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
With the development of vehicle technology, continuously expanding vehicle functions is a future development direction, especially for vehicle-mounted terminal systems. As the functions of these systems become more and more diverse, users need a convenient way to become familiar with how each function is operated and used. Traditionally, a vehicle manufacturer may provide a paper or electronic manual with the vehicle, but the manual is generally long and tedious, and few users like to read it carefully.
At present, explanations of how to use each function are usually added to the vehicle terminal to guide the user, but this still requires the user to actively search for the explanation of the corresponding function, which is time-consuming and hurts the user experience. How to help the user obtain targeted, quick help on the various in-vehicle functions, and learn and master how to use them, is therefore an urgent problem to be solved.
To address, at least in part, one or more of the above problems and other potential problems, embodiments of the present application propose a method of prompting. The method comprises the steps of obtaining input operation information, wherein the operation information comprises at least one item of touch operation information, voice operation information and shooting image information of a target operation object; determining the target operation object based on the operation information; and presenting a target operation prompt about the target operation object under the condition that the operation information meets a preset condition. The prompting method can be specifically implemented in a vehicle-mounted terminal, and the application of the prompting method to the vehicle-mounted terminal is taken as an example in the embodiment for description.
The following describes embodiments of the present application in detail.
Referring to fig. 1, a schematic flow chart of a prompting method provided in an embodiment of the present application may include the following steps:
s101, acquiring input operation information, wherein the operation information comprises at least one of touch operation information, voice operation information and shooting image information of a target operation object.
The operation objects include buttons, icons, check boxes and other controls displayed on the display interface of the vehicle-mounted terminal system. The operation information can be obtained, for example, as follows: after the user performs a touch operation on the touch display interface of the vehicle-mounted terminal, the terminal obtains the user's touch operation information through the CAN bus; when the user issues a voice instruction to the vehicle-mounted terminal, the terminal collects it through a voice collection device such as a microphone and obtains the voice operation information; in addition, the user can photograph the target operation object with a mobile terminal connected to the vehicle-mounted terminal, or take a screenshot of the target operation object in the operation interface, to obtain an image related to the target operation object, and the vehicle-mounted terminal then obtains the shot image information by acquiring and processing that image.
Optionally, the touch operation information includes the touch duration of the touch operation and/or the number of touches of the touch operation and/or the touch trajectory of the touch operation. The user can long-press, click repeatedly or draw a pattern with a finger on the touch display screen of the vehicle-mounted terminal, and the terminal generates the corresponding touch operation information in response. For example, when the user keeps pressing a target operation object, the vehicle-mounted terminal may take the pressing duration as the touch duration of the touch operation; when the user clicks a target operation object repeatedly, the terminal may take the number of consecutive clicks as the number of touches of the touch operation; and when the user draws a pattern on the touch display screen, the terminal records the touch trajectory of that drawing operation.
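Purely as an illustration, the touch operation information described above could be recorded in a structure like the following minimal Python sketch; the class and field names are assumptions, not terms from the embodiment:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class TouchOperationInfo:
        touched_object: str                  # identifier of the control the user touched, if any
        press_duration_s: float = 0.0        # how long the control was held down
        click_count: int = 0                 # number of consecutive clicks on the control
        trajectory: List[Tuple[float, float]] = field(default_factory=list)  # sampled (x, y) points of a drawn pattern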
S102, determining the target operation object based on the operation information.
Optionally, for the touch operation information, when it includes the touch duration and/or the number of touches of the touch operation, the vehicle-mounted terminal may determine the target operation object from the touch operation information, that is, the operation object directly touched by the user on the touch display screen. When the touch operation information includes a touch trajectory, the vehicle-mounted terminal matches the trajectory against preset trajectories in a trajectory database, obtains the operation object associated with the successfully matched preset trajectory, and determines that operation object as the target operation object.
Optionally, for the shot image information, the vehicle-mounted terminal acquires key information through image recognition based on the acquired shot image information, matches the key information with preset information in an image database, acquires an operation object associated with the preset information successfully matched with the key information, and determines that the operation object is the target operation object. Wherein the key information may include graphical information and/or textual information.
Optionally, for the voice operation information, after the voice operation information of the user is collected by the vehicle-mounted terminal, the target operation object is confirmed by extracting the keywords in the voice operation information and using the keywords for matching.
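The three determination paths just described can be summarized in the rough dispatch sketch below; every helper object and method name (trajectory_db, image_db, speech, keyword_index and their methods) is hypothetical and merely stands in for the trajectory database, image database and speech recognition mentioned above:

    def determine_target_object(info, trajectory_db, image_db, speech, keyword_index):
        """Map one piece of operation information to a target operation object (or None)."""
        if info.kind == "touch":
            if info.trajectory:                            # the user drew a pattern on the screen
                match = trajectory_db.best_match(info.trajectory)
                return match.operation_object if match else None
            return info.touched_object                     # the user touched a control directly
        if info.kind == "image":
            key_info = image_db.recognize(info.image)      # graphic and/or text key information
            match = image_db.best_match(key_info)
            return match.operation_object if match else None
        if info.kind == "voice":
            keywords = speech.extract_keywords(info.text)  # e.g. "sentinel", "mode"
            return keyword_index.match(keywords)           # keyword matching, detailed in S201-S204 below
        return None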
S103, presenting a target operation prompt related to the target operation object when the operation information satisfies a preset condition.
The target operation prompt may include a help video with a video identifier, and playing the help video on a display interface of the vehicle-mounted terminal may help a user to obtain a correct use method of the target operation object or obtain a functional explanation of the target operation object.
Optionally, the preset condition includes at least one of the following conditions: the touch duration of the touch operation meets a first duration preset condition; the touch frequency of the touch operation meets a first frequency preset condition; the frequency of continuous recognition failure of the input voice operation information meets a second frequency preset condition; and recognizing that the voice operation information contains preset keywords.
It can be understood that the touch duration satisfying the first duration preset condition may mean that the user presses the target operation object for longer than a first preset duration, for example 3 seconds or 5 seconds; the touch frequency satisfying the first frequency preset condition may mean that the number of consecutive clicks on the target operation object exceeds a first preset number, for example two or three.
When a user wishes to perform voice operation on a target operation object, there may be a case where the in-vehicle terminal fails to recognize the user voice operation information multiple times, and at this time, it may be considered that a target operation prompt related to the target operation object needs to be provided for the user. The second number preset condition may be that the number of times of continuous recognition failure of the input voice operation information exceeds a second preset number of times.
The recognized voice operation information includes preset keywords, where the preset keywords may be query-type keywords, such as "what", and the like, or may be specific key words, such as "use", "help", "explain", and words with other relevant semantics, and the like. When the voice operation information comprises the preset keyword, the fact that the user needs to directly obtain prompt information of the target operation object to help the user to know the target operation object is described. In a specific implementation, preset keywords may also be set according to common words of the user, such as "learn to go … …", "teach me to go … …", or "demonstrate … …", and so on.
The preset condition of recognizing that the voice operation information contains a preset keyword may further include: detecting that the input time interval between the touch operation information and the voice operation information is smaller than a preset time threshold, and recognizing that the voice operation information contains the preset keyword. For example, in a fairly common scenario, after touching a target operation object the user may have a question about it and quickly request the related operation information by voice. In this case the preset condition is satisfied, the target operation object can be quickly determined from the touch operation information, and the target operation prompt related to that object can then be presented.
It should be noted that the first duration preset condition, the first number preset condition, the second number preset condition, and the preset keyword may be set by a technician based on an actual situation, and are not limited in this embodiment of the application.
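A hedged sketch of such a preset-condition check follows; the 3-second duration and two-click count come from the examples above, while the failure count and the keyword list are illustrative values an implementer would choose:

    FIRST_PRESET_DURATION_S = 3.0          # first duration preset condition (3 s, per the example above)
    FIRST_PRESET_CLICKS = 2                # first number preset condition (two consecutive clicks)
    SECOND_PRESET_FAILURES = 2             # second number preset condition (assumed value)
    PRESET_KEYWORDS = {"what", "use", "help", "explain", "teach me", "demonstrate"}

    def preset_condition_met(touch=None, voice_text=None, failed_recognitions=0):
        """Return True if any of the preset conditions listed above is satisfied."""
        if touch is not None:
            if touch.press_duration_s >= FIRST_PRESET_DURATION_S:
                return True
            if touch.click_count >= FIRST_PRESET_CLICKS:
                return True
        if failed_recognitions >= SECOND_PRESET_FAILURES:
            return True
        if voice_text is not None:
            lowered = voice_text.lower()
            if any(keyword in lowered for keyword in PRESET_KEYWORDS):
                return True
        return False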
Optionally, an association relationship exists between the target operation object and the target operation prompt. After the target operation object is determined based on the operation information, the operation prompt associated with the target operation object may be directly determined as the target operation prompt and displayed in a preset area of the display interface of the vehicle-mounted terminal.
The preset area lies in the display interface where the target operation object is located. The vehicle-mounted terminal may generate a floating window in the preset area according to initial window parameters and present the target operation prompt through the floating window.
Optionally, in response to a full-screen operation of the user on the floating window, the floating window may be displayed in a full screen on a current display interface, or in response to a scaling operation of the user on the floating window, the floating window may be enlarged or reduced in the first interface in a fixed ratio.
Optionally, after responding to the full-screen operation or the scaling operation, the floating window may be restored to a preset size and position in response to a user's restoring operation on the floating window.
Optionally, in response to a moving operation of the floating window by a user, a position of the floating window on the current display interface may be adjusted, the floating window is displayed based on an initial transparency parameter before responding to the moving operation, and is displayed based on a preset transparency parameter when the moving operation is performed, where the preset transparency parameter is higher than the initial transparency parameter.
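The floating-window behaviour described above (full screen, fixed-ratio scaling, restore, and increased transparency while being moved) could be modelled roughly as in the sketch below; the class name and parameters are assumptions, and the alpha value is treated as opacity:

    class FloatingWindow:
        def __init__(self, x, y, width, height, alpha=1.0):
            self.initial = (x, y, width, height, alpha)    # remembered for the restore operation
            self.x, self.y, self.width, self.height, self.alpha = x, y, width, height, alpha

        def full_screen(self, screen_w, screen_h):
            self.x, self.y, self.width, self.height = 0, 0, screen_w, screen_h

        def scale(self, factor):
            self.width *= factor                           # fixed-ratio enlarge or shrink
            self.height *= factor

        def restore(self):
            self.x, self.y, self.width, self.height, self.alpha = self.initial

        def begin_move(self, preset_alpha=0.5):
            self.alpha = preset_alpha                      # lower opacity, i.e. more transparent, while dragging

        def end_move(self, new_x, new_y):
            self.x, self.y = new_x, new_y
            self.alpha = self.initial[4]                   # back to the initial transparency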
Optionally, the target operation prompt includes prompt information for a target operation on the target operation object. After the target operation prompt about the target operation object is presented, whether to present a prompt message in the current display interface asking if manual help is needed may also be determined by detecting whether a manual help condition is currently satisfied. The manual help condition comprises at least one of the following conditions: the presentation duration of the target operation prompt satisfies a second duration preset condition; the number of times the target operation prompt has been presented satisfies a third number preset condition; and, after the target operation prompt is closed, the target operation for the target operation object is not completed within a preset time.
It should be noted that the second duration preset condition may be that the target operation prompt has been displayed in the current display interface for longer than a second preset duration, and the third number preset condition may be that the target operation prompt has been reopened more than a third preset number of times. Both the second preset duration and the third preset number may be set by a technician based on the actual situation and are not limited in the embodiments of the present application.
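For illustration only, a sketch of this manual-help check with placeholder thresholds (the second preset duration, the third preset number and the completion timeout are all assumed values):

    SECOND_PRESET_DURATION_S = 60.0        # second duration preset condition (assumed value)
    THIRD_PRESET_TIMES = 3                 # third number preset condition (assumed value)
    COMPLETION_TIMEOUT_S = 30.0            # time allowed after the prompt is closed (assumed value)

    def manual_help_needed(shown_duration_s, times_opened,
                           closed_at=None, operation_done_at=None, now=0.0):
        if shown_duration_s > SECOND_PRESET_DURATION_S:
            return True                                    # prompt shown for too long
        if times_opened > THIRD_PRESET_TIMES:
            return True                                    # prompt reopened too many times
        if closed_at is not None and operation_done_at is None:
            if now - closed_at > COMPLETION_TIMEOUT_S:
                return True                                # prompt closed but target operation never performed
        return False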
Optionally, the target operation prompt includes prompt information for a target operation on the target operation object. After the target operation prompt about the target operation object is presented, the vehicle-mounted terminal may further update the learning state of the target operation prompt in response to detecting that the target operation for the target operation object is executed within a preset time. That is, after the target operation prompt is displayed on the display interface, if it is detected within the preset time that the user executes the target operation on the target operation object related to the prompt, the learning state of the prompt may be updated to the learned state. In another possible implementation, if the target operation object has several executable target operations, the current learning state may be updated as a learning-progress percentage according to how many of those target operations the user has executed.
For example, a sentinel mode control in the setting interface is determined as the target operation object. Through the operation prompt related to the sentinel mode, the user learns that the sentinel mode can be turned on by touching the switch button control of the sentinel mode on the setting interface, which is the target operation. If, within the preset time after the target operation prompt is displayed, the user turns on the sentinel mode by touching that switch button, the target operation for the target operation object is determined to have been executed, and the operation prompt related to the sentinel mode is updated to the learned state.
After the learning state of the target operation prompt is updated, the learning state data of all the target operation prompts in the vehicle-mounted terminal can be counted, so that the learning completion degree of all the target operation prompts can be calculated based on the learning state data. The user's actual learning and use of the various in-vehicle functions is then shown to the user on the basis of this completion degree, helping the user better understand and learn the rich set of in-vehicle functions.
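A minimal sketch of this learning-state bookkeeping is given below, assuming prompts are keyed by their video identifiers and that the completion degree is simply the average learning progress across all prompts:

    learning_state = {}                    # video identifier -> learning progress in [0.0, 1.0]

    def mark_operation_executed(video_id, executed_steps, total_steps=1):
        # A single-operation prompt jumps straight to 1.0 (learned); a prompt with
        # several target operations advances as a learning-progress percentage.
        learning_state[video_id] = min(1.0, executed_steps / total_steps)

    def learning_completion_degree(all_video_ids):
        if not all_video_ids:
            return 0.0
        return sum(learning_state.get(v, 0.0) for v in all_video_ids) / len(all_video_ids)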
In a possible implementation, the target operation prompt includes a help video with a video identifier; playing the help video helps the user quickly and conveniently learn about the relevant target operation object, including its function or method of use. When the operation information is voice operation information, the determination of the target operation object may first check whether the target operation object exists among the operation objects of the current display interface, which narrows the search range: in actual use, a user usually has a question about an operation object in an interface only after opening that interface. In the following, with reference to fig. 2 to fig. 8, the method for quickly determining the target operation object is further described for the case where the operation information is voice operation information and the target operation prompt includes a help video with a video identifier.
Referring to fig. 2, a schematic flowchart of a method for determining a target operation object according to an embodiment of the present application includes the following steps:
s201, obtaining keywords contained in the voice operation information.
After the vehicle-mounted terminal acquires the input voice operation information, the keywords contained in it can be determined through speech recognition. For example, if the user says "what is the sentinel mode", the vehicle-mounted terminal may extract keywords such as "sentinel" and "mode" from the speech input and further determine the target operation object based on those keywords.
S202, a first group of preset operation objects contained in a first display interface which is displayed currently and a first group of video identifications carried by a first group of help videos associated with the first group of preset operation objects are obtained.
The first display interface is the interface currently presented on the display device of the vehicle-mounted terminal when the terminal acquires the user's voice operation information. A preset operation object is an operation object present in the first display interface; the user sees it as a user interface control in that interface. Most such controls execute a function, or trigger code and complete a response through an event, which means the user can click the control to execute the relevant function. In addition, a help video associated with each preset operation object is stored in the vehicle-mounted terminal.
It can be understood that the first display interface may contain a plurality of preset operation objects. All the preset operation objects in the first display interface form the first group of preset operation objects, and all the help videos associated with them form the first group of help videos. Different help videos carry different video identifiers, and one help video may carry one or more video identifiers. The first group of video identifiers consists of the video identifiers carried by each help video in the first group of help videos. The first group of preset operation objects and the first group of help videos may have a one-to-one correspondence.
For example, referring to fig. 3, fig. 3 shows the interface currently presented by the display device, that is, the first display interface. In this interface, the first group of preset operation objects includes a sentinel mode control 305. A help video associated with the sentinel mode control 305 is stored in the vehicle-mounted terminal; when the user has a question about the sentinel mode, this help video can be presented to explain the relevant functions and usage, and it carries a corresponding video identifier, which may be "sentinel", "sentinel mode", or the like. In addition to the sentinel mode control 305, the controls 301, 302, 303 and 304 in fig. 3 may also be included in the first group of preset operation objects. It should be noted that all display controls in a display interface, operable or not, may be used as preset operation objects; the specific preset operation objects are determined and set by the designer based on the actual situation, which is not limited in the embodiments of the present application.
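Purely as an illustration, the association between display interfaces, preset operation objects and help videos described above might be stored as follows; the interface identifier and file name are invented for the example:

    HELP_CATALOG = {
        "settings_interface": {                       # the first display interface of fig. 3
            "sentinel_mode_control": {
                "video": "help_sentinel_mode.mp4",    # help video associated with the control
                "identifiers": ["sentinel", "sentinel mode"],
            },
            # controls 301-304 would be listed here in the same way
        },
        # operation objects of interfaces that are not currently displayed form the "second group"
    }

    def first_group(interface_id):
        """Return (preset operation object, video identifiers) pairs for the given interface."""
        return [(obj, entry["identifiers"])
                for obj, entry in HELP_CATALOG.get(interface_id, {}).items()]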
S203, matching the keywords contained in the voice operation information with the first group of video identifiers respectively.
The keywords contained in the voice operation information are matched against the video identifiers carried by the different help videos in the first group of help videos, the matching degree between the keywords and each video identifier is obtained, and it is judged whether the highest matching degree exceeds a preset threshold. When it does, the keyword and that video identifier are determined to be successfully matched; otherwise, the matching between the keyword and the first group of video identifiers fails. The preset threshold is set by the designer based on the actual situation and is not limited in the embodiments of the present application.
For example, the first group of help videos includes a first help video carrying a first video identifier. When the matching degree between the keyword and the first video identifier is the highest and exceeds the preset threshold, the keyword is determined to be successfully matched with the first video identifier in the first group of video identifiers. When the matching degree between the keyword and the first video identifier is the highest but does not exceed the preset threshold, the matching between the keyword and the first video identifier is determined to have failed, and no other video identifier in the first group can be successfully matched with the keyword.
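The matching of step S203 can be sketched as below; the token-overlap similarity is only an assumption standing in for whatever matching-degree measure an implementation uses, and the threshold value is arbitrary:

    MATCH_THRESHOLD = 0.5                  # preset threshold, chosen by the designer

    def match_degree(keywords, video_id):
        """Token-overlap similarity between the keywords and one video identifier."""
        tokens, id_tokens = set(keywords), set(video_id.split())
        return len(tokens & id_tokens) / max(len(tokens | id_tokens), 1)

    def match_keywords_to_videos(keywords, help_videos):
        """help_videos: list of (operation object, [video identifiers]) pairs."""
        best_object, best_score = None, 0.0
        for obj, video_ids in help_videos:
            score = max((match_degree(keywords, vid) for vid in video_ids), default=0.0)
            if score > best_score:
                best_object, best_score = obj, score
        if best_score > MATCH_THRESHOLD:   # matching succeeds only above the preset threshold
            return best_object
        return None                        # otherwise fall back to the second group (S301-S303)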
S204, under the condition that the keyword is successfully matched with a first video identifier in the first group of video identifiers, determining a first preset operation object associated with a help video carrying the first video identifier as the target operation object.
For example, referring to fig. 4, the first group of preset operation objects in the currently displayed first display interface includes the sentinel mode control 305. When the user says "what is the sentinel mode", the vehicle-mounted terminal obtains keywords such as "sentinel" from the voice operation information. The help video associated with the sentinel mode control 305 carries video identifiers such as "sentinel" and "sentinel mode", so the matching degree between the keyword and those identifiers is the highest and exceeds the preset threshold, and the match is determined to be successful. Here the help video associated with the sentinel mode control 305 is the first help video, and the video identifier it carries is the first video identifier. After the match succeeds, the sentinel mode control 305 can be determined as the target operation object.
S205, determining the help video carrying the first video identifier as the target operation prompt.
As described in the above example, if the video identifier carried by the help video associated with the sentinel mode control 305 can be successfully matched with the keyword, it may be determined that the help video associated with the sentinel mode control 305 is the target operation prompt.
S206, displaying the target operation prompt in a preset area in the first display interface.
The preset area may be the lower right corner of the current display interface as shown in fig. 5; the size of the initial area is set by the designer based on the actual situation and is not limited in the embodiments of the present application. A floating window can be generated in the preset area, and the help video carrying the first video identifier is played in that window. The help video may include a functional introduction to the sentinel mode, its method of use, and the like.
Optionally, for the floating window, operations such as full screen, scaling, dragging and moving may be performed on the floating window in response to a user's relevant operation on the floating window.
Referring to fig. 6, if the main page currently displayed in fig. 6 is used as the first display interface, the first group of preset operation objects in that interface does not include the sentinel mode control. When the user inputs the voice operation information "what is the sentinel mode" and it is processed through steps S201 to S203, the target operation object and the target operation prompt cannot be determined in the currently displayed first display interface. In other words, the keyword fails to match the first video identifier in the first group of video identifiers, and neither the target operation object nor the target operation prompt can be determined in the first display interface.
Under the condition that the keyword is unsuccessfully matched with the first video identifier in the first group of video identifiers, matching the keyword with video identifiers carried by other help videos stored in the vehicle-mounted terminal, and determining the video identifiers which can be successfully matched with the keyword, so as to determine a target operation object and a target operation prompt. The following describes an example of the above case.
Referring to fig. 7, a flowchart of another method for determining a target operation object according to an embodiment of the present application is shown, including the following steps:
s301, a second group of preset operation objects which are not included in the first display interface and a second group of video identifications carried by a second group of help videos associated with the second group of preset operation objects are obtained.
The second group of preset operation objects are preset operation objects except the first group of preset operation objects, and the second group of preset operation objects do not exist in the first display interface displayed on the display device currently.
S302, matching the keywords contained in the voice operation information with the second group of video identifications.
S303, under the condition that the keyword is successfully matched with a second video identifier in the second group of video identifiers, determining a second preset operation object associated with the help video carrying the second video identifier as the target operation object.
S304, determining the help video carrying the second video identifier as the target operation prompt.
S305, switching from the first display interface to the second display interface so as to display the target operation prompt in a preset area in the second display interface.
The second preset operation object associated with the help video carrying the second video identifier is contained in a second display interface which is not currently displayed.
It should be noted that the currently displayed main page is the first display interface shown in fig. 6. Through steps S301 to S305 it is confirmed that the sentinel mode control is the target operation object and that it exists in the second display interface: the interface shown in fig. 8 is the second display interface containing the sentinel mode control 801. The currently displayed interface can therefore be switched from the first display interface to the second display interface, that is, from the interface in fig. 6 to the interface in fig. 8, and the target operation prompt is displayed in the preset area of the second display interface. The preset area may be the lower right corner of the current display interface; a floating window can be generated in the preset area according to the initial window parameters, and the target operation prompt is displayed in that window.
When several display interfaces contain the sentinel mode control, the designer may designate one of them in advance as the second display interface based on the actual situation, and when switching is needed, the terminal switches to that pre-designated interface. In another possible implementation, by obtaining operation data of other users, the interface on which other users most frequently trigger the help prompt related to the sentinel mode control can be determined among the display interfaces containing that control and used as the second display interface.
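Putting steps S201 to S206 and S301 to S305 together, a rough end-to-end sketch of matching within the current interface first and then falling back to the second group with an interface switch could look as follows; it reuses match_keywords_to_videos from the sketch above, and the other helper names on the ui and catalog objects are assumptions:

    def present_prompt_with_fallback(keywords, ui, catalog):
        # First group: help videos of the preset operation objects in the current interface.
        target = match_keywords_to_videos(keywords, catalog.videos_in(ui.current_interface))
        if target is not None:
            ui.show_floating_window(catalog.help_video_for(target))
            return target
        # Second group: preset operation objects not contained in the current interface.
        target = match_keywords_to_videos(keywords, catalog.videos_not_in(ui.current_interface))
        if target is None:
            return None
        second_interface = (catalog.designated_interface_for(target)
                            or catalog.most_triggered_interface_for(target))
        ui.switch_to(second_interface)                     # switch from the first to the second display interface
        ui.show_floating_window(catalog.help_video_for(target))
        return target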
For how the floating window responds to user operations, please refer to the description of the floating window response operations in the foregoing method or in other embodiments, which is not repeated here. By automatically switching the display interface, the target operation object the user wants to learn about can be located quickly, making it convenient for the user to perform the relevant operation on it after the target operation prompt is displayed.
Referring to fig. 9, a schematic view of a prompting device according to an embodiment of the present disclosure is shown, in which the prompting device 900 includes:
a receiving device 910 configured to acquire input operation information including at least one of touch operation information, voice operation information, and captured image information on a target operation object;
a processor 920, configured to determine the target operation object based on the operation information provided by the receiving device, and obtain a target operation prompt related to the target operation object; and
a display device 930 configured to present the target operation prompt provided by the processor about the target operation object if the operation information provided by the receiving device satisfies a preset condition.
The receiving device 910 is configured to obtain the operation information input by the user into the vehicle-mounted terminal; in this embodiment it includes, but is not limited to, a microphone and the like. The display device 930 is configured to present the operation prompt to the user, and in this embodiment may be the vehicle-mounted touch display screen.
In one possible implementation, the target operation prompt includes a help video with a video identification.
In one possible implementation manner, the processor 920 is further configured to:
acquiring a keyword contained in the voice operation information;
acquiring a first group of preset operation objects contained in a currently displayed first display interface and a first group of video identifications carried by a first group of help videos associated with the first group of preset operation objects;
matching keywords contained in the voice operation information with the first group of video identifications respectively; and
under the condition that the keyword is successfully matched with a first video identifier in the first group of video identifiers, determining a first preset operation object associated with the help video carrying the first video identifier as the target operation object.
In a possible implementation manner, the processor 920 is further configured to determine that a help video carrying the first video identifier is the target operation prompt, and control the display device 930 to display the target operation prompt in a preset area in the first display interface.
In a possible implementation manner, the processor 920 is further configured to:
acquiring a second group of preset operation objects which are not contained in the first display interface and a second group of video identifiers carried by a second group of help videos associated with the second group of preset operation objects;
matching the keywords contained in the voice operation information with the second group of video identifiers; and
under the condition that the keyword is successfully matched with a second video identifier in the second group of video identifiers, determining a second preset operation object associated with the help video carrying the second video identifier as the target operation object.
In a possible implementation manner, the processor 920 is further configured to determine that a help video carrying the second video identifier is the target operation prompt; the second preset operation object associated with the help video carrying the second video identifier is included in a second display interface that is not currently displayed, and the display device 930 is controlled to switch from the first display interface to the second display interface, so that the target operation prompt is displayed in a preset area in the second display interface.
In a possible implementation manner, the preset condition includes at least one of the following conditions:
the touch control time length of the touch control operation meets a first time length preset condition;
the touch frequency of the touch operation meets a first frequency preset condition;
the frequency of continuous recognition failure of the input voice operation information meets a second frequency preset condition; and
recognizing that the voice operation information contains a preset keyword.
In one possible implementation, the target operation hint includes hint information of a target operation for the target operation object, and after the target operation hint regarding the target operation object is presented, the processor 920 is further configured to:
presenting, in the current display interface of the display device 930, a prompt message asking whether manual help is needed, in the case that it is detected that the manual help condition is satisfied;
wherein the manual help condition comprises at least one of the following conditions:
the target operation prompt presenting time length meets a second time length preset condition;
the target operation prompt presenting times meet a third time preset condition; and
after the target operation prompt is closed, detecting that the target operation aiming at the target operation object is not completed within preset time.
In another possible implementation manner, the target operation hint includes hint information of a target operation for the target operation object, and after the target operation hint about the target operation object is presented, the processor 920 is further configured to:
updating a learning state of the target operation prompt in response to detecting that the target operation for the target operation object is performed within a preset time; and
counting the learning state data of all the target operation prompts, so as to calculate a learning completion degree of all the target operation prompts based on the learning state data.
For the concepts, explanations, details and other steps related to the technical solutions provided in the embodiments of the present application, please refer to the description of the method or the contents of the method steps executed by the apparatus in other embodiments, which are not described herein again.
Referring to fig. 10, which is a schematic composition diagram of another prompting device provided in an embodiment of the present application, the device may include:
a processor 1010, a memory 1020, and a communication interface 1030. The processor 1010, the memory 1020, and the communication interface 1030 are coupled by a bus 1040; the memory 1020 is configured to store instructions, and the processor 1010 is configured to execute the instructions stored by the memory 1020 to implement the method steps corresponding to fig. 1-2 and 7 above.
The processor 1010 is configured to execute the instructions stored in the memory 1020 to control the communication interface 1030 to receive and transmit signals, thereby implementing the steps of the above-described method. The memory 1020 may be integrated in the processor 1010 or may be provided separately from the processor 1010.
In one possible implementation, the functions of the communication interface 1030 may be implemented by a transceiver circuit or a dedicated transceiver chip, and the processor 1010 may be implemented by a dedicated processing chip, a processing circuit, a processor, or a general-purpose chip.
In another possible implementation manner, the apparatus provided in the embodiment of the present application may be implemented by using a general-purpose computer. Program code that implements the functions of the processor 1010 and the communication interface 1030 may be stored in the memory 1020, and a general-purpose processor may implement the functions of the processor 1010 and the communication interface 1030 by executing the code in the memory 1020.
For the concepts, explanations, details and other steps related to the technical solutions provided in the embodiments of the present application, please refer to the description of the method or the contents of the method steps executed by the apparatus in other embodiments, which are not described herein again.
As another implementation manner of the present embodiment, a computer-readable storage medium is provided for storing a computer program. The computer-readable storage medium stores instructions that, when executed on a computer, cause the computer to perform the method in the above method embodiments.
As another implementation manner of the present embodiment, a computer program product is provided that contains instructions which, when executed, cause the method in the above method embodiments to be performed.
Those skilled in the art will appreciate that only one memory and processor are shown in fig. 10 for ease of illustration. In an actual terminal or server, there may be multiple processors and memories. The memory may also be referred to as a storage medium or a storage device, and the like, which is not limited in this application.
It should be understood that, in the embodiments of the present application, the processor may be a Central Processing Unit (CPU), or may be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
It will also be appreciated that the memory referred to in the embodiments of the application may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
It should be noted that when the processor is a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, the memory (memory module) is integrated in the processor.
It should be noted that the memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The bus may include a power bus, a control bus, a status signal bus, and the like, in addition to the data bus. However, for clarity of illustration, the various buses are all labeled as the bus in the figure.
It should also be understood that reference herein to first, second, third, fourth, and various numerical numbering is merely for convenience of description and is not intended to limit the scope of the present application.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean that A exists alone, that both A and B exist, or that B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in a processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, details are not described here again.
In the embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative logical blocks and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or as combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some interfaces, indirect coupling or communication connection between devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules are integrated into one module.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may be wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that includes one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk), among others.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A method of prompting, the method comprising the steps of:
acquiring input operation information, wherein the operation information comprises at least one of touch operation information, voice operation information, and captured image information of a target operation object;
determining the target operation object based on the operation information; and
presenting a target operation prompt related to the target operation object under the condition that the operation information meets a preset condition.
2. The method of claim 1, wherein the target operation prompt comprises a help video carrying a video identifier.
3. The method of claim 2, wherein said determining the target operation object based on the operation information comprises the steps of:
acquiring a keyword contained in the voice operation information;
acquiring a first group of preset operation objects contained in a currently displayed first display interface and a first group of video identifiers carried by a first group of help videos associated with the first group of preset operation objects;
matching the keyword contained in the voice operation information with the first group of video identifiers respectively; and
under the condition that the keyword is successfully matched with a first video identifier in the first group of video identifiers, determining a first preset operation object associated with the help video carrying the first video identifier as the target operation object.
4. The method of claim 3, wherein said presenting a target operation prompt regarding the target operation object comprises the steps of:
determining a help video carrying the first video identifier as the target operation prompt; and
displaying the target operation prompt in a preset area in the first display interface.
5. The method of claim 3, wherein, in the case that the keyword is not successfully matched with any first video identifier in the first group of video identifiers, the method further comprises the steps of:
acquiring a second group of preset operation objects which are not contained in the first display interface and a second group of video identifiers carried by a second group of help videos associated with the second group of preset operation objects;
matching the keyword contained in the voice operation information with the second group of video identifiers respectively; and
under the condition that the keyword is successfully matched with a second video identifier in the second group of video identifiers, determining a second preset operation object associated with the help video carrying the second video identifier as the target operation object.
6. The method of claim 5, wherein said presenting a target operation prompt regarding the target operation object comprises the steps of:
determining that the help video carrying the second video identifier is the target operation prompt; the second preset operation object associated with the help video carrying the second video identifier is contained in a second display interface which is not displayed currently; and
switching from the first display interface to the second display interface, so as to display the target operation prompt in a preset area in the second display interface.
7. The method of claim 1, wherein the preset condition comprises at least one of the following conditions:
the touch duration of the touch operation meets a first preset duration condition;
the touch frequency of the touch operation meets a first preset frequency condition;
the number of consecutive recognition failures of the input voice operation information meets a second preset count condition; and
a preset keyword is recognized in the voice operation information.
8. The method according to any one of claims 1 to 7, wherein the target operation prompt includes prompt information of a target operation for the target operation object, and after the target operation prompt regarding the target operation object is presented, the method further comprises the following steps:
presenting, in the current display interface, a prompt message asking whether manual help is needed, in the case that it is detected that a manual help condition is satisfied;
wherein the manual help condition comprises at least one of the following conditions:
the presentation duration of the target operation prompt meets a second preset duration condition;
the number of times the target operation prompt has been presented meets a third preset count condition; and
after the target operation prompt is closed, it is detected that the target operation for the target operation object is not completed within a preset time.
9. The method according to any one of claims 1-7, wherein the target operation prompt comprises prompt information of a target operation for the target operation object, and after said presenting a target operation prompt regarding the target operation object, the method further comprises the steps of:
in response to detecting that the target operation for the target operation object is performed within a preset time, updating a learning state of the target operation prompt.
10. The method of claim 9, further comprising, after updating the learning state of the target operation prompt, the steps of:
counting the learning state data of all target operation prompts, so as to calculate a learning completion degree of all target operation prompts based on the learning state data.
11. A prompting device, the device comprising:
a receiving device configured to acquire input operation information including at least one of touch operation information, voice operation information, and captured image information of a target operation object;
a processor configured to determine the target operation object based on the operation information provided by the receiving device, and obtain a target operation prompt related to the target operation object; and
a display device configured to present the target operation prompt provided by the processor and related to the target operation object if the operation information provided by the receiving device satisfies a preset condition.
12. An electronic device comprising a processor, a memory and a bus, the processor and the memory being connected via the bus, wherein the memory is configured to store a set of program codes, and the processor is configured to call the program codes stored in the memory to perform the method according to any one of claims 1-10.
13. A computer storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-10.
CN202110296429.XA 2020-12-31 2021-03-19 Prompting method, prompting device and computer storage medium Active CN114764363B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011637673X 2020-12-31
CN202011637673 2020-12-31

Publications (2)

Publication Number Publication Date
CN114764363A true CN114764363A (en) 2022-07-19
CN114764363B CN114764363B (en) 2023-11-24

Family

ID=82135578

Family Applications (4)

Application Number Title Priority Date Filing Date
CN202310756068.1A Pending CN116680033A (en) 2020-12-31 2021-01-28 Prompting method, prompting device and computer storage medium
CN202110117883.4A Active CN114690992B (en) 2020-12-31 2021-01-28 Prompting method, prompting device and computer storage medium
CN202110123503.8A Pending CN114691261A (en) 2020-12-31 2021-01-28 Prompting method, prompting device, electronic equipment and computer storage medium
CN202110296429.XA Active CN114764363B (en) 2020-12-31 2021-03-19 Prompting method, prompting device and computer storage medium

Family Applications Before (3)

Application Number Title Priority Date Filing Date
CN202310756068.1A Pending CN116680033A (en) 2020-12-31 2021-01-28 Prompting method, prompting device and computer storage medium
CN202110117883.4A Active CN114690992B (en) 2020-12-31 2021-01-28 Prompting method, prompting device and computer storage medium
CN202110123503.8A Pending CN114691261A (en) 2020-12-31 2021-01-28 Prompting method, prompting device, electronic equipment and computer storage medium

Country Status (1)

Country Link
CN (4) CN116680033A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116185190A (en) * 2023-02-09 2023-05-30 江苏泽景汽车电子股份有限公司 Information display control method and device and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108037885A (en) * 2017-11-27 2018-05-15 维沃移动通信有限公司 A kind of operation indicating method and mobile terminal
CN109656512A (en) * 2018-12-20 2019-04-19 Oppo广东移动通信有限公司 Exchange method, device, storage medium and terminal based on voice assistant
CN110072140A (en) * 2019-03-22 2019-07-30 厦门理工学院 A kind of video information reminding method, device, equipment and storage medium
CN111506245A (en) * 2020-04-27 2020-08-07 北京小米松果电子有限公司 Terminal control method and device
CN111580724A (en) * 2020-06-28 2020-08-25 腾讯科技(深圳)有限公司 Information interaction method, equipment and storage medium
CN112017646A (en) * 2020-08-21 2020-12-01 博泰车联网(南京)有限公司 Voice processing method and device and computer storage medium
CN112148408A (en) * 2020-09-27 2020-12-29 深圳壹账通智能科技有限公司 Barrier-free mode implementation method and device based on image processing and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101159703A (en) * 2007-10-09 2008-04-09 施侃晟 Bidirectional interdynamic search method for generating instant communication effect
TWI438675B (en) * 2010-04-30 2014-05-21 Ibm Method, device and computer program product for providing a context-aware help content
CN106020597A (en) * 2016-05-12 2016-10-12 北京金山安全软件有限公司 Method and device for displaying information and electronic equipment
CN107426426A (en) * 2017-07-26 2017-12-01 维沃移动通信有限公司 A kind of reminding method and mobile terminal missed of sending a telegram here
CN110544473B (en) * 2018-05-28 2022-11-08 百度在线网络技术(北京)有限公司 Voice interaction method and device
CN111664861B (en) * 2020-06-02 2023-02-28 阿波罗智联(北京)科技有限公司 Navigation prompting method, device, equipment and readable storage medium

Also Published As

Publication number Publication date
CN114764363B (en) 2023-11-24
CN114691261A (en) 2022-07-01
CN116680033A (en) 2023-09-01
CN114690992B (en) 2023-06-27
CN114690992A (en) 2022-07-01

Similar Documents

Publication Publication Date Title
CN104105169B (en) From method and the device of the WLAN (wireless local area network) that is dynamically connected
CN113938456A (en) Session message top processing method and device
US9466298B2 (en) Word detection functionality of a mobile communication terminal
CN103716451A (en) Image display control apparatus, image display apparatus and image display control method
CN105225096A (en) The disposal route of reminder announced message, device and terminal
EP3337146A1 (en) Method and apparatus for displaying notification message
CN107370772A (en) Account login method, device and computer-readable recording medium
CN109165292A (en) Data processing method, device and mobile terminal
EP3509012B1 (en) Fingerprint recognition method and device
CN105677392A (en) Method and apparatus for recommending applications
CN109032491A (en) Data processing method, device and mobile terminal
CN105516944A (en) Short message canceling method and device
EP3644177A1 (en) Input method, device, apparatus, and storage medium
CN109947522B (en) Information display method, device, terminal, server and storage medium
CN105242837A (en) Application page acquisition method and terminal
CN114764363A (en) Prompting method, prompting device and computer storage medium
CN107977127B (en) Method, device and terminal for updating page
CN111177521A (en) Method and device for determining query term classification model
CN112148148A (en) Touch operation identification method and device, mobile terminal and storage medium
CN104636320A (en) Data processing method and device
CN110166621B (en) Word processing method and terminal equipment
CN107957789B (en) Text input method and mobile terminal
CN111400729B (en) Control method and electronic equipment
CN110113485B (en) Information processing method and mobile terminal
CN107491251B (en) Mobile terminal and fingerprint control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant