CN111147777A - Intelligent terminal voice interaction method and device and storage medium - Google Patents


Info

Publication number
CN111147777A
Authority
CN
China
Prior art keywords
voice interaction
view
sub
intelligent terminal
clickable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910970868.7A
Other languages
Chinese (zh)
Inventor
刘燚
周文杰
杨掌州
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TCL Digital Technology Co Ltd
Original Assignee
Shenzhen TCL Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TCL Digital Technology Co Ltd
Priority to CN201910970868.7A
Publication of CN111147777A
Legal status: Pending

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/44 Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/445 Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • H04N5/44504 Circuit details of the additional information generator, e.g. details of the character or graphics signal generator, overlay mixing circuits
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a voice interaction method for an intelligent terminal, comprising the following steps: acquiring the clickable sub-views in the display interface of the intelligent terminal; adding sequence corner marks to the clickable sub-views; and receiving voice interaction information from a user and interacting according to the corner-mark information it contains. The invention also discloses an intelligent terminal voice interaction device and a computer readable storage medium. The invention provides a better, simpler, and more convenient voice interaction method for intelligent terminals.

Description

Intelligent terminal voice interaction method and device and storage medium
Technical Field
The invention relates to the field of television interaction, in particular to a voice interaction method and device for an intelligent terminal and a computer readable storage medium.
Background
With the development of the internet and artificial intelligence, television speech recognition technology has advanced rapidly in recent years, and ever more television manufacturers are releasing products with a speech recognition function to meet user needs. A television with speech recognition can interact with the user more naturally, simplify the process of selecting a film, and provide a better viewing experience.
However, television voice interaction products currently on the market have several problems. Many applications installed on smart televisions do not support voice interaction; in particular, third-party applications that have not been adapted by the system are essentially unusable in voice interaction scenarios. Meanwhile, because of limitations of the intelligent voice system, the user's interaction intention is often not grasped accurately during interaction, and interaction errors occur. This harms the user's interactive experience and reduces the user's willingness to interact by voice.
Disclosure of Invention
The invention mainly aims to provide an intelligent terminal voice interaction method, an intelligent terminal voice interaction device, and a computer readable storage medium, so as to provide a better and simpler voice interaction method for intelligent terminals.
In order to achieve the above object, the present invention provides an intelligent terminal voice interaction method, which comprises the following steps:
acquiring a clickable sub-view in the display interface of the intelligent terminal;
adding a sequence corner mark to the clickable sub-view;
and receiving voice interaction information of a user, and interacting according to information about the corner mark in the voice interaction information.
Optionally, the step of obtaining a clickable sub-view in the display interface of the intelligent terminal includes:
acquiring a display interface of the intelligent terminal;
and traversing the display interface to obtain a clickable sub-view in the display interface.
Optionally, the step of traversing the display interface and acquiring a clickable sub-view in the display interface includes:
acquiring a window of the display interface;
obtaining a view root node corresponding to the window according to the window;
traversing the view root node, obtaining a sub-view corresponding to the view root node, and generating a sub-view list;
and traversing the sub-view list to obtain the clickable sub-view.
Optionally, the step of adding a sequence corner mark to the clickable sub-view includes:
acquiring coordinate information of the clickable sub-view;
and adding corresponding sequence corner marks to the clickable sub-views according to the coordinate information.
Optionally, the step of adding a corresponding sequence corner mark to the clickable sub-view according to the coordinate information includes:
generating an auxiliary layer with sequence corner marks according to the coordinate information;
and binding the auxiliary layer with the sequence corner mark to the clickable sub-view.
Optionally, the step of adding a corresponding sequence corner mark to the clickable sub-view according to the coordinate information includes:
and adding a corresponding sequence corner mark at the lower left corner of the clickable sub-view according to the coordinate information.
Optionally, the step of receiving voice interaction information of a user and interacting according to the information about the corner mark in the voice interaction information includes:
and receiving voice interaction information of a user, and triggering the sub-view corresponding to the corner mark according to the information about the corner mark in the voice interaction information to perform voice interaction.
Optionally, the intelligent terminal voice interaction method further includes the following steps:
and after the voice interaction is finished, hiding the sequence corner mark of the clickable sub-view.
In addition, in order to achieve the above object, the present invention further provides an intelligent terminal voice interaction apparatus, including: a memory, a processor, and an intelligent terminal voice interaction program stored on the memory and executable on the processor, wherein the intelligent terminal voice interaction program, when executed by the processor, implements the steps of the intelligent terminal voice interaction method described above.
In addition, in order to achieve the above object, the present invention further provides a computer readable storage medium on which an intelligent terminal voice interaction program is stored; when executed by a processor, the program implements the steps of the intelligent terminal voice interaction method described above.
The invention provides an intelligent terminal voice interaction method, an intelligent terminal voice interaction device, and a computer storage medium. In the method, the clickable sub-views in the display interface of the intelligent terminal are obtained; sequence corner marks are added to the clickable sub-views; and voice interaction information of the user is received and interaction is performed according to the corner-mark information it contains.
By adding sequence corner marks to the clickable sub-views, a correspondence between each corner mark and its sub-view is established, so the user can see at a glance which mark belongs to which sub-view and can issue a clear, simple voice instruction for a sub-view by speaking its mark. Acting on the corner-mark information in the user's voice input then completes the voice interaction with the corresponding sub-view. This greatly simplifies and optimizes the user's voice interaction process and improves voice interaction efficiency.
Moreover, because the scheme is implemented at the bottom layer of the system, individual applications need no voice-specific configuration or modification. This removes the restriction that only specially adapted system applications support voice interaction, greatly expands the range of voice interaction available to the user, and improves the user's voice interaction experience and engagement.
Drawings
FIG. 1 is a schematic diagram of an apparatus in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of a voice interaction method of an intelligent terminal according to the present invention;
FIG. 3 is a flowchart illustrating a voice interaction method of an intelligent terminal according to a second embodiment of the present invention;
FIG. 4 is a flowchart illustrating a voice interaction method of an intelligent terminal according to a third embodiment of the present invention;
FIG. 5 is a flowchart illustrating a fourth embodiment of a voice interaction method of an intelligent terminal according to the present invention;
FIG. 6 is a flowchart illustrating a fifth embodiment of a voice interaction method of an intelligent terminal according to the present invention;
FIG. 7 is a flowchart illustrating a voice interaction method of an intelligent terminal according to a sixth embodiment of the present invention;
FIG. 8 is a flowchart illustrating a voice interaction method of an intelligent terminal according to a seventh embodiment of the present invention;
FIG. 9 is a flowchart illustrating an eighth embodiment of a voice interaction method for an intelligent terminal according to the present invention;
FIG. 10 is a schematic diagram of a display interface in the sixth embodiment of the voice interaction method for an intelligent terminal according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention can be a PC, and can also be a terminal device with a data processing function, such as a smart phone, a tablet computer, a portable computer and the like.
As shown in fig. 1, the terminal may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, where the communication bus 1002 is used to realize connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory), and may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may further include a camera, a radio frequency (RF) circuit, sensors, an audio circuit, a Wi-Fi module, and the like. The sensors may include light sensors, motion sensors, and others. Specifically, the light sensor may include an ambient light sensor, which can adjust the brightness of the display screen according to the ambient light, and a proximity sensor, which can turn off the display screen and/or the backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used in applications that recognize the attitude of the mobile terminal (such as portrait/landscape switching, related games, and magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer and tap detection). Of course, the mobile terminal may also be provided with other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, which are not described here again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a smart terminal voice interaction program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call the intelligent terminal voice interaction program stored in the memory 1005, and perform the following operations:
acquiring a clickable sub-view in the display interface of the intelligent terminal;
adding a sequence corner mark to the clickable sub-view;
and receiving voice interaction information of a user, and interacting according to information about the corner mark in the voice interaction information.
Further, the processor 1001 may call the smart terminal voice interaction program stored in the memory 1005, and further perform the following operations:
acquiring a display interface of the intelligent terminal;
and traversing the display interface to obtain a clickable sub-view in the display interface.
Further, the processor 1001 may call the smart terminal voice interaction program stored in the memory 1005, and further perform the following operations:
acquiring a window of the display interface;
obtaining a view root node corresponding to the window according to the window;
traversing the view root node, obtaining a sub-view corresponding to the view root node, and generating a sub-view list;
and traversing the sub-view list to obtain the clickable sub-view.
Further, the processor 1001 may call the smart terminal voice interaction program stored in the memory 1005, and further perform the following operations:
acquiring coordinate information of the clickable sub-view;
and adding corresponding sequence corner marks to the clickable sub-views according to the coordinate information.
Further, the processor 1001 may call the smart terminal voice interaction program stored in the memory 1005, and further perform the following operations:
generating an auxiliary layer with sequence corner marks according to the coordinate information;
and binding the auxiliary image layer with the sequence corner mark with the clickable sub-view.
Further, the processor 1001 may call the smart terminal voice interaction program stored in the memory 1005, and further perform the following operations:
and adding a corresponding sequence corner mark at the lower left corner of the clickable sub-view according to the coordinate information.
Further, the processor 1001 may call the smart terminal voice interaction program stored in the memory 1005, and further perform the following operations:
and receiving voice interaction information of a user, and triggering the sub-view corresponding to the corner mark according to the information about the corner mark in the voice interaction information to perform voice interaction.
Further, the processor 1001 may call the smart terminal voice interaction program stored in the memory 1005, and further perform the following operations:
and after the voice interaction is finished, hiding the sequence corner mark of the clickable sub-view.
The specific embodiment of the intelligent terminal voice interaction device of the present invention is basically the same as the following embodiments of the intelligent terminal voice interaction method, and is not described herein again.
Referring to fig. 2, fig. 2 is a schematic flowchart of a first embodiment of a voice interaction method of an intelligent terminal according to the present invention, where the voice interaction method of the intelligent terminal includes:
s100, acquiring a clickable sub-view in the display interface of the intelligent terminal;
In existing intelligent terminal products, the following problems often exist:
1. The user's speech has an accent that the intelligent voice system cannot recognize;
2. The user's speech mixes Chinese and English, which the intelligent voice system cannot recognize;
3. The user's speech is complex and some words have multiple possible interpretations, which the intelligent voice system cannot resolve.
For the above reasons, the voice interaction effect of existing intelligent terminal products is not ideal. The present application therefore provides an intelligent terminal voice interaction method that can solve these problems.
This embodiment is applied to the voice interaction process of intelligent terminal products; the intelligent terminal may be a smart television, a smart display screen, or other intelligent terminal equipment. In the present application, the clickable sub-views in the display interface of the intelligent terminal are acquired first.
Step S200, adding sequence corner marks to the clickable sub-views;
After the clickable sub-views in the display interface of the intelligent terminal are obtained, sequence corner marks are added to them. A sequence corner mark may follow a numeric sequence such as 1, 2, 3, 4, an alphabetic sequence such as a, b, c, d, or any other ordered sequence. Because digits and English letters are pronounced essentially the same worldwide, accents have little influence on their recognition. Once a sequence corner mark is added to a clickable sub-view, the mark corresponding to each sub-view can be determined clearly and intuitively, and the mark is linked to that sub-view.
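The two label styles mentioned above (numeric marks 1, 2, 3, ... or alphabetic marks a, b, c, ...) can be sketched as follows. This is a purely illustrative Python model, not code from the patent, and the function name `make_marks` is an assumption:

```python
import string

def make_marks(count, style="numeric"):
    """Generate `count` sequence corner-mark labels, either numeric
    (1, 2, 3, ...) or alphabetic (a, b, c, ...)."""
    if style == "numeric":
        return [str(i + 1) for i in range(count)]
    return list(string.ascii_lowercase[:count])

print(make_marks(4))            # ['1', '2', '3', '4']
print(make_marks(3, "alpha"))   # ['a', 'b', 'c']
```

Either sequence works because the marks only need to be short, unambiguous, and easy to pronounce.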
Step S300, receiving voice interaction information of a user, and interacting according to information about the corner mark in the voice interaction information.
Voice interaction information of the user is received; it contains a control instruction relating to a sequence corner mark. Intelligent speech recognition is performed on the voice interaction information to obtain the corner-mark information it contains, and interaction is carried out according to that information.
The invention provides an intelligent terminal voice interaction method, an intelligent terminal voice interaction device, and a computer storage medium. In the method, the clickable sub-views in the display interface of the intelligent terminal are obtained; sequence corner marks are added to the clickable sub-views; and voice interaction information of the user is received and interaction is performed according to the corner-mark information it contains.
By adding sequence corner marks to the clickable sub-views, a correspondence between each corner mark and its sub-view is established, so the user can see at a glance which mark belongs to which sub-view and can issue a clear, simple voice instruction for a sub-view by speaking its mark. Acting on the corner-mark information in the user's voice input then completes the voice interaction with the corresponding sub-view. This greatly simplifies and optimizes the user's voice interaction process and improves voice interaction efficiency.
Moreover, because the scheme is implemented at the bottom layer of the system, individual applications need no voice-specific configuration or modification. This removes the restriction that only specially adapted system applications support voice interaction, greatly expands the range of voice interaction available to the user, and improves the user's voice interaction experience and engagement.
Referring to fig. 3, fig. 3 is a flowchart illustrating a voice interaction method for an intelligent terminal according to a second embodiment of the present invention.
Based on the foregoing embodiment, in this embodiment, step S100 includes:
step S110, acquiring a display interface of the intelligent terminal;
in this embodiment, a display interface of the intelligent terminal is obtained first. The intelligent terminal product can be an intelligent television, an intelligent display screen or other intelligent terminal equipment. The display interface may be used to obtain clickable sub-views in the display interface.
And step S120, traversing the display interface to obtain a clickable sub-view in the display interface.
After the display interface of the intelligent terminal is obtained, it is traversed to obtain the clickable sub-views in it. Specifically, the window of the display interface is obtained first, then the view root node corresponding to that window; a sub-view list is generated by traversing the sub-views under the view root node, and the list is traversed to obtain its clickable sub-views.
Referring to fig. 4, fig. 4 is a flowchart illustrating a voice interaction method of an intelligent terminal according to a third embodiment of the present invention.
Based on the foregoing embodiment, in this embodiment, step S120 includes:
step S121, obtaining a form of the display interface;
in this embodiment, a frame of the display interface is obtained first. Specifically, when the system is compiled, a system code packet is added into a framework layer of the system, so that a form of a display interface of an application currently running in the intelligent terminal is obtained.
Step S122, obtaining a view root node corresponding to the form according to the form;
and after the form is obtained, obtaining a view root node corresponding to the form according to the form. The window body and the view root node have a corresponding relation, and the corresponding view root node can be obtained according to the window body.
Step S123, traversing the view root node, obtaining a sub-view corresponding to the view root node, and generating a sub-view list;
and after the view root node is obtained, traversing the view root node, obtaining a sub-view corresponding to the view root node, and correspondingly generating a sub-view list. The view root node and the child view have a corresponding relationship, and the corresponding child view can be obtained according to the view root node.
Step S124, traversing the child view list to obtain the clickable child view.
After the sub-view list is obtained, it is traversed to obtain the clickable sub-views in it. That is, the Window of the current App is obtained, then its rootView; all ChildViews under the rootView are traversed, and every View whose View.isClickable() returns true is saved to a list.
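The traversal in this embodiment targets the Android Window/rootView/ChildView APIs. As a purely illustrative sketch rather than the patent's code, the same depth-first filtering can be modelled in Python with a stand-in `View` class:

```python
class View:
    """Minimal stand-in for an Android View: a clickable flag plus children."""
    def __init__(self, clickable=False, children=None):
        self.clickable = clickable
        self.children = children or []

def collect_clickable(root):
    """Walk every child view under the root view and keep those whose
    clickable flag is set (the analogue of View.isClickable())."""
    found, stack = [], [root]
    while stack:
        view = stack.pop()
        if view.clickable:
            found.append(view)
        stack.extend(view.children)
    return found

# Example: a root with one clickable tile and one clickable item nested in a row.
root = View(children=[
    View(clickable=True),                   # e.g. a poster tile
    View(children=[View(clickable=True)]),  # nested clickable row item
])
print(len(collect_clickable(root)))  # 2
```

The real implementation would walk `ViewGroup` children via the framework APIs; the stack-based walk above just illustrates that nested clickable views are collected regardless of depth.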
Referring to fig. 5, fig. 5 is a flowchart illustrating a voice interaction method for an intelligent terminal according to a fourth embodiment of the present invention.
Based on the foregoing embodiment, in this embodiment, step S200 includes:
step S210, obtaining coordinate information of the clickable sub-view;
in this embodiment, coordinate information of the clickable sub-view is obtained, and the coordinates include a two-dimensional coordinate system and a three-dimensional coordinate system. Preferably, the coordinates are a two-dimensional coordinate system, and an x coordinate and a y coordinate of the current sub-view on the screen coordinate system and a rectangular area are acquired. The coordinate information may be located to the position of the sub-view on the screen. The method comprises the steps of traversing the View in the current list, acquiring the display state of the View, generating a sequence corner mark for the View, and acquiring the x, y and rectangular areas of the current View on a coordinate system.
And step S220, adding corresponding sequence corner marks to the clickable sub-views according to the coordinate information.
Corresponding sequence corner marks are then added to the clickable sub-views according to the coordinate information. Specifically, the sequence corner marks may be generated from top to bottom and left to right across the current display interface and attached at the lower left corner of each sub-view, or generated from bottom to top and right to left and attached at the lower right corner of each sub-view.
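The top-to-bottom, left-to-right numbering described above amounts to sorting the clickable sub-views by their screen coordinates before assigning marks. The following Python sketch is illustrative only; the view records and field names are invented for the example:

```python
def assign_corner_marks(views):
    """Order views by row (y) first, then column (x), and number them
    1, 2, 3, ... mirroring a top-to-bottom, left-to-right reading order."""
    ordered = sorted(views, key=lambda v: (v["y"], v["x"]))
    return {i + 1: v for i, v in enumerate(ordered)}

# Hypothetical screen layout: one tile in the top row, two in a lower row.
tiles = [
    {"name": "settings", "x": 900, "y": 40},
    {"name": "movie A", "x": 100, "y": 300},
    {"name": "movie B", "x": 500, "y": 300},
]
marks = assign_corner_marks(tiles)
print(marks[1]["name"], marks[2]["name"], marks[3]["name"])
# settings movie A movie B
```

Reversing the sort key would give the bottom-to-top, right-to-left ordering the embodiment also mentions.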
Referring to fig. 6, fig. 6 is a flowchart illustrating a voice interaction method for an intelligent terminal according to a fifth embodiment of the present invention.
Based on the foregoing embodiment, in this embodiment, step S220 includes:
step S221, generating an auxiliary layer with sequence corner marks according to the coordinate information;
in this embodiment, a corresponding sequence corner mark is added to the clickable sub-view according to the coordinate information, an auxiliary layer with a corner mark may be generated according to the coordinate information, and the auxiliary layer is provided with the sequence corner mark generated according to the coordinate information.
Step S222, binding the auxiliary layer with the sequence corner mark to the clickable sub-view.
After the auxiliary layer with the sequence corner mark is generated, it is bound to the clickable sub-view. The bound sequence corner mark is displayed at a corresponding position of the sub-view, such as the lower left or lower right corner.
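Binding an auxiliary corner-mark layer to a sub-view essentially records, for each mark, the view it controls and where the mark overlay is drawn (here the lower left corner). The Python sketch below is illustrative only; the (left, top, right, bottom) rectangle convention and all names are assumptions, not the patent's data structures:

```python
def overlay_position(rect):
    """Anchor the corner mark at the lower-left corner of the view's
    bounding rectangle, given as (left, top, right, bottom)."""
    left, top, right, bottom = rect
    return (left, bottom)

def bind_marks(mark_to_rect):
    """Pair each sequence corner mark with its sub-view rectangle and
    the overlay position where the mark layer is drawn."""
    return {mark: {"view_rect": rect, "mark_at": overlay_position(rect)}
            for mark, rect in mark_to_rect.items()}

layers = bind_marks({1: (100, 300, 260, 390)})
print(layers[1]["mark_at"])  # (100, 390)
```

Keeping the mark-to-view binding in one table also gives the interaction step a direct lookup from a spoken mark to the view it should trigger.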
Referring to fig. 7, fig. 7 is a flowchart illustrating a voice interaction method for an intelligent terminal according to a sixth embodiment of the present invention.
Based on the foregoing embodiment, in this embodiment, step S220 includes:
step S223, adding a corresponding sequence corner mark in the lower left corner of the clickable sub-view according to the coordinate information.
In this embodiment, a corresponding sequence corner mark may be added to the lower left corner of the clickable sub-view according to the coordinate information. Specifically, referring to fig. 10, fig. 10 is a diagram illustrating a numeric sequence corner mark added to the lower left corner of a sub-view on a screen of a display interface of a television.
Referring to fig. 8, fig. 8 is a flowchart illustrating a voice interaction method for an intelligent terminal according to a seventh embodiment of the present invention.
Based on the foregoing embodiment, in this embodiment, step S300 includes:
step S310, receiving voice interaction information of a user, triggering a sub-view corresponding to the corner mark according to information about the corner mark in the voice interaction information, and carrying out voice interaction.
In this embodiment, voice interaction information of the user is received, and interaction is performed according to the corner-mark information it contains: the sub-view corresponding to the spoken corner mark is triggered to carry out the voice interaction.
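The interaction step can be sketched as extracting the corner-mark number from the recognised utterance and triggering the click action bound to it. This is an illustrative Python model; the digit-matching regex and handler names are assumptions, not the patent's implementation:

```python
import re

def handle_utterance(text, mark_to_action):
    """Find the first number in the recognised text; if a sub-view action
    is bound to that sequence corner mark, trigger it."""
    m = re.search(r"\d+", text)
    if m is None:
        return None                      # no corner mark spoken
    action = mark_to_action.get(int(m.group()))
    return action() if action else None  # unknown marks are ignored

clicked = []
bindings = {3: lambda: clicked.append("sub-view 3") or "clicked"}
print(handle_utterance("play number 3", bindings))  # clicked
```

Because the user only needs to speak a digit, recognition is robust even with accents or mixed-language speech, which is the problem this method sets out to solve.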
Referring to fig. 9, fig. 9 is a flowchart illustrating an intelligent terminal voice interaction method according to an eighth embodiment of the present invention.
Based on the above embodiment, the present embodiment further includes the following steps:
and S400, after the voice interaction is finished, hiding the sequence corner mark of the clickable sub-view.
In this embodiment, after the user's voice interaction is completed, the sequence corner marks of the clickable sub-views may be hidden, restoring the television display interface to its original picture.
In addition, the embodiment of the invention also provides a computer readable storage medium.
The computer readable storage medium of the present invention stores an intelligent terminal voice interaction program, and the intelligent terminal voice interaction program, when executed by a processor, implements the steps of the intelligent terminal voice interaction method as described above.
For the method implemented when the intelligent terminal voice interaction program running on the processor is executed, reference may be made to the embodiments of the intelligent terminal voice interaction method of the present invention; details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An intelligent terminal voice interaction method is characterized by comprising the following steps:
acquiring a clickable sub-view in the display interface of the intelligent terminal;
adding a sequence corner mark to the clickable sub-view;
and receiving voice interaction information of a user, and interacting according to information about the corner mark in the voice interaction information.
2. The intelligent terminal voice interaction method of claim 1, wherein the step of obtaining the clickable sub-view in the intelligent terminal display interface comprises:
acquiring a display interface of the intelligent terminal;
and traversing the display interface to obtain a clickable sub-view in the display interface.
3. The intelligent terminal voice interaction method according to claim 2, wherein the step of traversing the display interface and acquiring a clickable sub-view in the display interface comprises:
acquiring a window of the display interface;
obtaining a view root node corresponding to the window according to the window;
traversing the view root node, obtaining a sub-view corresponding to the view root node, and generating a sub-view list;
and traversing the sub-view list to obtain the clickable sub-view.
4. The intelligent terminal voice interaction method of any one of claims 1-3, wherein the step of adding a sequence corner mark to the clickable sub-view comprises:
acquiring coordinate information of the clickable sub-view;
and adding corresponding sequence corner marks to the clickable sub-views according to the coordinate information.
5. The intelligent terminal voice interaction method of claim 4, wherein the step of adding the corresponding sequence corner mark to the clickable sub-view according to the coordinate information comprises:
generating an auxiliary layer with sequence corner marks according to the coordinate information;
and binding the auxiliary layer with the sequence corner marks to the clickable sub-view.
6. The intelligent terminal voice interaction method of claim 4, wherein the step of adding the corresponding sequence corner mark to the clickable sub-view according to the coordinate information comprises:
and adding a corresponding sequence corner mark at the lower left corner of the clickable sub-view according to the coordinate information.
7. The intelligent terminal voice interaction method according to claim 1, wherein the step of receiving voice interaction information of a user and interacting according to the information about the corner mark in the voice interaction information comprises:
and receiving voice interaction information of a user, and triggering the sub-view corresponding to the corner mark according to the information about the corner mark in the voice interaction information to perform voice interaction.
8. The intelligent terminal voice interaction method according to claim 1, further comprising the steps of:
and after the voice interaction is finished, hiding the sequence corner mark of the clickable sub-view.
9. An intelligent terminal voice interaction device, characterized in that the intelligent terminal voice interaction device comprises: a memory, a processor, and an intelligent terminal voice interaction program stored on the memory and executable on the processor, wherein the steps of the intelligent terminal voice interaction method are implemented when the intelligent terminal voice interaction program is executed by the processor.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores thereon a smart terminal voice interaction program, and the smart terminal voice interaction program, when executed by a processor, implements the steps of the smart terminal voice interaction method according to any one of claims 1 to 8.
CN201910970868.7A 2019-10-12 2019-10-12 Intelligent terminal voice interaction method and device and storage medium Pending CN111147777A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910970868.7A CN111147777A (en) 2019-10-12 2019-10-12 Intelligent terminal voice interaction method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910970868.7A CN111147777A (en) 2019-10-12 2019-10-12 Intelligent terminal voice interaction method and device and storage medium

Publications (1)

Publication Number Publication Date
CN111147777A true CN111147777A (en) 2020-05-12

Family

ID=70516849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910970868.7A Pending CN111147777A (en) 2019-10-12 2019-10-12 Intelligent terminal voice interaction method and device and storage medium

Country Status (1)

Country Link
CN (1) CN111147777A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103888800A (en) * 2012-12-20 2014-06-25 Lenovo (Beijing) Co., Ltd. Control method and control device
CN104182124A (en) * 2014-08-25 2014-12-03 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Operating method and device of mobile terminal
CN105100460A (en) * 2015-07-09 2015-11-25 Shanghai Phicomm Data Communication Technology Co., Ltd. Method and system for controlling intelligent terminal by voice
CN105183291A (en) * 2015-09-02 2015-12-23 Shenzhen TCL Digital Technology Co., Ltd. Method and system for extracting information in display interface
CN105988933A (en) * 2016-01-29 2016-10-05 Tencent Technology (Shenzhen) Co., Ltd. Interface operable node identification method and application test method, device and system
CN107657953A (en) * 2017-09-27 2018-02-02 Shanghai Aiyouwei Software Development Co., Ltd. Sound control method and system
CN108364645A (en) * 2018-02-08 2018-08-03 Beijing Qianxin Technology Co., Ltd. Method and device for realizing page interaction based on voice instructions
CN109817204A (en) * 2019-02-26 2019-05-28 Shenzhen Antai Innovation Technology Co., Ltd. Voice interaction method and device, electronic equipment, and readable storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112102823A (en) * 2020-07-21 2020-12-18 深圳市创维软件有限公司 Voice interaction method of intelligent terminal, intelligent terminal and storage medium
CN112102823B (en) * 2020-07-21 2024-06-21 深圳市创维软件有限公司 Voice interaction method of intelligent terminal, intelligent terminal and storage medium

Similar Documents

Publication Publication Date Title
CN112509524B (en) Ink screen quick refreshing method, device, equipment and computer readable storage medium
CN117056622A (en) Voice control method and display device
CN110035181B (en) Method and terminal for setting theme of quick application card
EP2743814A2 (en) Display apparatus and method of providing user interface thereof
CN110865758B (en) Display method and electronic equipment
CN107390922B (en) Virtual touch method, device, storage medium and terminal
CN107957841B (en) Rolling screen capture method and device
US20190369847A1 (en) Image display apparatus and operating method of the same
CN112596609A (en) Display processing method, display processing device and wearable equipment
CN111078113A (en) Sidebar editing method, mobile terminal and computer-readable storage medium
KR20230057932A (en) Data processing method and computer equipment
CN111147777A (en) Intelligent terminal voice interaction method and device and storage medium
CN112416486A (en) Information guiding method, device, terminal and storage medium
CN108471549B (en) Remote control method and terminal
CN111010528A (en) Video call method, mobile terminal and computer readable storage medium
CN111147790A (en) Auxiliary function starting method, mobile terminal and computer readable storage medium
CN110955332A (en) Man-machine interaction method and device, mobile terminal and computer readable storage medium
CN107589954B (en) Application program updating method and device, terminal and computer readable storage medium
CN115423680A (en) Face makeup migration method, device and computer-readable storage medium
CN115379113A (en) Shooting processing method, device, equipment and storage medium
CN112199560B (en) Search method of setting items and display equipment
CN111147750B (en) Object display method, electronic device, and medium
CN109002239B (en) Information display method and terminal equipment
CN110377192B (en) Method, device, medium and electronic equipment for realizing interactive effect
CN112788425A (en) Dynamic area display method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200512