CN113301415A - Voice searching method suitable for video playing state - Google Patents

Voice searching method suitable for video playing state

Info

Publication number
CN113301415A
Authority
CN
China
Prior art keywords
voice
frame
displaying
voice command
box
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110481884.7A
Other languages
Chinese (zh)
Inventor
余锋
金凌琳
雷钧杰
李振汉
戴承梁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dangqu Network Technology Hangzhou Co Ltd
Original Assignee
Dangqu Network Technology Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dangqu Network Technology Hangzhou Co Ltd filed Critical Dangqu Network Technology Hangzhou Co Ltd
Priority to CN202110481884.7A priority Critical patent/CN113301415A/en
Publication of CN113301415A publication Critical patent/CN113301415A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/443 OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N 21/4438 Window management, e.g. event handling following interaction with the user interface
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 Execution procedure of a spoken command

Abstract

The invention discloses a voice search method suitable for a video playing state, relates to the technical field of smart televisions, and solves the problem in the related art that video playback is interrupted because the smart television jumps directly from the video playing interface to the search interface corresponding to a voice command. The method comprises the following steps: receiving a wake-up request, judging in response to the wake-up request whether a video is playing on the current interface, and if so, waking up a voice system and acquiring a voice command through the voice system; when the search type of the voice command is question search, displaying a first voice box on the current interface and displaying the answer corresponding to the voice command in the first voice box; and when the search type of the voice command is film source search, displaying a second voice box on the current interface and displaying the programs corresponding to the voice command in the second voice box. The invention enables the search result corresponding to a voice command to be displayed while the video remains in the playing state.

Description

Voice searching method suitable for video playing state
Technical Field
The invention relates to the technical field of smart televisions, and in particular to a voice search method suitable for a video playing state.
Background
With the continuous development of the internet and big data, smart televisions are becoming increasingly intelligent and diversified; for example, devices such as set-top boxes, televisions and projectors are equipped with a smart television operating system, through which they can perform operations such as portal navigation, program retrieval, software downloading and information uploading.
In the related art, when a smart television is playing a video and the voice system is awakened, a voice command can be obtained through the voice system, and the smart television jumps directly from the video playing interface to the search interface corresponding to the voice command to display the search result, thereby interrupting playback of the video.
At present, no effective solution has been proposed for this problem in the related art, namely that video playback is interrupted because the smart television jumps directly from the video playing interface to the search interface corresponding to a voice command.
Disclosure of Invention
In order to overcome the disadvantages of the related art, an object of the present invention is to provide a voice search method suitable for a video playing state, so that the search result corresponding to a voice command can be displayed while the video remains in the playing state.
The object of the invention is achieved by the following technical solution:
a voice search method suitable for a video playing state, the method comprising:
receiving a wake-up request, judging in response to the wake-up request whether a video is playing on the current interface, and if so, waking up a voice system and acquiring a voice command through the voice system;
when the search type of the voice command is question search, displaying a first voice box on the current interface and displaying the answer corresponding to the voice command in the first voice box;
and when the search type of the voice command is film source search, displaying a second voice box on the current interface and displaying the programs corresponding to the voice command in the second voice box.
In some embodiments, the second voice box displays one or more second entries, and each program corresponding to the voice command is displayed on a corresponding second entry, wherein, if any second entry is selected, the current interface jumps to the interface associated with the program corresponding to that second entry.
In some embodiments, the second voice box comprises a dialog bar and a presentation bar; the voice command is displayed in text form in the dialog bar of the second voice box, and the second entries are displayed in the presentation bar of the second voice box.
In some of these embodiments, the method further comprises:
recording the program corresponding to the second entry where the focus is located as the current program, querying the resource library in which the current program is located, and acquiring the text introduction corresponding to the current program from the resource library;
and displaying the text introduction corresponding to the current program in the dialog bar of the second voice box.
In some embodiments, when the focus switches from a source second entry to a target second entry, the text introduction corresponding to the source second entry is deleted from the dialog bar of the second voice box, and the text introduction corresponding to the target second entry is displayed.
In some of these embodiments, after the voice command is acquired through the voice system, the method further comprises:
judging whether the search type of the current voice command matches the currently displayed voice box; if so, displaying the search result in the currently displayed voice box; if not, switching from the currently displayed voice box to the voice box corresponding to the current voice command, and displaying the search result in that voice box.
In some of these embodiments, when the voice system is woken up, the method further comprises: displaying an initial voice box on the current interface;
displaying the first voice box on the current interface comprises: switching from the initial voice box to the first voice box; and displaying the second voice box on the current interface comprises: switching from the initial voice box to the second voice box.
In some of these embodiments, the first voice box and the second voice box each have an area larger than the area of the initial voice box.
In some embodiments, when the answer corresponding to the voice command is displayed in the first voice box, the method further comprises: when a first cancellation signal is received or the display time of the answer exceeds a first preset value, dismissing the first voice box or switching from the first voice box to the initial voice box.
In some embodiments, when the programs corresponding to the voice command are displayed in the second voice box, the method further comprises: when a second cancellation signal is received or the display time of the programs exceeds a second preset value, dismissing the second voice box or switching from the second voice box to the initial voice box.
Compared with the related art, the invention has the following beneficial effects: when a video is playing on the current interface and the voice system is awakened, a voice box pops up while the video keeps playing, so that playback continues even though the voice system has been awakened; and a first voice box and a second voice box are provided for question search and film source search respectively, so that search results are displayed in a targeted manner and the area occupied by the voice boxes is used efficiently.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
Fig. 1 is a flowchart of a voice search method suitable for a video playing state according to an embodiment of the present application;
Fig. 2 is a schematic diagram of an interface of a smart television displaying a first voice box according to an embodiment of the present application;
Fig. 3 is a schematic diagram of an interface of a smart television displaying a second voice box according to an embodiment of the present application;
Fig. 4 is a flowchart of steps S401 to S404 according to an embodiment of the present application;
Fig. 5 is a flowchart of another implementation of a voice search method suitable for a video playing state according to an embodiment of the present application;
Fig. 6 is a schematic diagram of an interface of a smart television displaying an initial voice box according to an embodiment of the present application.
Detailed description of the invention
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It will be appreciated that the development effort needed to implement the embodiments described herein might be complex and tedious, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure, and such implementation details are not intended to limit the scope of this disclosure.
The embodiments of the present application provide a voice search method suitable for a video playing state, aiming to solve the problem in the related art that, when a video is playing on a smart television, waking up the voice system interrupts playback of the video.
Fig. 1 is a flowchart of a voice search method suitable for a video playing state according to an embodiment of the present application; fig. 2 is a schematic diagram of an interface of a smart television displaying the first voice box according to an embodiment of the present application; fig. 3 is a schematic diagram of an interface of a smart television displaying the second voice box according to an embodiment of the present application. Referring to fig. 1 to 3, the method includes steps S101 to S105.
Step S101: receiving a wake-up request, judging in response to the wake-up request whether a video is playing on the current interface, and if so, executing step S102.
Step S102: waking up the voice system.
Step S103: acquiring a voice command through the voice system.
Step S104: when the search type of the voice command is question search, displaying the first voice box on the current interface and displaying the answer corresponding to the voice command in the first voice box.
Step S105: when the search type of the voice command is film source search, displaying the second voice box on the current interface and displaying the programs corresponding to the voice command in the second voice box.
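For illustration, steps S101 to S105 can be summarized in the following Kotlin sketch. It is not part of the original disclosure: the class, interface and parameter names are hypothetical, and the search back-ends are abstracted as callbacks; the point it demonstrates is that the appropriate voice box is surfaced while playback is never interrupted.

```kotlin
// Illustrative sketch of steps S101-S105; all names are hypothetical, not from the patent.
enum class SearchType { QUESTION, FILM_SOURCE, OTHER }

interface VoiceSystem {
    fun wakeUp()
    fun nextCommand(): String                       // recognized text of the voice command
    fun classify(command: String): SearchType
}

class VoiceSearchController(
    private val voiceSystem: VoiceSystem,
    private val isVideoPlaying: () -> Boolean,
    private val showFirstVoiceBox: (answer: String) -> Unit,          // question search
    private val showSecondVoiceBox: (programs: List<String>) -> Unit, // film source search
    private val searchAnswer: (String) -> String,
    private val searchPrograms: (String) -> List<String>
) {
    // Step S101: only react to the wake-up request while a video is playing.
    fun onWakeUpRequest() {
        if (!isVideoPlaying()) return
        voiceSystem.wakeUp()                        // step S102
        val command = voiceSystem.nextCommand()     // step S103
        when (voiceSystem.classify(command)) {
            SearchType.QUESTION ->                  // step S104
                showFirstVoiceBox(searchAnswer(command))
            SearchType.FILM_SOURCE ->               // step S105
                showSecondVoiceBox(searchPrograms(command))
            SearchType.OTHER -> { /* focus movement, exiting, powering off, etc. */ }
        }
        // Playback is never paused or replaced by a full search interface.
    }
}
```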
It should be understood that the steps of the method are performed by an executing device. Specifically, the executing device may be a server, a cloud server, a smart television, a processor, or the like, although it is not limited to these types; it is worth noting that, when the executing device is not the smart television itself, it is communicatively connected with the smart television and controls the smart television to perform the corresponding operations.
In summary, when a video is playing on the current interface and the voice system is awakened, a voice box pops up while the video keeps playing, so that playback continues even though the voice system has been awakened; and a first voice box and a second voice box are provided for question search and film source search respectively, so that search results are displayed in a targeted manner and the area occupied by the voice boxes is used efficiently.
As an alternative embodiment, for step S101, the manner in which the wake-up request is generated is not limited here, but two examples can be given. In one example, a voice icon is displayed on the current interface of the smart television; the user moves the focus to the voice icon with the remote control device and selects it, whereupon the smart television generates the wake-up request. In another example, the smart television comprises a voice collecting device that collects and uploads voice packets; the smart television judges whether a received voice packet is intended to wake up the voice system, and if so, generates the wake-up request.
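Both triggers can feed the same wake-up entry point. A minimal sketch, with hypothetical names and an assumed wake word (neither is specified in the disclosure):

```kotlin
// Two illustrative sources of the wake-up request described above.
class WakeUpDispatcher(private val onWakeUpRequest: () -> Unit) {

    // Source 1: the user moves focus to the on-screen voice icon with the
    // remote control and confirms the selection.
    fun onVoiceIconSelected() = onWakeUpRequest()

    // Source 2: the voice collecting device uploads an audio packet; the TV
    // checks whether it contains the wake word (a hypothetical check here).
    fun onVoicePacket(transcript: String, wakeWord: String = "hi tv") {
        if (transcript.trim().lowercase().startsWith(wakeWord)) onWakeUpRequest()
    }
}
```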
As an alternative embodiment, for steps S102 and S103, once the voice system is woken up the smart television supports voice control: the user uploads a voice packet via the voice collecting device, the voice system converts the voice packet into a computer-readable voice command, and the executing device performs the corresponding operation according to the voice command. The voice command is not limited to search commands and may also cover focus movement, entry selection, exiting, powering off, sleeping, and the like.
As an alternative embodiment, for steps S104 and S105, after acquiring the voice command the executing device may determine its search type; the determination manner is not limited as long as the type can be classified. For example, a command such as "play xxx" can be judged to be a film source search, while a command such as "what is xxx" can be judged to be a question search.
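A simple rule-based classifier along the lines of these examples might look as follows; the keyword rules are assumptions made for illustration and do not limit the determination manner described above:

```kotlin
enum class SearchType { QUESTION, FILM_SOURCE, OTHER }

// Illustrative keyword rules: "play xxx" -> film source search,
// question-like phrasing -> question search, everything else -> other commands.
fun classifyCommand(command: String): SearchType {
    val text = command.trim().lowercase()
    return when {
        text.startsWith("play") || text.startsWith("watch") -> SearchType.FILM_SOURCE
        text.startsWith("what") || text.startsWith("who") ||
            text.startsWith("how") || text.endsWith("?")     -> SearchType.QUESTION
        else                                                  -> SearchType.OTHER
    }
}

fun main() {
    println(classifyCommand("Play the latest action movie"))  // FILM_SOURCE
    println(classifyCommand("What is the weather tomorrow"))  // QUESTION
}
```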
As an alternative embodiment, referring to fig. 2, the first voice box includes a dialog bar. When a video is playing on the current interface of the smart television, the first voice box may be surfaced on the current interface, with both the voice command and its corresponding answer displayed in text form in the dialog bar of the first voice box.
Further, the method may include a first voice box displaying step, specifically: judging whether the video is played in full-screen mode; if so, the first voice box is floated over the video playing area, and if not, the first voice box is floated over a non-video playing area. This reduces interference with video playback and thus improves the user experience. It should be noted that, when the first voice box surfaces on the current interface, the transparency of its border and/or background color may be adjusted, preferably to between 40% and 60%.
Further, the first voice box displaying step may also adopt another implementation, in which the first voice box and the original area of the current interface are arranged in a split-screen layout; the specific manner is not limited here.
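The full-screen check, the 40%–60% transparency range and the split-screen alternative described above can be combined into one placement decision. A hedged sketch with hypothetical types (the 50% default opacity is merely a value inside the stated range):

```kotlin
// Illustrative placement of the first voice box; the data types are hypothetical.
sealed class VoiceBoxPlacement {
    data class Overlay(val overVideoArea: Boolean, val alpha: Double) : VoiceBoxPlacement()
    object SplitScreen : VoiceBoxPlacement()
}

fun placeFirstVoiceBox(isFullScreen: Boolean, useSplitScreen: Boolean = false): VoiceBoxPlacement =
    when {
        useSplitScreen -> VoiceBoxPlacement.SplitScreen
        // Full-screen video: float the box over the video area with a
        // semi-transparent border/background (40%-60% per the description).
        isFullScreen   -> VoiceBoxPlacement.Overlay(overVideoArea = true, alpha = 0.5)
        // Windowed video: keep the box out of the video playing area entirely.
        else           -> VoiceBoxPlacement.Overlay(overVideoArea = false, alpha = 0.5)
    }
```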
As an alternative embodiment, the method may further include a second voice box displaying step, for which reference may be made to the description of the first voice box displaying step; the two steps differ only in the format of the first and second voice boxes.
As an alternative embodiment, referring to fig. 3, the second voice box displays one or more second entries, and each program corresponding to the voice command is displayed on a corresponding second entry; when any second entry is selected, the current interface jumps to the interface associated with the program corresponding to that second entry.
With this arrangement, the user can browse the second entries while continuing to watch the video and, accordingly, can select a second entry to trigger an interface jump, so that jumping happens only at the user's own initiative.
It is worth mentioning that programs may be assigned to the second entries according to their rank under any one or a combination of relevance, popularity, recommendation score, collection count, click count and rating, with higher-ranked programs placed earlier. For example, when popularity is used, the most popular recommended program is placed on the first second entry, and so on.
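Read as an algorithm, this ordering rule scores each recommended program on one or more of the listed signals and fills the second entries in descending order of score. The sketch below uses an assumed equal-weight sum and an assumed entry count; neither value comes from the disclosure:

```kotlin
// Illustrative ranking of programs onto the second entries of the second voice box.
data class Program(
    val title: String,
    val relevance: Double = 0.0,
    val popularity: Double = 0.0,
    val recommendation: Double = 0.0
)

// Positive feedback: higher combined score -> earlier second entry.
fun rankForSecondEntries(candidates: List<Program>, maxEntries: Int = 8): List<Program> =
    candidates
        .sortedByDescending { it.relevance + it.popularity + it.recommendation }
        .take(maxEntries)

fun main() {
    val ranked = rankForSecondEntries(
        listOf(
            Program("Drama A", popularity = 0.9),
            Program("Drama B", popularity = 0.7, relevance = 0.5)
        )
    )
    println(ranked.map { it.title })  // highest combined score first: [Drama B, Drama A]
}
```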
Further, referring to fig. 3, the second voice box includes a dialog bar and a presentation bar, wherein the voice command may be displayed in text form in the dialog bar of the second voice box, and each program corresponding to the voice command is displayed on a second entry in the presentation bar.
It should be noted that each program is presented on its corresponding second entry in the form of an introduction clip and/or an introduction poster. Where a second entry shows only the introduction clip, the clip can be played when the focus stays on that entry; the entry is preferably enlarged during playback, by about 10% to 20%, without occluding the other second entries, and is restored to its original size after playback completes.
It should further be noted that, where both the introduction clip and the introduction poster are available for a second entry, the introduction poster is displayed by default; when the focus stays on the entry, the poster is switched to the introduction clip, which is played as described above and is not detailed again here. After the clip finishes playing, the entry may switch back to the poster or continue playing the clip.
As an alternative embodiment, the method may further comprise the following steps.
The program corresponding to the second entry where the focus is located is recorded as the current program, the resource library containing the current program is queried, and the text introduction corresponding to the current program is acquired from that resource library. The focus can be moved by voice through the voice system or with the remote control. The resource library for the text introductions should be the same one that holds the introduction posters and introduction clips described above, and it preferably belongs to the player of the video being played on the current interface, so that the corresponding interface can be reached quickly when any second entry is selected.
The text introduction corresponding to the current program is then displayed in the dialog bar of the second voice box, so that the user can easily learn about the program content, improving the user experience.
Further, when the focus switches from a source second entry to a target second entry, the text introduction corresponding to the source second entry is deleted from the dialog bar of the second voice box and the text introduction corresponding to the target second entry is displayed instead. In this way only one text introduction, the one corresponding to the focused second entry, is shown in the dialog bar at any time, which reduces the area occupied by the introduction, makes it clear which second entry the introduction refers to, and helps the user understand and select a second entry quickly.
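The focus-driven behaviour of the dialog bar can be sketched as follows; the resource library is reduced to an in-memory map, which is an assumption for illustration only:

```kotlin
// Illustrative handling of the single text introduction in the dialog bar
// of the second voice box; the resource library is modelled as a simple lookup.
class SecondVoiceBoxDialog(private val repository: Map<String, String>) {
    var currentIntroduction: String? = null
        private set

    // Called whenever focus lands on (or switches to) a second entry.
    fun onFocusedProgram(programTitle: String) {
        // The previous introduction is discarded so that only one introduction,
        // the one matching the focused entry, is ever shown.
        currentIntroduction = repository[programTitle] ?: ""
    }
}

fun main() {
    val dialog = SecondVoiceBoxDialog(mapOf("Drama A" to "A family drama in 40 episodes."))
    dialog.onFocusedProgram("Drama A")
    println(dialog.currentIntroduction)
}
```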
As an alternative embodiment, fig. 4 is a flowchart of steps S401 to S404 according to an embodiment of the present application; referring to fig. 1 to 4, the method may further include steps S401 to S404.
Step S401: recording the voice command most recently acquired through the voice system as the current voice command. It should be noted that this step is performed while a video is playing on the current interface of the smart television and the search result corresponding to the current voice command has not yet been displayed in the first or second voice box.
Step S402: determining whether the search type of the current voice command matches the currently displayed voice box; if so, executing step S403, and if not, executing step S404. It should be noted that, when the current voice command is the first voice command received, the smart television may directly display the first or second voice box corresponding to that command's search type.
Step S403: displaying the search result in the currently displayed voice box. For example, if the search type of the current voice command is question search and the voice box currently displayed by the smart television is the first voice box, no switching of voice boxes is needed, and the search result for the current voice command can be displayed directly in the first voice box.
Step S404: switching from the currently displayed voice box to the voice box corresponding to the current voice command, and displaying the search result in that voice box. For example, if the search type of the current voice command is question search but the voice box currently displayed by the smart television is the second voice box, the voice boxes must be switched.
With this arrangement, besides ensuring that the search result corresponding to the voice command is displayed while the video remains in the playing state, the search result always matches the voice box that shows it. As a result, only one voice box is displayed on the current interface at a time, avoiding the situation where multiple voice boxes appear on the current interface and seriously degrade the viewing experience.
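Steps S401 to S404 amount to comparing the search type of the newest command with the type of the voice box already on screen and switching only on a mismatch, so that at most one voice box is ever visible. A minimal sketch with hypothetical callback names:

```kotlin
// Illustrative sketch of steps S401-S404: keep exactly one voice box on screen.
enum class BoxType { FIRST, SECOND }   // FIRST: question search, SECOND: film source search

class VoiceBoxSwitcher(
    private val show: (BoxType) -> Unit,            // surfaces the requested box (hiding the other)
    private val render: (BoxType, String) -> Unit   // displays the search result in that box
) {
    private var displayed: BoxType? = null

    fun onCurrentCommand(searchIsQuestion: Boolean, searchResult: String) {
        val wanted = if (searchIsQuestion) BoxType.FIRST else BoxType.SECOND
        if (displayed != wanted) {      // steps S402/S404: switch only on a mismatch
            show(wanted)
            displayed = wanted
        }
        render(wanted, searchResult)    // step S403: display the result in the matching box
    }
}
```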
As an alternative embodiment, fig. 5 is a flowchart of another implementation of a voice search method suitable for a video playing state according to an embodiment of the present application, and fig. 6 is a schematic diagram of an interface of a smart television displaying an initial voice box according to an embodiment of the present application.
Referring to fig. 2, 3, 5 and 6, the method may include steps S501 to S505. It is worth mentioning that, for some of these steps, reference may be made to the corresponding descriptions in the embodiments above.
Step S501: receiving a wake-up request, judging in response to the wake-up request whether a video is playing on the current interface, and if so, executing step S502.
Step S502: waking up the voice system and displaying an initial voice box on the current interface. It should be noted that the area of the initial voice box is smaller than the area of the first voice box and smaller than the area of the second voice box.
Step S503: acquiring a voice command through the voice system. The initial voice box may display the recognized text of the voice command as it arrives, may display only the voice icon, or may display a prompt such as "How can I help?", "Please speak", or "Voice control is on".
Step S504: if the search type of the voice command is question search, switching from the initial voice box to the first voice box and displaying the answer corresponding to the voice command in the first voice box.
Step S505: if the search type of the voice command is film source search, switching from the initial voice box to the second voice box and displaying the programs corresponding to the voice command in the second voice box.
With this arrangement, because the moment the voice system is awakened and the moment a voice command is uploaded are not synchronous, the initial voice box is displayed once the voice system is awakened to inform the user that voice control is active; since the initial voice box has a small area, its impact on viewing is kept low.
Further, for step S504, when a first cancellation signal is received or the display duration of the answer exceeds a first preset value, the first voice box is dismissed or switched back to the initial voice box. It is worth mentioning that the first cancellation signal corresponds to a voice command whose search type is question search. The first preset value is not limited here, but should be positively correlated with the number of characters of the search result shown in the first voice box, i.e. the more characters, the larger the first preset value.
Further, for step S505, when a second cancellation signal is received or the display duration of the programs exceeds a second preset value, the second voice box is dismissed or switched back to the initial voice box. It is worth mentioning that the second cancellation signal corresponds to a voice command whose search type is film source search. Since the number of second entries is generally fixed, the second preset value should be fixed.
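The two thresholds can be derived as described: the first preset value grows with the character count of the answer, while the second preset value stays fixed. The constants in the sketch below are assumptions chosen only to illustrate the positive correlation:

```kotlin
// Illustrative computation of the display-duration thresholds; constants are assumed.
const val BASE_SECONDS = 5.0
const val SECONDS_PER_CHARACTER = 0.15      // longer answers stay on screen longer

// First preset value: positively correlated with the answer's character count.
fun firstPresetSeconds(answer: String): Double =
    BASE_SECONDS + SECONDS_PER_CHARACTER * answer.length

// Second preset value: fixed, since the number of second entries is fixed.
fun secondPresetSeconds(): Double = 20.0

fun main() {
    println(firstPresetSeconds("Tomorrow will be sunny, 18 to 25 degrees."))  // ~= 11.15 s
    println(secondPresetSeconds())
}
```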
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application is not to be construed as limiting in number and may refer to the singular or the plural. The terms "comprises," "comprising," "including," "has," "having," and any variations thereof, as referred to herein, are intended to cover a non-exclusive inclusion. Reference to "connected," "coupled," and the like in this application is not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as referred to herein means two or more. The term "and/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean A alone, both A and B, or B alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. The terms "first," "second," "third," and the like are used herein merely to distinguish similar objects and do not denote a particular ordering of the objects.
The above examples express only several embodiments of the present application, and although they are described in specific detail, they should not be construed as limiting the scope of the invention. It will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A voice search method suitable for a video playing state, the method comprising:
receiving a wake-up request, judging in response to the wake-up request whether a video is playing on the current interface, and if so, waking up a voice system and acquiring a voice command through the voice system;
when the search type of the voice command is question search, displaying a first voice box on the current interface and displaying the answer corresponding to the voice command in the first voice box;
and when the search type of the voice command is film source search, displaying a second voice box on the current interface and displaying the programs corresponding to the voice command in the second voice box.
2. The method according to claim 1, wherein the second voice box displays one or more second entries, and each program corresponding to the voice command is displayed on a corresponding second entry, wherein, if any second entry is selected, the current interface jumps to the interface associated with the program corresponding to that second entry.
3. The method of claim 2, wherein the second voice box comprises a dialog bar and a presentation bar, the voice command is displayed in text form in the dialog bar of the second voice box, and the second entries are displayed in the presentation bar of the second voice box.
4. The method of claim 2, further comprising:
recording the program corresponding to the second entry where the focus is located as the current program, querying the resource library in which the current program is located, and acquiring the text introduction corresponding to the current program from the resource library;
and displaying the text introduction corresponding to the current program in the dialog bar of the second voice box.
5. The method of claim 4, wherein, when the focus switches from a source second entry to a target second entry, the text introduction corresponding to the source second entry is deleted from the dialog bar of the second voice box and the text introduction corresponding to the target second entry is displayed.
6. The method of claim 1, wherein, after the voice command is acquired through the voice system, the method further comprises:
judging whether the search type of the current voice command matches the currently displayed voice box; if so, displaying the search result in the currently displayed voice box; if not, switching from the currently displayed voice box to the voice box corresponding to the current voice command, and displaying the search result in that voice box.
7. The method according to any one of claims 1 to 6, wherein, when the voice system is woken up, the method further comprises: displaying an initial voice box on the current interface;
displaying the first voice box on the current interface comprises: switching from the initial voice box to the first voice box; and displaying the second voice box on the current interface comprises: switching from the initial voice box to the second voice box.
8. The method of claim 7, wherein the first voice box and the second voice box each have an area larger than the area of the initial voice box.
9. The method of claim 7, wherein, when the answer corresponding to the voice command is displayed in the first voice box, the method further comprises: when a first cancellation signal is received or the display time of the answer exceeds a first preset value, dismissing the first voice box or switching from the first voice box to the initial voice box.
10. The method of claim 8, wherein, when the programs corresponding to the voice command are displayed in the second voice box, the method further comprises: when a second cancellation signal is received or the display time of the programs exceeds a second preset value, dismissing the second voice box or switching from the second voice box to the initial voice box.
CN202110481884.7A 2021-04-30 2021-04-30 Voice searching method suitable for video playing state Pending CN113301415A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110481884.7A CN113301415A (en) 2021-04-30 2021-04-30 Voice searching method suitable for video playing state

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110481884.7A CN113301415A (en) 2021-04-30 2021-04-30 Voice searching method suitable for video playing state

Publications (1)

Publication Number Publication Date
CN113301415A true CN113301415A (en) 2021-08-24

Family

ID=77320969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110481884.7A Pending CN113301415A (en) 2021-04-30 2021-04-30 Voice searching method suitable for video playing state

Country Status (1)

Country Link
CN (1) CN113301415A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170264939A1 (en) * 2011-07-19 2017-09-14 Lg Electronics Inc. Electronic device and method for controlling the same
CN103916708A (en) * 2013-01-07 2014-07-09 三星电子株式会社 Display apparatus and method for controlling the display apparatus
CN106462617A (en) * 2014-06-30 2017-02-22 苹果公司 Intelligent automated assistant for tv user interactions
CN107958668A (en) * 2017-12-15 2018-04-24 中广热点云科技有限公司 The acoustic control of smart television selects broadcasting method, acoustic control to select broadcast system
CN108965968A (en) * 2018-07-25 2018-12-07 聚好看科技股份有限公司 Methods of exhibiting, device and the computer storage medium of smart television operation indicating
CN111696549A (en) * 2020-06-02 2020-09-22 深圳创维-Rgb电子有限公司 Picture searching method and device, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113891157A (en) * 2021-11-11 2022-01-04 百度在线网络技术(北京)有限公司 Video playing method, video playing device, electronic equipment, storage medium and program product
CN114125541A (en) * 2021-11-11 2022-03-01 百度在线网络技术(北京)有限公司 Video playing method, video playing device, electronic equipment, storage medium and program product

Similar Documents

Publication Publication Date Title
US20200396497A1 (en) Recommended content display method and apparatus, terminal, and computer-readable storage medium
CN108881994B (en) Video access method, client, device, terminal, server and storage medium
US9285945B2 (en) Method and apparatus for displaying multi-task interface
WO2020000973A1 (en) Information access method, client, information access apparatus, terminal, server, and storage medium
US8321888B2 (en) TV tutorial widget
KR20120052285A (en) System and method for searching in internet on a video device
KR20130018464A (en) Electronic apparatus and method for controlling electronic apparatus thereof
CN103546821A (en) Method and device for regulating video playing interface
CN112437353B (en) Video processing method, video processing device, electronic apparatus, and readable storage medium
US20190230311A1 (en) Video interface display method and apparatus
CN113301415A (en) Voice searching method suitable for video playing state
US20230045363A1 (en) Video playback method and apparatus, computer device, and storage medium
CN111327931A (en) Viewing history display method and display device
CN108810580B (en) Media content pushing method and device
CN111479145A (en) Display device and television program pushing method
CN111526402A (en) Method for searching video resources through voice of multi-screen display equipment and display equipment
CN111654732A (en) Advertisement playing method and display device
CN113852870A (en) Channel list display method and display equipment
CN110798701B (en) Video update pushing method and terminal
CN112543365B (en) Media information playing method, device, equipment and computer readable storage medium
CN113301395B (en) Voice searching method combined with user grade in video playing state
CN115460452A (en) Display device and channel playing method
CN113301394A (en) Voice control method combined with user grade
CN113301416B (en) Voice frame display method
JP6970729B2 (en) TV desktop display method and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20210824