CN114339331A - Playing method, intelligent terminal and computer readable storage medium - Google Patents

Playing method, intelligent terminal and computer readable storage medium

Info

Publication number
CN114339331A
CN114339331A (application CN202011031326.2A)
Authority
CN
China
Prior art keywords
instruction
information
audio
playing
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011031326.2A
Other languages
Chinese (zh)
Inventor
杨文�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TCL New Technology Co Ltd
Original Assignee
Shenzhen TCL New Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TCL New Technology Co Ltd filed Critical Shenzhen TCL New Technology Co Ltd
Priority to CN202011031326.2A priority Critical patent/CN114339331A/en
Publication of CN114339331A publication Critical patent/CN114339331A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a playing method, an intelligent terminal and a computer readable storage medium. The method comprises the following steps: when a confirmation instruction for an audiovisual file is received, acquiring instruction information; determining the name of the corresponding target device and the playing mode according to the instruction information; and generating a corresponding playing instruction according to the audiovisual file and the playing mode, and sending it to the device matching the target device name, so that the target device plays the audiovisual file in that playing mode. The invention enables the same audiovisual resource to be switched between two different devices, improving the convenience with which users move playback from one device to another.

Description

Playing method, intelligent terminal and computer readable storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a playing method, an intelligent terminal, and a computer readable storage medium.
Background
With the development of the Internet, people have more and more devices on which to watch programs and listen to music; a home may contain a television, a mobile phone, a tablet, a stereo, and so on. Many users share cooking videos and broadcasts online, so people often carry a mobile terminal, such as a phone or tablet, into the kitchen to learn new dishes, cooking while watching the video. A user generally has to find a suitable spot in the kitchen to place the mobile terminal so that the video is easy to watch and the device is kept away from kitchen grease.
However, although devices are multiplying, there is no association between them: after watching an audiovisual program on device A, if device A is temporarily unavailable, the user has to search for the same program again on device B. Taking cooking as an example, a user rarely prepares ingredients first and then searches the integrated cooker for how to make a dish; the idea of cooking something is usually triggered while browsing the Internet or watching a program. So even if the integrated cooker can be networked, a user who has just seen a recipe online or in a program must search for it again upon entering the kitchen, which is very inconvenient. Likewise, if the user watches a program while cooking and wants to continue watching afterwards, the only option is to retrieve and play the program again. In short, when viewing audiovisual programs, the degree of association between devices is currently low, which is inconvenient for users.
Disclosure of Invention
The invention mainly aims to provide a playing method, an intelligent terminal and a computer readable storage medium, so as to solve the problem in the prior art that switching the playback of audiovisual programs between different devices is inconvenient.
In order to achieve the above object, the present invention provides a playing method, including the following steps:
when receiving a confirmation instruction for the audiovisual file, acquiring instruction information;
determining the name and the playing mode of the corresponding target equipment according to the instruction information;
and generating a corresponding playing instruction according to the audio-visual file and the playing mode, and sending the corresponding playing instruction to the target equipment corresponding to the name of the target equipment so that the target equipment can play the audio-visual file according to the playing mode.
Optionally, in the playback method, the instruction information includes gesture information; when receiving a confirmation instruction for the audiovisual file, acquiring instruction information, including:
when a confirmation instruction for the audio-visual file is received, video shooting is carried out on the current environment according to preset shooting time, and a gesture video is generated;
and drawing a gesture track according to the gesture video to generate the gesture information.
Optionally, in the playing method, the instruction information includes voice information; when receiving a confirmation instruction for the audiovisual file, acquiring instruction information, including:
when a confirmation instruction aiming at the audio-visual file is received, acquiring audio in a direction corresponding to the positioning information according to preset positioning information, and generating voice information.
Optionally, the playing method, wherein the determining, according to the instruction information, a corresponding name and a playing mode of the target device includes:
determining a gesture code corresponding to the gesture information according to a preset gesture list;
and determining a corresponding playing mode and a target device according to the gesture code.
Optionally, the playing method, wherein the determining, according to the instruction information, a corresponding name and a playing mode of the target device includes:
performing voice recognition on the voice information according to a preset language type to generate text information;
and extracting keywords in the text information according to a preset keyword extraction rule to generate the name and the playing mode of the target equipment.
Optionally, in the playing method, the language types include Mandarin Chinese, Chinese dialects, and foreign languages.
Optionally, the playing method, where the generating, according to the audiovisual file and the playing manner, a corresponding playing instruction and sending the playing instruction to a target device corresponding to the name of the target device, so that the target device plays the audiovisual file according to the playing manner includes:
generating a corresponding operation instruction according to the audio-visual file and the playing mode;
and sending the operating instruction to a pre-connected central control system, and controlling the central control system to send the operating instruction to the target equipment according to the name of the target equipment so that the target equipment can play the audio-visual file.
Optionally, the playing method, wherein the generating a corresponding operation instruction according to the audiovisual file and the playing mode includes:
and writing the audio-visual file, breakpoint information corresponding to the audio-visual file and the playing mode into a preset blank file to generate a corresponding operation instruction.
In addition, to achieve the above object, the present invention further provides an intelligent terminal, wherein the intelligent terminal includes: a memory, a processor and a playback program stored on the memory and executable on the processor, the playback program, when executed by the processor, implementing the steps of the playback method as described above.
In addition, to achieve the above object, the present invention further provides a computer readable storage medium, wherein the computer readable storage medium stores a playback program, and the playback program implements the steps of the playback method as described above when executed by a processor.
In the invention, when the current device receives the user's confirmation that an audiovisual file should be sent and played, it acquires the instruction information issued by the user, determines from that information the name of the target device that will subsequently play the audiovisual file together with the playing mode, then generates a corresponding playing instruction from the audiovisual file and the playing mode and sends it to the device matching the target device name. The user can thus hand the audiovisual file over from the current device to the target device and keep watching the desired audiovisual resource across two different devices without searching again, which is convenient and fast.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of the playing method of the present invention;
FIG. 2 is a flowchart of step S100 according to a preferred embodiment of the playing method of the present invention;
FIG. 3 is a flowchart of a first implementation of step S200 according to a preferred embodiment of the playing method of the present invention;
FIG. 4 is a flowchart illustrating a second implementation manner of step S200 according to the preferred embodiment of the playing method of the present invention;
FIG. 5 is a flowchart of step S300 according to the preferred embodiment of the playing method of the present invention;
fig. 6 is a schematic operating environment diagram of an intelligent terminal according to a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein merely illustrate the invention and are not intended to limit it.
As shown in fig. 1, the playing method according to the preferred embodiment of the present invention includes the following steps:
in step S100, when a confirmation instruction for the audiovisual file is received, instruction information is acquired.
In this embodiment, the execution subject is an intelligent terminal, such as a smart television. Suppose a user watching a cooking program wants to try the same dish: the user presses the selection key for the program on the remote controller, which serves as the confirmation instruction, then chooses the device on which the program should play and presses the "send and play" key on the interface. When the smart television receives this key press, it has obtained the instruction information "send and play" issued by the user.
Further, referring to fig. 2, the instruction information includes gesture information, and step S100 includes:
and step S110, when receiving a confirmation instruction aiming at the audio-visual file, shooting the video of the current environment according to the preset shooting time to generate a gesture video.
Specifically, to make the operation more convenient for the user, when the smart television receives a confirmation instruction for an audiovisual file, that is, when the user selects the program, it turns on the camera mounted on its top and prompts the user, by voice or by a pop-up window on the display, to make a gesture. The camera then shoots video of the current environment for the preset shooting time, producing a gesture video. For example, with a preset shooting time of 10 s, the camera keeps filming the current environment for 10 s after shooting starts.
Because users differ in height and the camera covers only a certain range, a gesture may fall partly outside the shot. To ensure the gesture is captured completely, a photograph can be taken in advance and analyzed to judge whether, at the user's current position, the whole gesture would stay in frame during the subsequent video shooting. Hand recognition is performed on the photograph, and a prediction is made from the recognized hand coordinates as to whether the hand will remain fully visible while the user performs the gesture. If so, video shooting of the current environment proceeds; if not, a prompt is issued through the loudspeaker or display so that the user can adjust where they stand.
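The in-frame check described above can be sketched as a simple margin test. This is a minimal illustration, not the patent's implementation: it assumes some hand detector (not shown) has already produced a bounding box, and the margin value is an arbitrary choice.

```python
def hand_fully_in_frame(hand_box, frame_size, margin=40):
    """Return True if a detected hand bounding box sits far enough inside
    the frame that a subsequent gesture is likely to stay in view.

    hand_box:   (x, y, w, h) of the detected hand, in pixels
    frame_size: (frame_w, frame_h) of the camera image
    margin:     required clearance, in pixels, on every side
    """
    x, y, w, h = hand_box
    frame_w, frame_h = frame_size
    return (x >= margin and y >= margin
            and x + w <= frame_w - margin
            and y + h <= frame_h - margin)
```

If the check fails, the terminal would prompt the user to move before starting the gesture video.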
And step S120, drawing a gesture track according to the gesture video, and generating the gesture information.
Specifically, after the gesture video is obtained, the position coordinates of the hand are analyzed in each frame of the video; the coordinates are then connected in order of their corresponding times, drawing out the gesture track of the video and finally producing the gesture information.
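The track-drawing step, connecting per-frame hand coordinates in time order, can be sketched as follows. This is an illustrative reconstruction under the assumption that hand detection yields one point (or nothing) per frame; the data shapes are hypothetical.

```python
def build_gesture_track(frames):
    """Connect per-frame hand positions into an ordered gesture track.

    frames: iterable of (timestamp, point) pairs, where point is an
    (x, y) tuple or None for frames in which no hand was detected.
    Returns the track as a list of (x, y) points in time order.
    """
    detected = [(t, pt) for t, pt in frames if pt is not None]
    detected.sort(key=lambda item: item[0])  # order by capture time
    return [pt for _, pt in detected]
```

The resulting polyline is what would later be converted into a number, letter or symbol for matching against stored gesture information.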
Further, the instruction information may instead include voice information, and step S100 then includes: when a confirmation instruction for the audiovisual file is received, acquiring audio in the direction corresponding to preset positioning information, and generating the voice information.
Specifically, to make the operation more convenient, the user can issue the instruction information by voice, and the smart television collects it through a built-in microphone, generating the voice information. However, because sound is a wave and waves interfere with one another, a noisy environment degrades the quality of the sound the microphone picks up and lowers the usefulness of the audio. To improve it, positioning information, such as the forward direction, can be preset: during collection, audio from the forward direction is amplified while audio from other directions is treated as noise and suppressed.
When the smart television receives the confirmation instruction for the audiovisual file, that is, when the user selects the program, it activates the built-in microphone and collects audio from the direction given by the preset positioning information, generating the voice information. The collection can run for a preset duration, as with the video shooting above, or an audio interval threshold can be set: when the collected audio contains no voice, the current time is recorded as the first time and a timer is started to measure how long the voice has been absent; if that duration exceeds the audio interval threshold, collection ends and the voice information is generated.
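The silence-timeout logic above can be sketched as a loop over timestamped audio chunks. This is a minimal sketch under stated assumptions: the chunk source and the voice-activity test `has_voice` are stand-ins for real microphone input and a real detector.

```python
def record_until_silence(chunks, has_voice, silence_threshold=2.0):
    """Collect audio chunks until no voice has been heard for longer
    than `silence_threshold` seconds.

    chunks:    iterable of (timestamp, chunk) pairs from the microphone
    has_voice: callable deciding whether a chunk contains speech
    """
    recorded = []
    silence_started = None          # the "first time" voice went missing
    for t, chunk in chunks:
        if has_voice(chunk):
            silence_started = None  # voice resumed, so reset the timer
            recorded.append(chunk)
        else:
            if silence_started is None:
                silence_started = t
            elif t - silence_started > silence_threshold:
                break               # silent too long: stop collecting
    return recorded
```

With the preset-duration variant, the loop would instead stop once `t` exceeds a fixed capture time.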
And step S200, determining the name and the playing mode of the corresponding target equipment according to the instruction information.
Specifically, after receiving the signal of the "send and play" key, the smart television takes the name of the device the user selected for playback as the target device name and sets the playing mode to "play". Besides normal playback, the playing mode may also be "add to playlist", "timed playback", "loop playback", and so on.
Further, in a first implementation manner of this embodiment, the instruction information is gesture information, referring to fig. 3, and step S200 includes:
step S211, determining a gesture code corresponding to the gesture information according to a preset gesture list.
Specifically, a preset gesture list records the gesture code corresponding to each gesture, and each gesture code in turn corresponds to a playing mode and a target device. For example, a circle drawn by the user may correspond to the gesture code "0". A gesture can be a number, a letter, or a fixed shape such as a star. The purpose of the gesture code is to locate the corresponding playing mode and target device quickly, so the code may or may not resemble the gesture itself.
The gesture code for each gesture in the gesture list may be set by the manufacturer before the device leaves the factory, or by the user before first using the function. For example, when the smart television is started for the first time, a window can be provided for initial setup, covering the user account, the language type used for speech recognition, password management, and the gesture code for each gesture. When the user starts gesture entry, the smart television controls the camera to film the user and obtains an initial track, displays that track on the screen, and asks the user to confirm whether it is the track intended for the chosen gesture code; if so, the initial track is bound as the gesture track for the gesture code the user selected. Since the endpoint coordinates of a user's gesture track are never exactly the same twice, the initial track is converted into a corresponding number, letter or symbol, such as a five-pointed star, before being shown for confirmation. The gesture information bound to the track is therefore a number, letter or symbol: when a gesture track later arrives for which a gesture code must be determined, it is converted into its number, letter or symbol and compared with the stored gesture information, and if they match, the corresponding gesture code is determined from that gesture information.
Step S212, according to the gesture code, determining a corresponding playing mode and a corresponding target device.
Specifically, a code list is preset, recording the playing mode and target device corresponding to each gesture code, and both are determined by looking up the gesture code. For example, the gesture code "12" may correspond to the playing mode "play" and the target device "integrated cooker".
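The two lookups, gesture symbol to gesture code (the gesture list) and gesture code to playing mode and target device (the code list), can be sketched as a pair of tables. The table contents here are hypothetical examples; in practice they would come from factory defaults or the user's initial setup described above.

```python
# Hypothetical gesture list: recognized gesture symbol -> gesture code.
GESTURE_LIST = {
    "circle": "0",
    "star":   "12",
}

# Hypothetical code list: gesture code -> (playing mode, target device).
CODE_LIST = {
    "0":  ("add to playlist", "tablet"),
    "12": ("play",            "integrated cooker"),
}

def resolve_gesture(symbol):
    """Map a recognized gesture symbol to its (playing mode, target device)."""
    code = GESTURE_LIST[symbol]
    return CODE_LIST[code]
```

Keeping the two tables separate mirrors the description: the gesture list handles recognition, while the code list carries the device binding and can be edited independently.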
Further, in a second implementation manner of this embodiment, the instruction information is voice information, and referring to fig. 4, step S200 includes:
step S221, according to a preset language type, performing voice recognition on the voice information to generate text information.
Specifically, a number of recognizable language types are preset. To suit users with different language habits, in this embodiment the language type used for speech recognition can be chosen in the settings module when the user first uses the smart television. The language types include Mandarin Chinese, Chinese dialects, and foreign languages. For example, if a user habitually speaks the dialect of region A, the recognition type is set to that dialect in advance; once the voice information is acquired, it is recognized according to that language type, producing the corresponding text information. Common speech recognition methods fall into three broad categories: model matching methods, including vector quantization and dynamic time warping; probabilistic statistical methods, including Gaussian mixture models and hidden Markov models; and discriminative classification methods such as support vector machines, artificial neural networks and deep neural networks, along with various combinations of these.
Step S222, extracting keywords in the text information according to a preset keyword extraction rule, and generating the name and the playing mode of the target device.
Specifically, several keyword extraction rules are set in advance. The basic ones are a playing mode extraction rule and a device name extraction rule, both of which search each word of the text information against a keyword list. The keyword list can be generated from a large amount of training text: named entity recognition is first performed to segment the text into words; each word is then classified, nouns related to operations being marked as playing mode keywords and nouns related to device names as device name keywords; finally the playing mode keywords and device name keywords are written into a preset list, yielding the keyword list. Extracting keywords according to the extraction rule then amounts to matching the text against the keyword list, which can be done with regular expressions, for example with functions of a regular expression library such as `re`. The target device name and playing mode are then determined from the extracted keywords. Because language is varied and different words often refer to the same thing, each keyword is additionally mapped in advance to a target word, and the target device name and playing mode are determined from those target words; for instance, two synonymous keywords may both correspond to the same target word "pineapple".
For example, suppose the smart television and the integrated cooker are linked in the current Internet of Things, and keywords such as the device name "integrated cooker" and the model "type A" all correspond to the same target word "integrated cooker". The text information is matched against these in turn, and a successful match determines that the target device is the integrated cooker, generating the target device name. Given the text information "send this program to the integrated cooker", the extracted target device name is "integrated cooker" and the playing mode is "play".
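The keyword matching just described can be sketched with a keyword-to-target-word table and regular-expression search. All table entries here are hypothetical; a real system would build them from the trained keyword list described above.

```python
import re

# Hypothetical keyword lists: each surface keyword maps to a canonical
# target word, so synonyms and model names resolve to one device name.
DEVICE_KEYWORDS = {
    "integrated cooker": "integrated cooker",
    "type a cooker":     "integrated cooker",
    "tv":                "smart television",
}
MODE_KEYWORDS = {
    "send": "play",
    "play": "play",
    "loop": "loop playback",
}

def extract_instruction(text):
    """Pick the target device name and playing mode out of recognized text."""
    lowered = text.lower()
    device = next((target for kw, target in DEVICE_KEYWORDS.items()
                   if re.search(re.escape(kw), lowered)), None)
    mode = next((target for kw, target in MODE_KEYWORDS.items()
                 if re.search(re.escape(kw), lowered)), None)
    return device, mode
```

A miss (either value `None`) would prompt the user to repeat or rephrase the instruction.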
Step S300, generating a corresponding playing instruction according to the audio-visual file and the playing mode, and sending the corresponding playing instruction to the target equipment corresponding to the name of the target equipment so that the target equipment can play the audio-visual file according to the playing mode.
Specifically, the program the user is watching may be a downloaded video, a streaming network video, or a fixed video link, so the form of the audiovisual file includes, but is not limited to, a network link and video, image and audio files in various formats. Once the playing mode specified by the user is determined, the audiovisual file and the playing mode are packaged to generate the corresponding playing instruction. Before sending it, the smart television can transmit a small data packet to the integrated cooker; if the cooker returns feedback confirming that the data was received, a communication channel between the two is considered established. If such a direct channel exists, the playing instruction is sent to the target device over it; the target device decompresses the instruction to obtain the audiovisual file and playing mode, then performs the corresponding operation on the file, such as playing it immediately, playing it at a set time, playing it in a loop, or adding it to a list to await playback.
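The small-packet handshake that precedes sending can be sketched as follows. This is an illustration only: the probe and acknowledgement payloads are invented placeholders, and the transport itself is abstracted behind a callable.

```python
def channel_established(send_probe, timeout=1.0):
    """Send a small probe packet and treat a returned acknowledgement
    as proof that a communication channel exists between the devices.

    send_probe: callable that transmits the probe over the transport
                and returns the reply bytes, or None on timeout
    """
    reply = send_probe(b"probe", timeout)
    return reply == b"ack"
```

Only if this returns True would the smart television send the full playing instruction directly; otherwise it would fall back to the central control path described below.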
Further, when the home contains only the smart television and the integrated cooker, they can transmit to each other directly; but when the home contains many devices that can interconnect, a central system is needed for control. Referring to fig. 5, step S300 includes:
and step S310, generating a corresponding operation instruction according to the audio-visual file and the playing mode.
Specifically, similar to the above example, the audio-visual file and the playing mode are first packed to generate a corresponding operation instruction.
Further, switching devices mid-program raises a time offset problem that inconveniences the user: for example, after watching a program for ten minutes, the user goes to the kitchen to cook, and on resuming the program on the integrated cooker must drag the progress bar to find the previous position. In this embodiment, the operation instruction is therefore generated by writing the audiovisual file, the breakpoint information corresponding to the audiovisual file, and the playing mode into a preset blank file. The breakpoint information records where the user interrupted viewing; in this embodiment it comprises the playing time or playing content at the moment playback was interrupted. For example, if the user wants the file sent to the integrated cooker after watching it for 1 minute, the breakpoint information is 1 minute.
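Writing the three items into a blank record can be sketched as below. The JSON serialization and field names are assumptions for illustration; the patent only specifies that file, breakpoint information and playing mode are written into a preset blank file.

```python
import json

def build_operation_instruction(av_file, play_mode, breakpoint_seconds):
    """Write the audiovisual file reference, its breakpoint information,
    and the playing mode into one blank record, yielding the serialized
    operation instruction."""
    record = {}                                # the "preset blank file"
    record["file"] = av_file                   # link or file reference
    record["breakpoint"] = breakpoint_seconds  # e.g. 60 for "1 minute in"
    record["mode"] = play_mode                 # e.g. "play", "loop playback"
    return json.dumps(record)
```

The target device would decompress or parse this record to recover all three fields before playback.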
Step S320, the operation instruction is sent to a pre-connected central control system, and the central control system is controlled to send the operation instruction to the target equipment according to the name of the target equipment, so that the target equipment can play the audio-visual file.
Specifically, the Internet of Things (IoT) connects objects to objects and objects to people through information sensors, radio frequency identification and similar technologies, enabling intelligent sensing, identification and management; common IoT technologies include the narrowband Internet of Things and local area networks. In this embodiment, a central control system is preset on this basis: it is communicatively connected with all the devices in a given environment and is responsible for communication among them. After the operation instruction is generated, it is sent over the pre-established connection to the central control system, which is controlled to find, among the devices connected to it, the one whose name matches the target device name and to forward the operation instruction to that target device. The target device decompresses the instruction to obtain the audiovisual file and the playing mode, and plays the file in that mode.
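The central control system's name-based routing can be sketched as a registry that forwards instructions. This is a minimal sketch of the dispatch idea only; the class and method names are invented, and real delivery would go over the IoT transport rather than a method call.

```python
class CentralControl:
    """Minimal sketch of the central control system: it knows every
    connected device by name and forwards operation instructions to the
    device whose name matches the target device name."""

    def __init__(self):
        self.devices = {}          # device name -> device object

    def register(self, name, device):
        """Record a device when it connects to the central control system."""
        self.devices[name] = device

    def dispatch(self, target_name, instruction):
        """Forward an operation instruction to the named target device."""
        device = self.devices.get(target_name)
        if device is None:
            raise KeyError(f"no connected device named {target_name!r}")
        device.receive(instruction)
```

The smart television only needs the target device name; the central control system resolves it to an actual device.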
Before playing, the target device checks whether the audiovisual file carries corresponding breakpoint information; if so, it loads the breakpoint information and asks the user whether playback should start from it. For example, after the user has watched a program for one minute and sent it to the integrated cooker, the cooker displays text such as "Played to one minute last time; continue watching?". If the user confirms, the cooker resumes from the 1-minute mark of the audiovisual file; if the user declines, it plays the file from the beginning. The user thus gets breakpoint resumption when switching playback between different devices.
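The resume decision on the target device reduces to a small rule, sketched here for clarity; the confirmation flag stands in for the user's response to the on-screen prompt.

```python
def resume_position(breakpoint_seconds, user_confirms):
    """Decide where playback starts on the target device: if breakpoint
    information exists and the user confirms, resume from it; otherwise
    play the audiovisual file from the beginning."""
    if breakpoint_seconds is not None and user_confirms:
        return breakpoint_seconds
    return 0
```

A missing breakpoint (None) skips the prompt entirely and plays from the start.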
In this embodiment the execution subject is the smart television and the target device is the integrated cooker, but the roles can be exchanged: the integrated cooker can be the execution subject and the smart television or another intelligent terminal the target device, so that the user can go on watching the desired program after finishing cooking.
Further, as shown in fig. 6, based on the above playing method, the present invention also provides an intelligent terminal, which includes a processor 10, a memory 20 and a display 30. Fig. 6 shows only some of the components of the smart terminal, but it should be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead.
The memory 20 may in some embodiments be an internal storage unit of the intelligent terminal, such as its hard disk or memory. In other embodiments it may be an external storage device provided on the intelligent terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card; further, it may comprise both an internal storage unit and an external storage device. The memory 20 is used to store the application software installed on the intelligent terminal and all kinds of data, such as the program code of the installed applications, and may also temporarily store data that has been or is about to be output. In one embodiment, the memory 20 stores a playback program 40, which the processor 10 can execute to implement the playing method of the present application.
The processor 10 may in some embodiments be a central processing unit (CPU), microprocessor or other data processing chip, used to run the program code stored in the memory 20 or to process data, for example to execute the playing method.
The display 30 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch panel, or the like in some embodiments. The display 30 is used for displaying information at the intelligent terminal and for displaying a visual user interface. The components 10-30 of the intelligent terminal communicate with each other via a system bus.
In one embodiment, when the processor 10 executes the playing program 40 in the memory 20, the following steps are implemented:
when a confirmation instruction for an audio-visual file is received, acquiring instruction information;
determining the name of the corresponding target device and the playing mode according to the instruction information;
and generating a corresponding playing instruction according to the audio-visual file and the playing mode, and sending the playing instruction to the target device corresponding to the target device name, so that the target device plays the audio-visual file in the playing mode.
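The three steps above can be sketched as a minimal dispatcher. This is an illustrative sketch only, not the patented implementation; the field names (`device`, `mode`, `file`, `target`) are assumptions, since the patent does not specify the instruction format.

```python
# Illustrative sketch of the claimed flow; all field names are assumptions.

def handle_confirmation(audio_visual_file: str, instruction_info: dict) -> dict:
    """On a confirmation instruction for an audio-visual file, resolve the
    target device and playing mode from the instruction information, then
    build the playing instruction to send to that device."""
    # Step 2: determine target device name and playing mode
    target_device = instruction_info["device"]
    play_mode = instruction_info["mode"]
    # Step 3: generate the corresponding playing instruction
    play_instruction = {
        "file": audio_visual_file,
        "mode": play_mode,
        "target": target_device,
    }
    return play_instruction
```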
Wherein the instruction information comprises gesture information, and the acquiring of the instruction information when a confirmation instruction for the audio-visual file is received includes:
when a confirmation instruction for the audio-visual file is received, capturing video of the current environment for a preset shooting duration to generate a gesture video;
and drawing a gesture track from the gesture video to generate the gesture information.
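"Drawing a gesture track from the gesture video" can be sketched as follows, assuming a hypothetical per-frame hand detector (not described in the patent) that yields one (x, y) position per frame: the track is the ordered sequence of detected positions, here with simple neighbour averaging to smooth detector jitter.

```python
def draw_gesture_track(frame_positions):
    """Build a gesture track from per-frame hand positions.

    frame_positions: iterable of (x, y) tuples, one per video frame;
    None entries (frames with no detected hand) are skipped.
    Returns the ordered, lightly smoothed list of trajectory points.
    """
    track = [p for p in frame_positions if p is not None]
    if not track:
        return []
    # Smooth each interior point by averaging it with its two neighbours
    smoothed = [track[0]]
    for prev, cur, nxt in zip(track, track[1:], track[2:]):
        smoothed.append(((prev[0] + cur[0] + nxt[0]) / 3,
                         (prev[1] + cur[1] + nxt[1]) / 3))
    if len(track) > 1:
        smoothed.append(track[-1])
    return smoothed
```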
Wherein the instruction information comprises voice information, and the acquiring of the instruction information when a confirmation instruction for the audio-visual file is received includes:
when a confirmation instruction for the audio-visual file is received, capturing audio from the direction corresponding to preset positioning information to generate the voice information.
Wherein the determining of the name of the corresponding target device and the playing mode according to the instruction information includes:
determining the gesture code corresponding to the gesture information according to a preset gesture list;
and determining the corresponding playing mode and target device according to the gesture code.
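The preset gesture list can be sketched as a table mapping gesture codes to (target device name, playing mode) pairs. The specific codes, device names, and modes below are invented for illustration; the patent does not enumerate them.

```python
# Hypothetical gesture list: gesture code -> (target device name, playing mode).
GESTURE_LIST = {
    "swipe_up": ("living-room TV", "video"),
    "swipe_left": ("bedroom speaker", "audio-only"),
    "circle": ("projector", "video"),
}

def resolve_gesture(gesture_code: str):
    """Return (device_name, play_mode) for a recognised gesture code,
    or None if the gesture is not in the preset list."""
    return GESTURE_LIST.get(gesture_code)
```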
Wherein the determining of the name of the corresponding target device and the playing mode according to the instruction information may also include:
performing voice recognition on the voice information according to a preset language type to generate text information;
and extracting keywords from the text information according to a preset keyword extraction rule to obtain the target device name and the playing mode.
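One simple extraction rule consistent with the description is to scan the recognised text for known device names and playing modes. The vocabularies below are assumptions for illustration; the patent does not specify the keyword extraction rule.

```python
# Assumed vocabularies; the actual preset rule is not specified in the patent.
KNOWN_DEVICES = ["living-room TV", "bedroom speaker", "projector"]
KNOWN_MODES = ["audio-only", "video", "mirror"]

def extract_target_and_mode(text: str):
    """Extract (device name, playing mode) keywords from recognised text;
    either element is None if no known keyword is found."""
    text_lower = text.lower()
    device = next((d for d in KNOWN_DEVICES if d.lower() in text_lower), None)
    mode = next((m for m in KNOWN_MODES if m.lower() in text_lower), None)
    return device, mode
```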
Wherein the language types include Mandarin Chinese, Chinese dialects, and foreign languages.
Wherein the generating of the corresponding playing instruction according to the audio-visual file and the playing mode, and the sending of the playing instruction to the target device corresponding to the target device name so that the target device plays the audio-visual file in the playing mode, include:
generating a corresponding operation instruction according to the audio-visual file and the playing mode;
and sending the operation instruction to a pre-connected central control system, the central control system forwarding the operation instruction to the target device according to the target device name, so that the target device plays the audio-visual file.
Wherein the generating of the corresponding operation instruction according to the audio-visual file and the playing mode includes:
writing the audio-visual file, the breakpoint information corresponding to the audio-visual file, and the playing mode into a preset blank file to generate the corresponding operation instruction.
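The preset blank file carrying the file reference, breakpoint, and playing mode could, for instance, be a JSON document; the file format and field names here are illustrative assumptions, as the patent only says the three items are written into a blank file. The breakpoint lets the target device resume playback from where the source device stopped.

```python
import json

def build_operation_instruction(path, av_file, breakpoint_s, play_mode):
    """Write the audio-visual file reference, its breakpoint (resume
    position, in seconds) and the playing mode into a preset blank file,
    producing the operation instruction sent to the central control system."""
    instruction = {
        "file": av_file,
        "breakpoint": breakpoint_s,  # position at which playback resumes
        "mode": play_mode,
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(instruction, f)
    return instruction
```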
The present invention also provides a storage medium, wherein the storage medium stores a playing program, and the playing program, when executed by a processor, implements the steps of the playing method described above.
Of course, it will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware (such as a processor, a controller, etc.), and the program may be stored in a computer readable storage medium, and when executed, the program may include the processes of the above method embodiments. The storage medium may be a memory, a magnetic disk, an optical disk, etc.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (10)

1. A playback method, comprising:
when a confirmation instruction for an audio-visual file is received, acquiring instruction information;
determining the name of the corresponding target device and the playing mode according to the instruction information;
and generating a corresponding playing instruction according to the audio-visual file and the playing mode, and sending the playing instruction to the target device corresponding to the target device name, so that the target device plays the audio-visual file in the playing mode.
2. The playback method according to claim 1, wherein the instruction information comprises gesture information, and the acquiring of the instruction information when a confirmation instruction for the audio-visual file is received includes:
when a confirmation instruction for the audio-visual file is received, capturing video of the current environment for a preset shooting duration to generate a gesture video;
and drawing a gesture track from the gesture video to generate the gesture information.
3. The playback method according to claim 1, wherein the instruction information comprises voice information, and the acquiring of the instruction information when a confirmation instruction for the audio-visual file is received includes:
when a confirmation instruction for the audio-visual file is received, capturing audio from the direction corresponding to preset positioning information to generate the voice information.
4. The playback method according to claim 2, wherein the determining, according to the instruction information, a name and a playback mode of the corresponding target device includes:
determining a gesture code corresponding to the gesture information according to a preset gesture list;
and determining a corresponding playing mode and a target device according to the gesture code.
5. The playback method according to claim 3, wherein the determining of the name of the corresponding target device and the playing mode according to the instruction information includes:
performing voice recognition on the voice information according to a preset language type to generate text information;
and extracting keywords from the text information according to a preset keyword extraction rule to obtain the target device name and the playing mode.
6. The playback method as claimed in claim 5, wherein the language types include Mandarin Chinese, Chinese dialects, and foreign languages.
7. The playback method according to any one of claims 1 to 6, wherein the generating of the corresponding playback instruction according to the audio-visual file and the playing mode, and the sending of the playback instruction to the target device corresponding to the target device name so that the target device plays the audio-visual file in the playing mode, include:
generating a corresponding operation instruction according to the audio-visual file and the playing mode;
and sending the operation instruction to a pre-connected central control system, the central control system forwarding the operation instruction to the target device according to the target device name, so that the target device plays the audio-visual file.
8. The playback method according to claim 7, wherein the generating of the corresponding operation instruction according to the audio-visual file and the playing mode comprises:
writing the audio-visual file, the breakpoint information corresponding to the audio-visual file, and the playing mode into a preset blank file to generate the corresponding operation instruction.
9. An intelligent terminal, characterized in that, intelligent terminal includes: memory, processor and a playback program stored on the memory and executable on the processor, the playback program realizing the steps of the playback method as claimed in any one of claims 1 to 8 when executed by the processor.
10. A computer-readable storage medium, characterized in that the storage medium stores a playback program, which when executed by a processor implements the steps of the playback method according to any one of claims 1 to 8.
CN202011031326.2A 2020-09-27 2020-09-27 Playing method, intelligent terminal and computer readable storage medium Pending CN114339331A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011031326.2A CN114339331A (en) 2020-09-27 2020-09-27 Playing method, intelligent terminal and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN114339331A 2022-04-12

Family

ID=81011975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011031326.2A Pending CN114339331A (en) 2020-09-27 2020-09-27 Playing method, intelligent terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114339331A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101192411A (en) * 2007-12-27 2008-06-04 北京中星微电子有限公司 Large distance microphone array noise cancellation method and noise cancellation system
CN104199552A (en) * 2014-09-11 2014-12-10 福州瑞芯微电子有限公司 Multi-screen display method, device and system
CN108520754A (en) * 2018-04-09 2018-09-11 广东思派康电子科技有限公司 A kind of noise reduction meeting machine
CN110730373A (en) * 2019-12-18 2020-01-24 南京创维信息技术研究院有限公司 Method and system for pushing videos across screens among devices with screens
CN111176431A (en) * 2019-09-23 2020-05-19 广东小天才科技有限公司 Screen projection control method of sound box and sound box


Similar Documents

Publication Publication Date Title
AU2020203023B2 (en) Intelligent automated assistant for TV user interactions
US9582246B2 (en) Voice-command suggestions based on computer context
US9489171B2 (en) Voice-command suggestions based on user identity
US20150256873A1 (en) Relayed voice control of devices
US20140006022A1 (en) Display apparatus, method for controlling display apparatus, and interactive system
KR102147329B1 (en) Video display device and operating method thereof
EP1160664A2 (en) Agent display apparatus displaying personified agent for selectively executing process
CN103686200A (en) Intelligent television video resource searching method and system
CN111462744A (en) Voice interaction method and device, electronic equipment and storage medium
US10503776B2 (en) Image display apparatus and information providing method thereof
CN116847131A (en) Play control method, device, remote controller, play system and storage medium
CN109564758A (en) Electronic equipment and its audio recognition method
CN114339331A (en) Playing method, intelligent terminal and computer readable storage medium
KR102667407B1 (en) Display apparatus for performing a voice control and method thereof
EP3905707A1 (en) Display device and operating method thereof
CN117672216A (en) Large-screen voice recognition method based on intelligent peripheral

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination