CN107291416B - Audio playing method, system and terminal equipment - Google Patents

Publication number
CN107291416B
CN107291416B
Authority
CN
China
Prior art keywords
user
playing
audio
state
audio information
Prior art date
Legal status
Active
Application number
CN201710468864.XA
Other languages
Chinese (zh)
Other versions
CN107291416A (en)
Inventor
裴曾妍
Current Assignee
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd
Priority to CN201710468864.XA
Publication of CN107291416A
Application granted
Publication of CN107291416B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16: Sound input; Sound output
    • G06F 3/165: Management of the audio stream, e.g. setting of volume, audio stream path

Abstract

The invention is applicable to the technical field of communications and provides an audio playing method, an audio playing system and a terminal device. The audio playing method comprises the following steps: detecting the current state of a user and pushing an audio playing prompt according to the current state of the user, wherein the current state of the user comprises the user's own state and/or the state of the user's current environment; after receiving the user's confirmation to play audio information, separating the audio information within a preset time period from the currently played video file, wherein the video file comprises video information and audio information; and playing the separated audio information in a screen-off state. In this process, the audio playing prompt is pushed to the user according to the detected current state of the user and the audio is then played, which effectively prevents the user from continuing to watch video in a state unsuitable for watching video and thereby harming their health, and improves the user experience.

Description

Audio playing method, system and terminal equipment
Technical Field
The invention belongs to the technical field of communication, and particularly relates to an audio playing method, an audio playing system and terminal equipment.
Background
With the continuous emergence of new functions on intelligent devices, more and more users are beginning to acquire information through them. For example, a user can obtain and play various teaching resources from the Internet through the Internet function of a family education machine, and can even attend remote lessons through it. However, watching video on any intelligent device for a long time causes visual fatigue. Moreover, in a video file played by an intelligent device, some of the content does not need to be watched at all; listening to the audio is sufficient. Other content, however, requires video and audio together to achieve the desired effect.
At present, intelligent devices cannot automatically choose between playing audio alone and playing video and audio together according to the state of the user.
Disclosure of Invention
In view of this, embodiments of the present invention provide an audio playing method, an audio playing system and a terminal device, so as to solve the problem in the prior art that an intelligent device cannot play only the audio of a video file according to the current state of the user.
A first aspect of an embodiment of the present invention provides an audio playing method, where the audio playing method includes:
detecting the current state of a user, and pushing an audio playing prompt according to the current state of the user; the current state of the user comprises the user's own state and/or the state of the user's current environment;
after receiving the user's confirmation to play audio information, separating the audio information within a preset time period from the currently played video file, wherein the video file comprises video information and audio information;
and playing the separated audio information in a screen-off state.
A second aspect of an embodiment of the present invention provides an audio playing system, where the audio playing system includes:
the prompt information pushing unit is used for detecting the current state of a user and pushing an audio playing prompt according to the current state of the user; the current state of the user comprises the user's own state and/or the state of the user's current environment;
the audio information separation unit is used for separating, after receiving the user's confirmation to play audio information, the audio information within a preset time period from the currently played video file, wherein the video file comprises video information and audio information;
and the audio information playing unit is used for playing the separated audio information in a screen-off state.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of any one of the audio playing methods when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of any one of the audio playing methods described above.
In the implementation of the invention, the current state of the user is detected so that an audio playing prompt can be pushed to the user according to the detected current state; after the user confirms that the audio should be played, the audio information within a preset time period is separated from the current playing position of the currently played video file, and the separated audio information is played in a screen-off state. In this process, the audio playing prompt is pushed and the audio is played according to the detected current state of the user, which effectively prevents the user from continuing to watch video in a state unsuitable for watching video and thereby harming their health, and improves the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a schematic flowchart illustrating an implementation process of an audio playing method according to an embodiment of the present invention;
fig. 2 is a block diagram of an audio playing system according to a second embodiment of the present invention;
fig. 3 is a schematic diagram of a terminal device according to a third embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
The first embodiment is as follows:
fig. 1 shows a flowchart of an implementation of an audio playing method according to an embodiment of the present invention, which is detailed as follows:
step S11, detecting the current state of the user, and pushing an audio playing prompt according to the current state of the user; the current state of the user comprises the self state of the user or/and the current environment state of the user;
in the embodiment of the invention, the user may not be suitable for watching the video file by using the intelligent device due to various factors, for example, when the user feels fatigue, the user may not perceive the fatigue for a light degree; for example, when the user is in an ambulatory state; for example, when the user is in an environment with poor light, the user is not suitable to continue watching the video; therefore, in the embodiment of the invention, the current state of the user is detected firstly, and whether the prompt of audio playing needs to be pushed to the user or not is judged according to the detected current state of the user. Specifically, the current fatigue degree of a user is detected, and when the user is detected to be in a fatigue state, an audio playing prompt is pushed to wait for user feedback; or/and detecting the position change condition of the user by an intelligent equipment positioning system, if the position change speed of the user is detected to exceed a preset value, judging that the user is in a walking state which is not suitable for watching the video file at present, and pushing an audio playing prompt to the user at the moment to wait for the feedback of the user; or/and detecting people stream information in the environment where the user is located through a sensor, if the detected people stream around the user is larger than a preset value, judging that the user is currently located in an area with large people flow, and pushing an audio playing prompt to the user at the moment to wait for feedback of the user; optionally, people stream information in the environment where the user is located is detected to be combined with a positioning system of the intelligent device, and the position of the current user is determined, for example, the user may be located on a road with a large people stream or in a subway station, and the positions and the current environment are not suitable for the user to continuously watch the video file, at this time, an audio playing prompt is pushed to the user, and the user feedback is waited; or/and detecting the current environment state of the user through a brightness sensor, and pushing an audio playing prompt to wait for feedback of the user when detecting that the current environment brightness is not suitable for the user to watch the video file.
Optionally, before the step S11, the method includes:
judging whether the subject corresponding to the currently played video file is a designated subject, wherein the designated subjects include subjects that involve little blackboard writing;
when the subject corresponding to the currently played video file is a designated subject, judging whether the playing time length of the currently played video file is greater than a preset time length threshold value;
and when the playing time length of the currently played video file is greater than a preset time length threshold, executing the step S11.
When a user uses the intelligent device to learn a subject that involves little blackboard writing (such as Chinese), the user may need to watch the video file for a long time to finish one chapter. Therefore, the intelligent device may detect the current state of the user only after the user has watched the video file for a certain time, push an audio playing prompt to the user according to the detection result, and wait for the user's feedback.
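A minimal sketch of this optional pre-check, in Python. The subject names and the 30-minute threshold are assumptions used only to make the check concrete; detect_and_push stands for step S11.

    DESIGNATED_SUBJECTS = {"chinese"}       # assumed subjects with little blackboard writing
    PLAY_DURATION_THRESHOLD_S = 30 * 60     # assumed preset duration threshold (30 minutes)

    def maybe_run_state_detection(subject, played_seconds, detect_and_push):
        # Only run step S11 for a designated subject that has already played long enough.
        if subject.lower() in DESIGNATED_SUBJECTS and played_seconds > PLAY_DURATION_THRESHOLD_S:
            detect_and_push()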
Preferably, the detecting a current state of a user and pushing an audio play prompt according to the current state specifically includes:
detecting the current fatigue degree of a user, and pushing an audio playing prompt according to the current fatigue degree of the user;
and/or,
and detecting the brightness of the current environment of the user, and pushing an audio playing prompt according to the brightness.
Specifically, when the user has been watching a video file for too long, the user's eyes may become dry or watery and the blinking frequency may deviate from its normal value, so the user's current fatigue degree can be determined by detecting the fatigue degree of the user's eyes, and an audio playing prompt is pushed to the user when the current eye fatigue degree exceeds a preset value; the current eye fatigue degree of the user can also be determined from the length of time the user has been watching the video. And/or, because an environment that is too dark or too bright damages the user's eyes to a certain extent, and the user usually cannot accurately tell whether the ambient brightness is within the range suitable for watching video, the brightness of the user's current environment can be detected by a brightness sensor and compared with the brightness range suitable for watching video; if the detected brightness is outside that range, an audio playing prompt is pushed to the user.
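One possible way to turn the blink-frequency deviation and viewing duration mentioned above into a single eye-fatigue score is sketched below. The normal blink rate, the weights and the prompt threshold are illustrative assumptions; the patent only states that a prompt is pushed when the eye fatigue degree exceeds a preset value.

    NORMAL_BLINKS_PER_MIN = 17    # assumed typical blink rate
    EYE_FATIGUE_LIMIT = 0.5       # assumed score above which a prompt is pushed

    def eye_fatigue_score(blinks_per_min, watched_minutes):
        # Deviation of the blink rate from its normal value, plus a viewing-duration factor.
        blink_deviation = abs(blinks_per_min - NORMAL_BLINKS_PER_MIN) / NORMAL_BLINKS_PER_MIN
        duration_factor = min(watched_minutes / 120.0, 1.0)   # saturates at two hours
        return 0.6 * blink_deviation + 0.4 * duration_factor

    def should_prompt_for_eye_fatigue(blinks_per_min, watched_minutes):
        return eye_fatigue_score(blinks_per_min, watched_minutes) > EYE_FATIGUE_LIMIT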
Optionally, before the detecting the current fatigue degree of the user and pushing an audio playing prompt according to the current fatigue degree of the user, the method includes:
detecting and recording the length of time the user watches video within a preset time period and the user's own state at the corresponding watching durations;
and establishing a relation between the user's video watching duration and the user's fatigue degree grade according to the user's own state at different watching durations.
Specifically, when the audio playing prompt is pushed according to the user's current fatigue degree (for example, according to the user's eye fatigue degree), the eye fatigue degree corresponding to a given viewing duration is first detected and recorded, since there is a certain relation between the user's eye fatigue degree and how long the user has watched video. For example, according to the user's viewing habits, the eye fatigue degree after continuously watching video for 1 hour is counted several times between 9:00 and 10:00 (or another time period) each day under the same environmental conditions, and the eye fatigue degree after continuously watching video for 2 hours is counted several times between 9:00 and 11:00 (or another time period) each day. A relation between the user's viewing duration and the user's eye fatigue degree grade is then determined from the statistics, and the user's current fatigue degree grade is determined from the eye fatigue degree grade. Optionally, the eye fatigue degree grade is taken directly as the user's current fatigue degree grade. Of course, since the user's current fatigue degree is related not only to eye fatigue but possibly also to discomfort in other organs, fatigue degree grades for other organs can also be determined and added to the eye fatigue degree grade to obtain the user's current fatigue degree grade. For example, the degree to which the user's reaction speed has decreased may be detected and combined with the eye fatigue degree to determine the user's current fatigue degree.
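The recorded statistics can be kept as a simple lookup table from viewing duration to fatigue grade, as in the sketch below. The table boundaries and the additive combination with other indicators are assumptions; the patent only requires that such a relation be established and that other fatigue grades may be added to the eye fatigue grade.

    # (upper bound of continuous viewing in minutes, fatigue grade) pairs from recorded data
    FATIGUE_TABLE = [(30, 0), (60, 1), (90, 2), (120, 3)]

    def fatigue_grade_from_duration(watched_minutes):
        for upper_bound, grade in FATIGUE_TABLE:
            if watched_minutes <= upper_bound:
                return grade
        return FATIGUE_TABLE[-1][1] + 1   # beyond the recorded range: highest grade

    def current_fatigue_grade(eye_grade, other_grades=()):
        # Other indicators (e.g. a drop in reaction speed) may be graded and added in.
        return eye_grade + sum(other_grades)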
Step S12, after receiving the confirmation of playing audio information from the user, separating the audio information in the preset time period in the currently played video file, wherein the video file comprises video information and audio information;
in the embodiment of the invention, after receiving an audio playing prompt pushed by intelligent equipment, a user confirms the content of the currently played position of a video file, if the video content from the current position to a certain period of time only needs the user to listen to audio information, according to a preset time period, the audio information corresponding to the video information in the currently played video file in the preset time period is separated from the played position of the current video file; the video file comprises video information and audio information. For example, when a user watches a certain chapter of a video learning language subject by using a terminal device, the user receives audio playing prompt information, at this time, the user checks whether the position where a video related to the currently played language subject is played can obtain knowledge information contained in the video only by listening to the audio information, if so, the user sends and confirms the audio information to be played, and the intelligent device starts to extract the audio information; specifically, when the audio information is extracted, the audio information is extracted according to a preset time period, when the preset time period is half a hour, the audio information corresponding to the video information is extracted every time within half an hour, and then the audio information within the next half an hour is extracted from the position where the video information in the current video file is played. Of course, the user may set the preset time period to be other time periods according to the characteristics of the video file watched by the user, which is not limited herein.
And step S13, playing the separated audio information in the screen-off state.
In the embodiment of the invention, after the user's confirmation to play the audio is received, the audio information corresponding to the video information within the preset time period is separated, playing of the video information of the current video file is stopped, and the separated audio information is played in a screen-off state. When only the audio of part of the video file is played, playing it in a screen-off state not only relieves the user's fatigue but also saves the power of the intelligent device and prolongs its service life.
Optionally, after playing the separated audio information in the screen-off state, the method includes:
receiving a video playing instruction sent by a user, and ending the playing of the audio information;
and calling a complete video file corresponding to the audio information, and starting to play the corresponding video file from the position where the audio information stops playing.
Specifically, when the intelligent device is playing audio information and the user wants the device to play video information again, the user can stop the audio playing by double-clicking the screen of the intelligent device (the user may also send the instruction to stop playing the audio information through another preset switching action), after which the device returns to the video playing page. When the audio playing stops, the position at which it stopped is recorded, the corresponding complete video file is called up, and the video information is played from the position corresponding to where the audio playing stopped.
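A sketch of this switch back to video playback: the position where the audio stopped is recorded and the complete video file is resumed from that position. The player object and its methods are hypothetical stand-ins for whatever media framework the device actually uses.

    class PlaybackSwitcher:
        def __init__(self, player):
            self.player = player
            self.audio_stop_position = 0.0

        def on_video_play_instruction(self, video_path):
            # End the audio playing and remember where it stopped.
            self.audio_stop_position = self.player.current_position()
            self.player.stop()
            # Call up the complete video file and continue from the same position.
            self.player.open(video_path)
            self.player.seek(self.audio_stop_position)
            self.player.play()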
Optionally, during audio playing the intelligent device senses changes in the user's gestures to stop, start, fast-forward or rewind the audio. Specifically, gestures can be preset to control the audio playing while the intelligent device is in the screen-off state: for example, tapping the screen in the screen-off state stops the audio playing and tapping it again restarts it; sliding up on the screen rewinds the audio, and sliding down fast-forwards it; and so on.
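The gesture mapping described above could look like the sketch below. The gesture names, the player interface and the 15-second seek step are assumptions; only the mapping itself (tap toggles stop/start, slide up rewinds, slide down fast-forwards) follows the text.

    SEEK_STEP_S = 15   # assumed seek step for fast-forward and rewind

    def handle_screen_off_gesture(gesture, player):
        if gesture == "tap":
            if player.is_playing():
                player.pause()          # first tap stops the audio
            else:
                player.resume()         # tapping again restarts it
        elif gesture == "slide_up":
            player.seek(player.current_position() - SEEK_STEP_S)   # rewind
        elif gesture == "slide_down":
            player.seek(player.current_position() + SEEK_STEP_S)   # fast-forward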
Optionally, after playing the separated audio information in the screen-off state, the method includes:
and continuously detecting the state of the user, and stopping playing the audio information when the user is detected to be in the sleep state.
Specifically, when the user uses the intelligent device at night, the user may easily fall asleep during use. The intelligent device of the embodiment of the invention therefore continuously detects the user's own state while playing the audio, in order to judge whether the user is asleep. This can be judged by monitoring the user's eye movements; for example, if no blinking is detected within a preset time, the user is judged to be asleep. If the user is detected to be asleep, the audio playing is stopped automatically and the intelligent device is switched to a standby or power-off state. Stopping the audio playing and entering the standby or power-off state when the user is detected to be asleep saves power and also allows the user to rest better, which improves the user experience.
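A minimal sketch of the sleep check: if no blink has been detected within a preset time, the user is treated as asleep, the audio is stopped and the device enters standby. The blink-timestamp source, the timeout value and the power-control calls are hypothetical.

    import time

    NO_BLINK_TIMEOUT_S = 120   # assumed preset time with no blink before judging "asleep"

    def check_sleep_and_stop(last_blink_timestamp, player, device):
        if time.time() - last_blink_timestamp > NO_BLINK_TIMEOUT_S:
            player.stop()              # automatically stop the audio playing
            device.enter_standby()     # or power off, as described in the embodiment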
In the implementation of the invention, the current state of the user is detected so that an audio playing prompt can be pushed according to the detected state; after the user confirms that the audio should be played, the audio information within the preset time period is separated from the current playing position of the currently played video file, and the separated audio information is played in a screen-off state. In this process, an audio playing prompt is pushed to the user when it is detected that the user is fatigued or that the current environment is unsuitable for continuing to watch video, which effectively prevents the user from continuing to watch video under such conditions and harming their health. Meanwhile, if the user wants to return to video playing while the audio information is playing, the user can operate the intelligent device with the preset switching action to play the video information from the position where the current audio playing stopped, thereby switching freely between audio playing and video playing and improving the user experience.
Example two:
fig. 2 shows a block diagram of an audio playing system provided in an embodiment of the present invention, which corresponds to the audio playing method described in the above embodiment, and for convenience of description, only the relevant parts related to the embodiment of the present invention are shown.
Referring to fig. 2, the audio playing system includes: a prompt information pushing unit 21, an audio information separating unit 22 and an audio information playing unit 23; wherein:
the prompt information pushing unit 21 is configured to detect a current state of a user, and push an audio playing prompt according to the current state of the user; the current state of the user comprises the self state of the user or/and the current environment state of the user;
in the embodiment of the invention, the user may not be suitable for watching various video files by using the intelligent device due to various factors, for example, when the user feels fatigue, the user may not perceive the user with slight fatigue; for example, when the user is in an ambulatory state; for example, when the user is in an environment with poor light, the user is not suitable to continue watching the video; therefore, in the embodiment of the invention, the current state of the user is detected firstly, and whether the prompt of audio playing needs to be pushed to the user or not is judged according to the detected current state of the user. Specifically, the current fatigue degree of a user is detected, and when the user is detected to be in a fatigue state, an audio playing prompt is pushed to wait for user feedback; or/and detecting the position change condition of the user by an intelligent equipment positioning system, if the position change speed of the user is detected to exceed a preset value, judging that the user is in a walking state which is not suitable for watching the video file at present, and pushing an audio playing prompt to the user at the moment to wait for the feedback of the user; or/and detecting people stream information in the environment where the user is located through a sensor, if the detected people stream around the user is larger than a preset value, judging that the user is currently located in an area with large people flow, and pushing an audio playing prompt to the user at the moment to wait for feedback of the user; optionally, people stream information in the environment where the user is located is detected to be combined with a positioning system of the intelligent device, the position of the current user is determined, and if the user is possibly located on a road with a large people stream or in a subway station, the positions and the current environment are not suitable for the user to continuously watch the video file, at the moment, an audio playing prompt is pushed to the user to wait for feedback of the user; or/and detecting the current environment state of the user through a brightness sensor, and pushing an audio playing prompt to wait for feedback of the user when detecting that the current environment brightness is not suitable for the user to watch the video file.
Optionally, the audio playing system further includes:
the judging unit is used for judging whether the subject corresponding to the currently played video file is a designated subject, wherein the designated subjects include subjects that involve little blackboard writing; when the subject corresponding to the currently played video file is a designated subject, judging whether the playing time length of the currently played video file is greater than a preset time length threshold; and when the playing time length of the currently played video file is greater than the preset time length threshold, executing the steps performed by the prompt information pushing unit 21.
When a user uses the intelligent device to learn a subject that involves little blackboard writing (such as Chinese), the user may need to watch the video file for a long time to finish one chapter. Therefore, the intelligent device may detect the current state of the user only after the user has watched the video file for a certain time, push an audio playing prompt to the user according to the detection result, and wait for the user's feedback.
Preferably, the prompt information pushing unit 21 specifically includes:
the fatigue degree detection module is used for detecting the current fatigue degree of a user and pushing an audio playing prompt according to the current fatigue degree of the user;
and the environment brightness detection module is used for detecting the brightness of the current environment of the user and pushing an audio playing prompt according to the brightness.
Specifically, when the user has been watching a video file for too long, the user's eyes may become dry or watery and the blinking frequency may deviate from its normal value, so the user's current fatigue degree can be determined by detecting the fatigue degree of the user's eyes, and an audio playing prompt is pushed to the user when the current eye fatigue degree exceeds a preset value; the current eye fatigue degree of the user can also be determined from the length of time the user has been watching the video. And/or, because an environment that is too dark or too bright damages the user's eyes to a certain extent, and the user usually cannot accurately tell whether the ambient brightness is within the range suitable for watching video, the brightness of the user's current environment can be detected by a brightness sensor and compared with the brightness range suitable for watching video; if the detected brightness is outside that range, an audio playing prompt is pushed to the user.
Optionally, the audio playing system further includes:
the fatigue degree grade determining unit is used for detecting and recording the video watching time length of the user in a preset time period and the self state of the user under the corresponding time length; and establishing a relation between the video watching time of the user and the fatigue degree grade of the user according to the self state of the user under different time lengths.
Specifically, when the audio playing prompt is pushed according to the user's current fatigue degree (for example, according to the user's eye fatigue degree), the eye fatigue degree corresponding to a given viewing duration is first detected and recorded, since there is a certain relation between the user's eye fatigue degree and how long the user has watched video. A relation between the user's viewing duration and the user's eye fatigue degree grade is then determined from the statistics, and the user's current fatigue degree grade is determined from the eye fatigue degree grade. Optionally, the eye fatigue degree grade is taken directly as the user's current fatigue degree grade. Of course, since the user's current fatigue degree is related not only to eye fatigue but possibly also to discomfort in other organs, fatigue degree grades for other organs can also be determined and added to the eye fatigue degree grade to obtain the user's current fatigue degree grade. For example, the degree to which the user's reaction speed has decreased may be detected and combined with the eye fatigue degree to determine the user's current fatigue degree.
The audio information separation unit 22 is configured to, after receiving the user confirmation to play the audio information, separate the audio information within a preset time period in a currently played video file, where the video file includes video information and audio information;
in the embodiment of the invention, after receiving an audio playing prompt pushed by intelligent equipment, a user confirms the content of the currently played position of a video file, if the video content from the current position to a certain period of time only needs the user to listen to audio information, according to a preset time period, the audio information corresponding to the video information in the currently played video file in the preset time period is separated from the played position of the current video file; the video file comprises video information and audio information. For example, when a user watches a certain chapter of a video learning language subject by using a terminal device, the user receives audio playing prompt information, at this time, the user checks whether the position where a video related to the currently played language subject is played needs to listen to the audio information to obtain the knowledge information contained in the video, if so, the user sends confirmation playing audio information, the intelligent device starts to extract the audio information, specifically, when the audio information is extracted, the audio information is extracted according to a preset time period, when the preset time period is half hour, the audio information within half hour corresponding to the video information is extracted every time, and then the audio information within the next half hour is extracted from the position where the video information is played in the current video file. Of course, the user may set the preset time period to be other time periods according to the characteristics of the video file watched by the user, which is not limited herein.
And an audio information playing unit 23, configured to play the separated audio information in the screen-off state.
In the embodiment of the invention, after the user's confirmation to play the audio is received, the audio information corresponding to the video information within the preset time period is separated, playing of the video information of the current video file is stopped, and the separated audio information is played in a screen-off state. When only the audio of part of the video file is played, playing it in a screen-off state not only relieves the user's fatigue but also saves the power of the intelligent device and prolongs its service life.
Optionally, the audio playing system further includes:
the video playing switching unit is used for receiving a video playing instruction sent by a user and ending the playing of the audio information; and calling a complete video file corresponding to the audio information, and starting to play the corresponding video file from the position where the audio information stops playing.
Specifically, when the intelligent device is playing audio information and the user wants the device to play video information again, the user can stop the audio playing by double-clicking the screen of the intelligent device (the user may also send the instruction to stop playing the audio information through another preset switching action), after which the device returns to the video playing page. When the audio playing stops, the position at which it stopped is recorded, the corresponding complete video file is called up, and the video information is played from the position corresponding to where the audio playing stopped.
Optionally, during audio playing the intelligent device senses changes in the user's gestures to stop, start, fast-forward or rewind the audio. Specifically, gestures can be preset to control the audio playing while the intelligent device is in the screen-off state: for example, tapping the screen in the screen-off state stops the audio playing and tapping it again restarts it; sliding up on the screen rewinds the audio, and sliding down fast-forwards it; and so on.
Optionally, the audio playing system further includes:
and the playing stopping unit is used for continuously detecting the self state of the user and stopping the playing of the audio information when the user is detected to be in the sleep state.
Specifically, when the user uses the intelligent device at night, the user may easily fall asleep during use. The intelligent device of the embodiment of the invention therefore continuously detects the user's own state while playing the audio, in order to judge whether the user is asleep. This can be judged by monitoring the user's eye movements; for example, if no blinking is detected within a preset time, the user is judged to be asleep. If the user is detected to be asleep, the audio playing is stopped automatically and the intelligent device is switched to a standby or power-off state. Stopping the audio playing and entering the standby or power-off state when the user is detected to be asleep saves power and also allows the user to rest better, which improves the user experience.
In the implementation of the invention, the current state of the user is detected so that an audio playing prompt can be pushed according to the detected state; after the user confirms that the audio should be played, the audio information within the preset time period is separated from the current playing position of the currently played video file, and the separated audio information is played in a screen-off state. In this process, an audio playing prompt is pushed to the user when it is detected that the user is fatigued or that the current environment is unsuitable for continuing to watch video, which effectively prevents the user from continuing to watch video under such conditions and harming their health. Meanwhile, if the user wants to return to video playing while the audio information is playing, the user can operate the intelligent device with the preset switching action to play the video information from the position where the current audio playing stopped, thereby switching freely between audio playing and video playing and improving the user experience.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Example three:
fig. 3 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 3, the terminal device 3 of this embodiment includes: a processor 30, a memory 31 and a computer program 32 stored in the memory 31 and executable on the processor 30. The processor 30, when executing the computer program 32, implements the steps in the above-described embodiments of the audio playing method, such as steps S11 to S13 shown in fig. 1. Alternatively, the processor 30, when executing the computer program 32, implements the functions of the modules/units in the above-described system embodiments, such as the functions of the units 21 to 23 shown in fig. 2.
Illustratively, the computer program 32 may be partitioned into one or more modules/units that are stored in the memory 31 and executed by the processor 30 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 32 in the terminal device 3. For example, the computer program 32 may be divided into a prompt information pushing unit, an audio information separating unit, and an audio information playing unit, and the specific functions of each unit are as follows:
the prompt information pushing unit is used for detecting the current state of a user and pushing an audio playing prompt according to the current state of the user; the current state of the user comprises the user's own state and/or the state of the user's current environment;
preferably, the prompt information pushing unit specifically includes:
the fatigue degree detection module is used for detecting the current fatigue degree of a user and pushing an audio playing prompt according to the current fatigue degree of the user;
and the environment brightness detection module is used for detecting the brightness of the current environment of the user and pushing an audio playing prompt according to the brightness.
The audio information separation unit is used for separating audio information in a preset time period in a currently played video file after receiving the confirmation of playing the audio information by a user, wherein the video file comprises the video information and the audio information;
and the audio information playing unit is used for playing the separated audio information in the screen-off state.
Optionally, the audio playing system further includes:
the fatigue degree grade determining unit is used for detecting and recording the video watching time length of the user in a preset time period and the self state of the user under the corresponding time length; and establishing a relation between the video watching time of the user and the fatigue degree grade of the user according to the self state of the user under different time lengths.
Optionally, the audio playing system further includes:
the video playing switching unit is used for receiving a video playing instruction sent by a user and ending the playing of the audio information; and calling a complete video file corresponding to the audio information, and starting to play the corresponding video file from the position where the audio information stops playing.
Optionally, the audio playing system further includes:
and the playing stopping unit is used for continuously detecting the self state of the user and stopping the playing of the audio information when the user is detected to be in the sleep state.
The terminal device 3 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 30 and the memory 31. It will be understood by those skilled in the art that fig. 3 is only an example of the terminal device 3 and does not constitute a limitation of the terminal device 3, which may include more or fewer components than shown, combine certain components, or use different components; for example, the terminal device may also include input and output devices, a network access device, a bus, and so on.
The processor 30 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 31 may be an internal storage unit of the terminal device 3, such as a hard disk or a memory of the terminal device 3. The memory 31 may also be an external storage device of the terminal device 3, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 3. Further, the memory 31 may also include both an internal storage unit and an external storage device of the terminal device 3. The memory 31 is used for storing the computer program and other programs and data required by the terminal device. The memory 31 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods of the embodiments of the present invention may also be implemented by a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, it implements the steps of the method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. An audio playing method, characterized in that the audio playing method comprises:
when the playing time of a currently played video file is greater than a preset time threshold, detecting the current state of a user, and pushing an audio playing prompt according to the current state of the user; the current state of the user comprises the user's own state and/or the state of the user's current environment;
after receiving the confirmation of playing the audio information by the user, separating the audio information in a preset time period in the currently played video file, wherein the video file comprises the video information and the audio information;
and playing the separated audio information within the preset time period in a screen-off state, and controlling stop, start, fast-forward and rewind operations of the audio playing by sensing changes in user gestures while the audio information is being played.
2. The audio playing method according to claim 1, wherein the detecting a current state of a user and pushing an audio playing prompt according to the current state specifically includes:
detecting the current fatigue degree of a user, and pushing an audio playing prompt according to the current fatigue degree of the user;
and/or,
and detecting the brightness of the current environment of the user, and pushing an audio playing prompt according to the brightness.
3. The audio playing method according to claim 2, wherein before the detecting the current fatigue level of the user and pushing the audio playing prompt according to the current fatigue level of the user, the method comprises:
detecting and recording the video watching time length of a user in a preset time period and the self state of the user under the corresponding time length;
and establishing a relation between the video watching time of the user and the fatigue degree grade of the user according to the self state of the user under different time lengths.
4. The audio playing method according to claim 1, wherein after playing the separated audio information in the screen-off state, the method comprises:
receiving a video playing instruction sent by a user, and ending the playing of the audio information;
and calling a complete video file corresponding to the audio information, and starting to play the corresponding video file from the position where the audio information stops playing.
5. The audio playing method according to claim 1, wherein after playing the separated audio information in the screen-off state, the method comprises:
and continuously detecting the state of the user, and stopping playing the audio information when the user is detected to be in the sleep state.
6. An audio playback system, comprising:
the device comprises a prompt information pushing unit, an audio information separation unit and an audio information playing unit, wherein the prompt information pushing unit is used for detecting the current state of a user when the playing time of a currently played video file is greater than a preset time threshold, and pushing an audio playing prompt according to the current state of the user; the current state of the user comprises the user's own state and/or the state of the user's current environment;
the audio information separation unit is used for separating audio information in a preset time period in a currently played video file after receiving the confirmation of playing the audio information by a user, wherein the video file comprises the video information and the audio information;
and the audio information playing unit is used for playing the separated audio information within the preset time period in a screen-off state, and controlling stop, start, fast-forward and rewind operations of the audio playing by sensing changes in user gestures while the audio information is being played.
7. The audio playing system of claim 6, wherein the prompt information pushing unit specifically comprises:
the fatigue degree detection module is used for detecting the current fatigue degree of a user and pushing an audio playing prompt according to the current fatigue degree of the user;
and the environment brightness detection module is used for detecting the brightness of the current environment of the user and pushing an audio playing prompt according to the brightness.
8. The audio playback system of claim 7, further comprising:
the fatigue degree grade determining unit is used for detecting and recording the video watching time length of the user in a preset time period and the self state of the user under the corresponding time length; and establishing a relation between the video watching time of the user and the fatigue degree grade of the user according to the self state of the user under different time lengths.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
CN201710468864.XA 2017-06-20 2017-06-20 Audio playing method, system and terminal equipment Active CN107291416B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710468864.XA CN107291416B (en) 2017-06-20 2017-06-20 Audio playing method, system and terminal equipment

Publications (2)

Publication Number Publication Date
CN107291416A CN107291416A (en) 2017-10-24
CN107291416B 2021-02-12

Family

ID=60096857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710468864.XA Active CN107291416B (en) 2017-06-20 2017-06-20 Audio playing method, system and terminal equipment

Country Status (1)

Country Link
CN (1) CN107291416B (en)

Also Published As

Publication number Publication date
CN107291416A (en) 2017-10-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant