CN105577947B - Control method and electronic device - Google Patents


Info

Publication number
CN105577947B
Authority
CN
China
Prior art keywords: video, instruction, mode, audio, file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510959123.2A
Other languages
Chinese (zh)
Other versions
CN105577947A (en)
Inventor
段利军
陈实
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201510959123.2A
Publication of CN105577947A
Application granted
Publication of CN105577947B
Legal status: Active
Anticipated expiration

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04M — TELEPHONIC COMMUNICATION
    • H04M 1/00 — Substation equipment, e.g. for use by subscribers
    • H04M 1/72 — Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/725 — Cordless telephones
    • H04M 1/73 — Battery saving arrangements


Abstract

The present disclosure relates to a control method applied to an electronic device, including: acquiring a first file, where the first file includes audio data and video data associated with the audio data and can be played through an application program of the electronic device; receiving a first instruction; and judging whether the first instruction meets a first predetermined condition, where, if the first instruction meets the first predetermined condition, the first file is played in a first mode in which the audio data is decoded, an audio signal corresponding to the decoded audio data is output, and output of a video signal corresponding to the video data is prohibited.

Description

Control method and electronic device
Technical Field
The present disclosure relates to a control method and an electronic device, and more particularly, to a control method and an electronic device capable of saving power consumption.
Background
With the wide popularization of electronic products such as tablet computers and smartphones, users often use these products to watch videos. In certain scenarios, a user may not need to view images on the display screen of such a product but may only need to hear sound from its speaker. For example, a user may take notes in a notebook while a smartphone plays a teaching video; in this case the smartphone need not display the video but only play the corresponding sound. Conventionally, however, video and sound are output synchronously in this situation, wasting the smartphone's computing resources and power. Conversely, if the user turns off the smartphone's display screen or returns to the standby page, the sound stops together with the video, so the user's diverse use requirements cannot be satisfied.
Disclosure of Invention
An object of the present disclosure is to provide a control method and an electronic device that substantially obviate one or more problems due to limitations and disadvantages of the related art.
According to an aspect of the present disclosure, there is provided a control method applied to an electronic device, including: acquiring a first file, where the first file includes audio data and video data associated with the audio data and can be played through an application program of the electronic device; receiving a first instruction; and judging whether the first instruction meets a first predetermined condition, where, if the first instruction meets the first predetermined condition, the first file is played in a first mode in which the audio data is decoded, an audio signal corresponding to the decoded audio data is output, and output of a video signal corresponding to the video data is prohibited.
According to another aspect of the present disclosure, there is provided an electronic device including: a display unit configured to output a video signal; an audio unit configured to output an audio signal; and a control unit configured to: acquire a first file, where the first file includes audio data and video data associated with the audio data and can be played through an application program of the electronic device; receive a first instruction; and judge whether the first instruction meets a first predetermined condition, where, if the first instruction meets the first predetermined condition, the first file is played in a first mode in which the audio data is decoded, an audio signal corresponding to the decoded audio data is output, and output of a video signal corresponding to the video data is prohibited.
Therefore, the present disclosure is directed to a control method and an electronic device that can turn off video output when a user needs only audio output, thereby saving the computing resources of the electronic device, and that can restore video output, with audio and video synchronized, when the user needs both video and audio output, thereby satisfying the user's diversified use requirements for the electronic device.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. The drawings are not to be considered as drawn to scale unless explicitly indicated. In the drawings, like reference numbers generally represent the same component or step. In the drawings:
FIG. 1 is a flow chart illustrating a control method according to the present disclosure; and
FIG. 2 is a block diagram showing a configuration of an electronic device according to the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more apparent, exemplary embodiments according to the present disclosure will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments described herein without inventive step, are intended to be within the scope of the present disclosure. Moreover, descriptions of well-known functions and constructions in the art are omitted herein for clarity and conciseness.
A control method 100 according to the present disclosure is first explained with reference to fig. 1. Fig. 1 is a flow chart illustrating a control method 100 according to the present disclosure. The control method 100 according to the present disclosure may be applied in mobile electronic devices such as tablet computers, smart phones, personal digital assistants, smart wearable devices, and the like. Hereinafter, for convenience of description, a smartphone will be explained as an example of such a mobile electronic device, and thus the "smartphone" described below should be understood as an exemplary expression of the mobile electronic device to which the control method 100 of the present disclosure is applied, and should not be construed as a limitation to such a mobile electronic device.
As shown in fig. 1, in step S101, a first file is acquired, where the first file includes audio data and video data associated with the audio data and can be played through an application of a smartphone.
Specifically, the first file may be a video file in any of various formats such as MPEG4, RMVB, RM, AVI, or MKV. The first file includes audio data and video data, where the audio data is data previously subjected to an audio encoding process and stored in the first file, and the video data is data previously subjected to a video encoding process and stored in the first file. When the smartphone plays the first file through the application program, the application program can decode the audio data through an audio decoder to output an audio signal and can decode the video data through a video decoder to output a video signal. When the video data is decoded, images are obtained in units of frames, and the decoded images are output at a predetermined frame rate (e.g., 30 frames/second), thereby realizing video output.
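As a rough illustration of this decode-and-output path, the loop below models the audio output unit and display unit as simple sinks and the already-decoded streams as plain sequences; all names here are illustrative rather than taken from the patent:

```python
import time

FRAME_INTERVAL = 1 / 30  # predetermined frame rate of 30 frames/second

def play(audio_blocks, video_frames, speaker, display):
    # Decode/output loop: each iteration outputs one decoded audio
    # block and one decoded video frame, paced at the frame rate.
    for block, frame in zip(audio_blocks, video_frames):
        speaker.output(block)   # audio signal to the audio unit
        display.output(frame)   # video signal to the display unit
        time.sleep(FRAME_INTERVAL)
```

A real player would run the two decoders concurrently and pace output from the container's timing data rather than a fixed sleep.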
It is noted that, as known to those skilled in the art, audio data and video data generally appear in the form of data blocks; that is, audio data generally includes one or more audio data blocks, and video data includes one or more video data blocks. The terms audio data and video data are therefore used herein merely as collective terms for convenience of description: hereinafter, unless explicitly stated otherwise, the term audio data is intended to include the one or more audio data blocks constituting it, and the term video data is intended to include the one or more video data blocks constituting it.
Although the format of the first file is exemplarily illustrated above, the present disclosure is not limited thereto; the format of the first file may be any other format known to those skilled in the art that can be played on a smartphone. Note that the format of the first file may refer both to the compression-encoding format of the audio data and video data in the video file and to the packaging format of the audio data and video data.
The source of the first file may be varied. For example, the first file may be pre-stored in a memory of the smartphone. The first file may also be a file that the user is downloading over a network such as the internet. Further, the first file may also be a file stored in the cloud. While several sources for the first file have been illustrated above, the present disclosure is not so limited and those skilled in the art can select a source for the first file based on the principles of the present disclosure, so long as the principles of the present disclosure are implemented.
The audio data and the video data in the first file are associated with each other. This association facilitates synchronized output of the audio signal and the video signal when the first file is played. Moreover, after output of the video signal has been prohibited and only the audio signal has been output for a while, when output of the video signal needs to be resumed, the video signal corresponding to the currently output audio signal can still be output; that is, when output of the video signal is resumed, synchronization of the audio signal and the video signal is still maintained. How the audio data and the video data are associated with each other and how their synchronization is achieved will be described in detail below in connection with specific embodiments.
It should be noted that the synchronization of the audio signal and the video signal referred to herein does not necessarily mean that output of the audio signal and the video signal starts at the same time; rather, it means that the current audio signal and the video signal corresponding to it are output in synchronization with each other, thereby achieving audio-video synchronization. The synchronization concept used herein therefore allows cases in which only one of audio and video is output. For example, a certain portion of the first file may contain only audio data with no corresponding video data; when that portion is played, only sound is output and no picture is displayed (for example, a voice-over with no picture).
In an implementation, the audio data and the video data are associated by a synchronization parameter, wherein the video data corresponding to the decoded audio data can be determined from the decoded audio data and the synchronization parameter.
For example, in a video file in AVI format, audio data and video data are stored separately from each other, and when playing the video file, an audio stream and a video stream are obtained separately through a decoder, and the output progress of the audio stream and the video stream is adjusted through a synchronization parameter, so that the audio data and the video data are associated to achieve the purpose of synchronizing the audio data and the video data.
For another example, in a video file in the MKV format, audio data and video data are packaged together by a synchronization parameter so that the audio data and the video data are associated by the synchronization parameter, and when the video file is played, the audio data and the video data are "unpacked", and the audio data and the video data are decoded by an audio decoder and a video decoder, respectively, based on the synchronization parameter, thereby achieving synchronization of the audio data and the video data.
As used herein, the statement that audio data is associated with video data means that both the audio data and the video data are temporal. Based on this temporal characteristic, when the audio data block A decoded at the current time point is determined, the video data block B corresponding to it can be determined according to the synchronization parameter; conversely, when the video data block B decoded at the current time point is determined, the audio data block A corresponding to it can be determined according to the synchronization parameter. After the audio data block A and the video data block B are determined, they can be decoded synchronously and output at the same time, so that they are synchronized with each other; alternatively, the audio data block A and the video data block B may be decoded according to different time sequences and then output when both are required to be output simultaneously, likewise achieving mutual synchronization.
The synchronization parameters may include a timestamp sub-parameter (hereinafter referred to as timestamp) and a reference clock sub-parameter (hereinafter referred to as reference clock). The reference clock is linearly incremented, for example, the reference clock may be a reference clock signal provided by the system. When encoding audio data and video data to generate a first file, a time stamp is given to each data block in the generated audio data according to a reference clock, that is, each audio data block is time stamped, and a time stamp is given to each data block in the generated video data according to the reference clock, that is, each video data block is time stamped. Thus, the time stamp of the audio data block and the time stamp of the video data block are both associated with the reference clock, thus associating the audio data with the video data.
For example, assume the reference clock starts from 0 seconds, the time stamp of the first audio data block is 0 seconds (i.e., the audio content starts at 0 seconds), and the time stamp of the first video data block is 5 seconds (i.e., the video content starts at 5 seconds). In this case, if the time stamp of the audio data block decoded at the current time point is determined to be 5 seconds, the corresponding video data block is the one whose time stamp is also 5 seconds. Therefore, when the first file is played, synchronization control can be performed through the reference clock and the time stamps: if the reference clock starts from 0 seconds, then during the interval from 0 to 5 seconds, even if the video decoder has already decoded the video data block with a time stamp of 5 seconds, the corresponding video signal is not output; the video signal is output only when the reference clock reaches 5 seconds (that is, when the audio signal corresponding to the audio data block with a time stamp of 5 seconds is output), thereby achieving synchronization between the audio and video signals.
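The gating just described can be sketched as a small helper, assuming each decoded video block carries its time stamp in seconds (the data shapes here are illustrative, not from the patent):

```python
def split_by_clock(decoded_video, reference_clock):
    # Partition decoded video blocks into those whose time stamps the
    # reference clock has reached (may be output now) and those that
    # must be held back, as in the 0-to-5-second example above.
    ready = [(ts, blk) for ts, blk in decoded_video if ts <= reference_clock]
    held = [(ts, blk) for ts, blk in decoded_video if ts > reference_clock]
    return ready, held
```

With the example above, a block stamped 5 seconds decoded while the clock reads 3 seconds lands in `held`, and moves to `ready` only once the clock reaches 5 seconds.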
Although the composition of the synchronization parameters, the marking manner of the time stamp, and the corresponding relationship between the time stamp and the reference clock are exemplarily shown above, the present disclosure is not limited thereto, and those skilled in the art can selectively set the time stamp and the reference clock as needed as long as the audio and video synchronization can be achieved.
Further, although the above exemplarily shows the way in which the audio data and the video data in the first file are associated with each other, the present disclosure is not limited thereto, and a person skilled in the art may selectively set the way in which the audio data and the video data are associated according to any one of known synchronization principles as long as the principles of the present disclosure can be implemented. For example, different key values may be assigned to the storage addresses of the audio data blocks and the storage addresses of the video data blocks, respectively, and then a hash function of the storage addresses of the audio data blocks and the video data blocks is established based on the key values, thereby implementing the correlation between the audio data and the video data. When the first file is played, if the audio decoder decodes the audio data block corresponding to the storage address C, correspondingly, the storage address D of the corresponding video data block can be obtained according to the hash function, so that the video decoder decodes the video data block at the storage address D, thereby realizing the synchronization of the audio and the video.
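As a sketch of this alternative association, a hash table (Python's built-in dict standing in for the hash function over the key values) can map each audio block's storage address to the corresponding video block's address; the addresses below are made-up values:

```python
def build_address_map(audio_addresses, video_addresses):
    # Hash table keyed by audio-block storage address, mapping to the
    # storage address of the corresponding video block.
    return dict(zip(audio_addresses, video_addresses))

# When the audio decoder decodes the block at address C, the video
# decoder looks up the block it must decode at address D = table[C].
```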
The application playing the first file may be a video player pre-installed in the smartphone, such as MX Player, BSPlayer, and the like. The application has an audio decoder and a video decoder adapted to the first file and is capable of decoding the video data and the audio data. The application may also be an online video player that the user accesses over a network such as the Internet. Hereinafter, for convenience of explanation, a video player pre-installed in a smartphone will be used as an example of the application program.
Next, the process proceeds to step S102.
In step S102, a first instruction is received.
The first instruction may be an input instruction of a user, that is, the first instruction corresponds to an input operation of the user. For example, the input operation of the user may be setting of a video player, setting of a display mode of a display unit, flipping of a smartphone (for example, to face a display screen toward the ground), closing of the display unit, and the like, and accordingly, the first instruction may be a setting instruction of the video player, a display mode setting instruction of the display unit, a flipping instruction of the smartphone, a closing instruction of the display unit, and the like.
The first instruction may also be an operation instruction of the smartphone itself, that is, the first instruction corresponds to an operation of the smartphone itself. For example, the operation of the smartphone itself may be a recognition operation on the face of the user, a reading operation on the remaining battery capacity, and the like, and accordingly, the first instruction may be a user face recognition instruction, a battery remaining capacity reading instruction, and the like.
Furthermore, the first instruction may be received before playing the first file, for example, a user sets a video player in advance before playing the first file; the first instruction may also be received during playing of the first file, e.g. when the video player plays the first file, the user flips the smartphone to orient the display unit towards the ground.
Although the first instruction is exemplarily illustrated above, the present disclosure is not limited thereto, and the first instruction may also be an instruction generated by combining an input instruction of a user with an operation instruction of the smartphone itself, so that a person skilled in the art may select and set the first instruction according to the principles described herein and by combining specific practical situations as long as the principles of the present disclosure can be implemented. To make the first instruction described herein clearer, the first instruction will be described in more detail below in conjunction with specific embodiments.
It should be noted that the first instruction, whether an input instruction of the user or an operation instruction of the smartphone itself, is intended to control the smartphone; for convenience of description, these instructions are therefore collectively referred to herein as the first instruction. Furthermore, regardless of its source, the first instruction is represented inside the smartphone by a control instruction or control command generated by the control unit. The judgment process for the first instruction described below should therefore be understood as a comparison and judgment of the instruction, not a judgment of the operation of the smartphone itself or of the input operation of the user.
Next, the process proceeds to step S103.
In step S103, it is determined whether the first instruction meets a first predetermined condition.
Specifically, in step S103, the first instruction received in step S102 is compared with a first predetermined condition to determine whether the first instruction meets the first predetermined condition. The first predetermined condition is preset in the smartphone and differs according to the operation represented by the first instruction. For example, if the first instruction is a setting instruction of the video player, the first predetermined condition is that the instruction corresponds to a predetermined mode (e.g., the first mode described below); if the first instruction is a user face recognition instruction, the first predetermined condition is that no face image of the user is recognized; if the first instruction is a battery remaining capacity read instruction, the first predetermined condition is that the battery remaining capacity is less than a predetermined capacity threshold. Step S103 will be described in detail below with reference to specific embodiments.
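The comparison in step S103 might be sketched as follows, with the instruction modeled as a hypothetical dict and an assumed capacity threshold (neither shape nor value is specified by the disclosure):

```python
BATTERY_THRESHOLD = 20  # assumed capacity threshold, in percent

def meets_first_condition(instruction):
    # Compare the received first instruction against the predetermined
    # condition matching its type; True means the first mode is entered.
    kind = instruction["type"]
    if kind == "player_setting":
        return instruction["mode"] == "first_mode"   # predetermined mode
    if kind == "face_recognition":
        return not instruction["face_detected"]      # no user face seen
    if kind == "battery_read":
        return instruction["level"] < BATTERY_THRESHOLD
    return False
```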
If it is determined in step S103 that the first instruction meets the first predetermined condition, the process proceeds to step S104.
In step S104, the first file is played in a first mode in which the audio data is decoded and an audio signal corresponding to the decoded audio data is output, and output of a video signal corresponding to the video data is prohibited.
Specifically, in the first mode, the audio decoder of the video player keeps decoding of the audio data and outputs an audio signal corresponding to the decoded audio data through an audio output unit such as a speaker, thereby outputting audio, and the video player prohibits output of a video signal corresponding to the video data through a display unit.
It should be noted that in the first mode, although the video player prohibits output of the video signal corresponding to the video data, this does not mean that the display unit of the smartphone must be turned off. In an implementation, in the first mode, the display unit of the smartphone may be turned off (e.g., when the user presses a lock-screen key); alternatively, the display unit may freeze the video player's picture at the last frame displayed before entering the first mode, or display a standby screen (e.g., when the user presses the Home key of the smartphone).
Therefore, when the first instruction meets the first predetermined condition, the first file is played in the first mode: only the audio is output and the video is not. Outputting the audio satisfies the user's basic playing requirement, while turning off the video saves the smartphone's computing resources and power, thereby extending its battery life.
In an implementation, in order to achieve the effect of outputting only audio but not video as described above, the video decoder may stop decoding the video data in step S104, so that the video signal cannot be provided; alternatively, the video decoder keeps decoding the video data and provides the video signal, but the display unit does not output the video signal. It should be noted that, whichever of the above-described two implementations is adopted, the audio decoder always keeps decoding the audio data, and the video player outputs an audio signal corresponding to the decoded audio data through an audio output unit such as a speaker. The above two implementations will be described separately below.
In a first implementation, prohibiting output of the video signal corresponding to the video data includes: stopping decoding of the video data in the first file.
Specifically, in step S104, the audio decoder of the video player keeps decoding the audio data, and the video decoder of the video player stops decoding the video data, that is, while the video player plays the first file, the audio decoder is operated, and the video decoder is not operated, so that only the audio signal and not the video signal are supplied. Thus, in step S104, only audio is output, and video is not output.
In this implementation, the control method 100 further includes: receiving a second instruction in the first mode; and judging whether the second instruction meets a second predetermined condition, where, if the second instruction meets the second predetermined condition, the first file is played in a second mode in which the video data associated with the currently decoded audio data is decoded and a video signal corresponding to the decoded video data is output.
The second instruction may be an instruction corresponding to the first instruction. In this case, in general, if the first instruction is an input instruction of the user, the second instruction is also an input instruction of the user; if the first instruction is the operation instruction of the smart phone, the second instruction is the operation instruction of the smart phone. For example, if the first instruction is a close instruction of the display unit, the second instruction is an open instruction of the display unit (e.g., lighting up a screen); if the first instruction is a turning instruction of the smart phone (for example, turning the smart phone to the display unit facing the ground), the second instruction is also a turning instruction of the smart phone (for example, turning the smart phone to the display unit facing away from the ground); if the first instruction is a user face recognition instruction (e.g., the user face image is not recognized within a predetermined time threshold), the second instruction is also a user face recognition instruction (e.g., the user face image is recognized within a predetermined time threshold).
The second instruction may also be an instruction that does not correspond to the first instruction. For example, if the first instruction is a user face recognition instruction, for example, a user face image is not recognized within a predetermined time threshold, so as to close the video output, the second instruction may be a user trigger instruction for outputting the video, so as to resume the video output; if the first instruction is a battery remaining capacity judgment instruction, for example, the battery remaining capacity is lower than a predetermined capacity threshold, the second instruction may be a charging instruction of the user, for example, the user connects the smartphone with an external power supply.
Although the second instruction is exemplarily illustrated above, the present disclosure is not limited thereto, and the second instruction may also be an instruction generated by combining an input instruction of a user with an operation instruction of the smartphone itself, so that a person skilled in the art may select and set the second instruction according to the principles described herein and by combining specific practical situations as long as the principles of the present disclosure can be implemented. To further clarify the second instructions described herein, the second instructions are described in more detail below with reference to specific embodiments.
The second predetermined condition is preset in the smartphone and differs according to the operation represented by the second instruction. For example, if the second instruction is a setting instruction of the video player, the second predetermined condition is that the instruction corresponds to a predetermined mode (e.g., the second mode described below); if the second instruction is a user face recognition instruction, the second predetermined condition is that a face image of the user is recognized; if the second instruction is a battery remaining capacity read instruction, the second predetermined condition is that the battery remaining capacity is greater than a predetermined capacity threshold. The second predetermined condition will be described in detail later with reference to specific embodiments.
In the second mode, the output of the audio signal is continuously maintained, the output of the video signal is resumed, and the output video signal is synchronized with the output audio signal. Specifically, the audio decoder of the video player continues to maintain decoding of the audio data and outputs an audio signal corresponding to the decoded audio data through an audio output unit such as a speaker, thereby outputting audio, and the video decoder decodes video data associated with the currently decoded audio data and outputs a video signal corresponding to the decoded video data through a display unit, thereby outputting video.
In particular, the synchronization of audio and video in the second mode may be achieved using the synchronization parameters described above. Assume that both the audio data and the video data in the first file are time-stamped using a reference clock. When the second instruction meets the second predetermined condition and the video player switches from the first mode to the second mode, if the audio decoder is decoding an audio data block with a time stamp of n seconds, the video decoder decodes the video data block with a time stamp of n seconds. The video decoder thus decodes the video data block associated with the currently decoded audio data block, and the video player outputs the corresponding video signal through the display unit, thereby realizing audio and video synchronization when switching from the first mode to the second mode.
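The timestamp lookup described above can be sketched as follows. This is an illustrative Python sketch of the principle only, not part of the disclosed apparatus; the function and variable names are hypothetical, and the sorted-timestamp assumption follows from the linearly increasing reference clock described later.

```python
import bisect

def video_index_for(audio_ts, video_timestamps):
    """Index of the video block associated with the currently decoded audio
    block: the latest block whose timestamp does not exceed the audio
    timestamp. Assumes the timestamp list is sorted, as stamps are assigned
    from a linearly increasing reference clock."""
    i = bisect.bisect_right(video_timestamps, audio_ts) - 1
    return i if i >= 0 else None  # None: no video data at or before this point

# Audio decoder is at 12 s; video blocks are stamped every 5 s.
stamps = [0, 5, 10, 15, 20]
print(video_index_for(12, stamps))   # -> 2 (the 10 s block)
print(video_index_for(-1, stamps))   # -> None (no video yet)
```

On switching to the second mode, the video decoder would resume from the block this lookup selects, so the displayed picture matches the audio already being output.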
In a second implementation, the prohibiting, in step S104, the outputting of the video signal corresponding to the video data includes: stopping displaying the video signal corresponding to the video data decoded from the first file.
Specifically, in step S104, the audio decoder and the video decoder of the video player keep decoding the audio data and the video data, respectively, but the smartphone stops outputting the video signal corresponding to the decoded video data. That is, while the video player plays the first file, both decoders operate and provide the audio signal and the video signal, respectively, but the smartphone outputs only the audio signal and does not output the video signal on the display unit. Thus, in step S104, only audio is output, and video is not output.
In this implementation, the control method 100 further includes: receiving a second instruction in the first mode; and judging whether the second instruction meets a second predetermined condition, wherein if the second instruction meets the second predetermined condition, the first file is played in a second mode, and in the second mode, an audio signal corresponding to the decoded audio data and a video signal corresponding to the decoded video data are synchronously output.
The second instruction and the second predetermined condition in this implementation are similar to those described in the first implementation above; those skilled in the art can understand them from the above description, so they are not described again here.
The main difference between this implementation and the first implementation described above is that, in this implementation, the video decoder continues to operate in the first mode and decodes the video data associated with the currently decoded audio data. Therefore, when the second instruction meets the second predetermined condition and the video player switches from the first mode to the second mode, the video player only needs to output the video signal corresponding to the decoded video data through the display unit, and the synchronization of audio and video is achieved.
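The second implementation can be sketched as a gate on the video output while both decoders keep running. This is an illustrative Python sketch with hypothetical names; it models only the output-gating logic, not actual decoding.

```python
class PlayerOutput:
    """Sketch of the second implementation: both decoders keep running in
    the first mode, but the video signal is gated off; switching to the
    second mode simply re-enables display output, so audio and video stay
    in sync without any decoder seek."""

    def __init__(self):
        self.mode = "first"     # "first" = audio only, "second" = audio + video
        self.played = []        # audio signals actually output
        self.displayed = []     # video signals actually output

    def on_decoded(self, audio_block, video_block):
        self.played.append(audio_block)       # audio is always output
        if self.mode == "second":             # video output is gated by mode
            self.displayed.append(video_block)

p = PlayerOutput()
p.on_decoded("a0", "v0")   # first mode: only audio is output
p.mode = "second"          # second instruction met the second condition
p.on_decoded("a1", "v1")   # video output resumes, already in sync
print(p.played, p.displayed)  # -> ['a0', 'a1'] ['v1']
```

Because the video decoder never stopped, re-enabling the gate is all that is needed; this trades the extra decoding work in the first mode for an instant, already-synchronized switch.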
The control method 100 according to the present disclosure will be described in detail below with reference to specific embodiments. The first instruction, the first predetermined condition, the second instruction, the second predetermined condition, etc. in the control method 100 can be further understood by those skilled in the art according to the following specific embodiments.
A control method 100 according to a first embodiment of the present disclosure is explained below. In this embodiment, the video player provides two play modes, namely an "audio mode" and a "video mode". In the audio mode, the video player outputs only audio without outputting video; in the video mode, the video player outputs audio and video synchronously. The two modes can be displayed on the display interface of the video player in the form of virtual keys, so as to prompt the user and provide a choice. In the present embodiment, the first predetermined condition is that the user selects the audio mode of the video player, and the first instruction may be an instruction corresponding to the audio play mode. For example, the first instruction may be composed of a first file selection instruction and an audio mode playing instruction, with the corresponding operations: the user selects a first file by long-pressing it, and then chooses to play the first file in the audio mode. The first instruction may also be composed of an audio mode start instruction and a first file selection instruction, with the corresponding operations: the user starts the video player, selects the audio mode, and then plays the first file through the video player (e.g., drags the first file into the video player). Therefore, in the present embodiment, the first mode in the control method 100 is the audio mode, and in the audio mode, outputting only audio without outputting video can be realized through either of the two implementations described above. The second predetermined condition is that the user selects the video mode of the video player, and accordingly the second instruction is an instruction corresponding to the video play mode.
For example, the second instruction may be a video mode selection instruction, which corresponds to the following operations: during the time that the video player plays the first file in audio mode, the user switches the play mode to video mode (e.g., the user triggers a "video mode" virtual key on the video player interface). Therefore, in the present embodiment, the second mode in the control method 100 is the video mode.
A control method 100 according to a second embodiment of the present disclosure is explained below. In this embodiment, the smartphone displays an "audio mode" virtual key and a "video mode" virtual key on the display unit to prompt the user and provide a choice. If the user selects the "audio mode" virtual key, then when the user plays the first file using the video player, the video player plays the first file in the audio mode, i.e., only audio is output and video is not output; if the user selects the "video mode" virtual key, the video player plays the first file in the video mode, i.e., audio and video are output synchronously. In this embodiment, the first predetermined condition is that the user selects the "audio mode" virtual key, and accordingly the first instruction is a selection instruction for the "audio mode" virtual key. Therefore, in the present embodiment, the first mode in the control method 100 is the audio mode, and in the audio mode, outputting only audio without outputting video can be realized through either of the two implementations described above. The second predetermined condition is that the user selects the "video mode" virtual key, and accordingly the second instruction is a selection instruction for the "video mode" virtual key, i.e., the video mode described above. Specifically, the user first selects the "audio mode" virtual key displayed on the display unit; the user then plays the first file using the video player, and the video player directly enters the audio mode; thereafter, in the audio mode, if the user selects the "video mode" virtual key, the video player switches to the video mode and synchronizes the audio and video.
A control method 100 according to a third embodiment of the present disclosure is explained below. In this embodiment, the smartphone has a front camera for capturing facial images of the user and has a face recognition function. The first instruction is a face recognition instruction, and the first predetermined condition is that no face image of the user is recognized when the video player starts playing the first file, or within a predetermined time threshold before the video player starts playing the first file. In this embodiment, the first mode is the audio mode described above, and in the audio mode, outputting only audio without outputting video may be implemented through either of the two implementations described above. The second instruction is still a face recognition instruction, the second predetermined condition is that a face image of the user is recognized, and the second mode is the video mode described above. Specifically, a user places the smartphone to one side of the body and plays the first file using the video player; at this time, the front camera of the smartphone does not acquire a face image of the user (which may be a complete face image or a partial face image), so the smartphone does not recognize a face image, and the video player enters the audio mode. In the audio mode, if the user wants to watch the video and moves his or her face into the acquisition range of the front camera, the smartphone recognizes the face image, and the video player enters the video mode and synchronizes the audio and video.
A control method 100 according to a fourth embodiment of the present disclosure is explained below. In this embodiment, the first instruction is a flip instruction for the smartphone, and the first predetermined condition is that the display unit of the smartphone is blocked, for example, the smartphone is flipped so that the display unit faces the ground and is placed on a desktop. In this embodiment, the smartphone may have a front sensing unit for sensing whether the display unit is blocked. The first mode is the audio mode described above. The second instruction is still a flip instruction for the smartphone, the second predetermined condition is that the display unit of the smartphone is not blocked, and the second mode is the video mode described above. Specifically, while playing a file using the video player, the user flips the smartphone so that the display unit faces the ground and places the smartphone on the desktop; the display unit is thus blocked, and the video player enters the audio mode. The user then flips the smartphone again so that the display unit faces away from the ground and places the smartphone on the desktop; the display unit is no longer blocked, so the video player enters the video mode and synchronizes the audio and video.
The control method 100 according to the present disclosure further includes: the first file is played in the second mode before the first instruction is received.
Specifically, after acquiring the first file, the video player is first used to play the first file in the second mode, i.e. the video mode as described above; then, in a second mode, receiving the first instruction and judging whether the first instruction meets a first preset condition.
In an implementation, in the second mode, if the first instruction meets the first predetermined condition, the playing mode of the first file is switched from the second mode to the first mode. In the first mode, if the received second instruction meets the second predetermined condition, the first mode is switched back to the second mode, thereby enabling repeated switching between the first mode and the second mode. Through such repeated switching, the user can freely select between different playing modes, which satisfies different playing requirements of the user and helps extend the battery life of the smartphone.
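The repeated switching just described amounts to a small two-state machine. The following Python sketch is illustrative only; the flag names are hypothetical stand-ins for "the received instruction meets the first/second predetermined condition".

```python
def next_mode(mode, meets_first, meets_second):
    """Two-state switching sketch: in the second (video) mode an instruction
    meeting the first predetermined condition switches to the first (audio)
    mode; in the first mode an instruction meeting the second condition
    switches back. Otherwise the mode is unchanged."""
    if mode == "second" and meets_first:
        return "first"
    if mode == "first" and meets_second:
        return "second"
    return mode

m = "second"
m = next_mode(m, True, False)    # first instruction met -> first (audio) mode
m = next_mode(m, False, True)    # second instruction met -> back to second mode
print(m)  # -> second
```

The same transition function serves every embodiment below; only the predicates behind the two flags change (virtual key, face recognition, occlusion, battery level).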
A control method 100 according to a fifth embodiment of the present disclosure is explained below. In this embodiment, the first predetermined condition is that the remaining battery capacity of the smartphone is lower than a predetermined capacity threshold, the second predetermined condition is that the remaining battery capacity of the smartphone is higher than the predetermined capacity threshold, the first instruction and the second instruction are a battery remaining capacity reading instruction, a battery remaining capacity feedback instruction, or the like, the first mode is the audio mode described above, and the second mode is the video mode described above. Specifically, assume a predetermined capacity threshold of 30% and a current remaining battery capacity of 40%, and the first file is first played in the video mode using the video player. After a period of time, as power is consumed, the remaining battery capacity falls below 30%, and the smartphone switches the playing mode from the video mode to the audio mode; in the audio mode, outputting only audio without outputting video can be realized through either of the two implementations described above. Next, the user connects the smartphone to an external power supply to charge the battery, and as the charge increases, when the remaining battery capacity rises above 30%, the smartphone switches the playing mode from the audio mode back to the video mode.
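The battery-based switching of this embodiment can be sketched as a threshold test. This is an illustrative Python sketch; the embodiment does not specify what happens exactly at the threshold, so that boundary case is an assumption noted in the code.

```python
THRESHOLD = 30  # the illustrative 30% threshold of this embodiment

def mode_for_battery(percent, current):
    """Fifth-embodiment sketch: below the threshold play in audio mode,
    above it in video mode. Exactly at the threshold the current mode is
    kept -- an assumption, since the embodiment leaves this unspecified."""
    if percent < THRESHOLD:
        return "audio"
    if percent > THRESHOLD:
        return "video"
    return current

print(mode_for_battery(40, "video"))  # -> video (as in the example's start)
print(mode_for_battery(25, "video"))  # -> audio (switched as charge drops)
print(mode_for_battery(35, "audio"))  # -> video (switched back while charging)
```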
A control method 100 according to a sixth embodiment of the present disclosure is explained below. In this embodiment, the smartphone has a front camera for capturing facial images of the user and has a face recognition function. The first predetermined condition is that no face image of the user is recognized within a predetermined time threshold, the second predetermined condition is that a face image of the user is recognized, the first instruction and the second instruction are face recognition instructions, the first mode is the audio mode described above, and the second mode is the video mode described above. Specifically, a first file (e.g., a teaching video) is first played in the video mode using the video player, and the user watches the first file. Then the user leaves the front of the smartphone (for example, to take notes beside it); if the smartphone does not recognize the user's face image within a predetermined time threshold (for example, 5 seconds), the smartphone switches the playing mode from the video mode to the audio mode, and in the audio mode, outputting only audio without outputting video can be realized through either of the two implementations described above. The user then returns to the front of the smartphone, so that the front camera captures the user's face image, and the smartphone switches the playing mode from the audio mode back to the video mode. If the user leaves the front of the smartphone again, the above switching between the video mode and the audio mode may be repeated.
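The timeout logic of this embodiment can be sketched in a few lines. This illustrative Python sketch assumes the recognizer reports the time since a face was last recognized; the 5-second value mirrors the example above and is not a fixed requirement.

```python
def mode_for_face(seconds_since_face, timeout=5.0):
    """Sixth-embodiment sketch: if no face image has been recognized for
    longer than the predetermined time threshold (5 s in the example),
    play in audio mode; any recognition within the threshold means video
    mode. The input is assumed to come from the face recognition unit."""
    return "audio" if seconds_since_face > timeout else "video"

print(mode_for_face(6.0))  # user stepped away for over 5 s -> audio
print(mode_for_face(0.0))  # face just recognized -> video
```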
Although the control method 100 according to the present disclosure is described above by way of specific embodiments, the specific embodiments described above are merely exemplary and are not intended to limit the present disclosure, and those skilled in the art may modify and change the above embodiments depending on specific applications, as long as the principles of the present disclosure can be implemented. For example, during the video player playing the first file in the video mode, the user may click the Home key on the display screen of the smartphone to display the standby interface of the smartphone, at which point the smartphone switches the play mode from the video mode to the audio mode.
An electronic device 20 according to the present disclosure is explained below with reference to fig. 2. Fig. 2 is a block diagram showing the configuration of the electronic device 20 according to the present disclosure. The electronic device 20 may be a mobile electronic device such as a tablet, a smartphone, a personal digital assistant, or a smart wearable device. Hereinafter, for convenience of description, a smartphone is used as an example of such a mobile electronic device; thus "smartphone 20" below should be understood as an exemplary expression of the electronic device of the present disclosure and should not be construed as limiting such an electronic device.
As shown in fig. 2, the electronic device 20 includes: a display unit 21, an audio unit 22, and a control unit 23. The above-described components of the electronic device 20 are described in detail below.
The display unit 21 is configured to output a video signal. The display unit 21 may be a display device such as a plasma display, an organic electroluminescent display, a liquid crystal display, etc., however, the present disclosure is not limited thereto, and those skilled in the art may select the type of the display unit 21 according to actual needs. The display unit 21 may also be implemented by a touch display panel, so that the display unit 21 can respond to a touch operation by a user.
The audio unit 22 is configured to output an audio signal. The audio unit 22 may be implemented by an audio output device such as a speaker, a sound box, or an earphone.
The control unit 23 is configured to: acquiring a first file, wherein the first file comprises audio data and video data associated with the audio data, and can be played through an application program of the electronic equipment; receiving a first instruction; and judging whether the first instruction meets a first preset condition or not, wherein if the first instruction meets the first preset condition, the first file is played in a first mode, wherein in the first mode, the audio data is decoded, an audio signal corresponding to the decoded audio data is output, and the output of a video signal corresponding to the video data is forbidden. The control unit 23 will be described in detail below.
The control unit 23 may be a processor such as a Central Processing Unit (CPU) or may be implemented by an Embedded Controller (EC). The implementation of the control unit 23 can be chosen by a person skilled in the art according to the actual needs.
The first file may be a video file of various formats such as MPEG4, RMVB, RM, AVI, MKV, etc. The first file includes audio data and video data, wherein the audio data is data previously subjected to an audio encoding process and stored in the first file, and the video data is data previously subjected to a video encoding process and stored in the first file. When the smart phone plays the first file through the application program, the application program can decode audio data through the audio decoder to output an audio signal, and can decode video data through the video decoder to output a video signal. In decoding video data, an image in units of frames is obtained, and the decoded image is output at a predetermined frame rate (e.g., 30 frames/second), thereby realizing video output.
It is noted that, as known to those skilled in the art, audio data and video data generally appear in the form of data blocks; that is, audio data generally includes one or more audio data blocks, and video data includes one or more video data blocks. The terms audio data and video data are therefore merely collective terms used for convenience of description: hereinafter, unless explicitly stated otherwise, the term audio data is intended to include the one or more audio data blocks constituting it, and the term video data is intended to include the one or more video data blocks constituting it.
Although the format of the first file is exemplarily illustrated above, the present disclosure is not limited thereto, and the format of the first file may be any other format known to those skilled in the art that can be played on a smartphone, and the format of the first file may represent both a compression-encoded format of audio data and video data in a video file and a packaging format of the audio data and video data.
The source of the first file may be varied. For example, the first file may be pre-stored in a memory of the smartphone. The first file may also be a file that the user is downloading over a network such as the internet. Further, the first file may also be a file stored in the cloud. While several sources for the first file have been illustrated above, the present disclosure is not so limited and those skilled in the art can select a source for the first file based on the principles of the present disclosure, so long as the principles of the present disclosure are implemented.
The audio data and the video data in the first file are associated with each other. This association facilitates synchronous output of the audio signal and the video signal when the first file is played. Moreover, after the output of the video signal has been prohibited and only the audio signal has been output for a while, when the output of the video signal needs to be resumed, the video signal corresponding to the currently output audio signal can still be output; that is, the synchronization of audio and video can still be maintained when video output resumes. How the audio data and the video data are associated with each other and how their synchronization is achieved will be described in detail below in connection with specific embodiments.
It should be noted that the synchronization of the audio signal and the video signal referred to herein does not generally mean that the audio signal and the video signal start to be output at the same time; rather, it means that the current audio signal and the video signal corresponding to it are output in synchronization with each other, thereby achieving audio and video synchronization. Therefore, under the synchronization concept used herein, a case where only one of audio and video is output is allowed. For example, a certain portion of the first file may contain only audio data and no corresponding video data; when that portion is played, only sound is output and no picture is displayed (for example, only narration is output without any picture).
In an implementation, the audio data and the video data are associated by a synchronization parameter, wherein the control unit 23 is further configured to determine the video data corresponding to the decoded audio data from the decoded audio data and the synchronization parameter.
For example, in a video file in AVI format, audio data and video data are stored separately from each other, and when playing the video file, an audio stream and a video stream are obtained separately through a decoder, and the output progress of the audio stream and the video stream is adjusted through a synchronization parameter, so that the audio data and the video data are associated to achieve the purpose of synchronizing the audio data and the video data.
For another example, in a video file in the MKV format, audio data and video data are packaged together by a synchronization parameter so that the audio data and the video data are associated by the synchronization parameter, and when the video file is played, the audio data and the video data are "unpacked", and the audio data and the video data are decoded by an audio decoder and a video decoder, respectively, based on the synchronization parameter, thereby achieving synchronization of the audio data and the video data.
Stating herein that audio data is associated with video data means that both the audio data and the video data are time-based. Based on this temporal characteristic, when the audio data block A decoded at the current time point is determined, the video data block B corresponding to it can be determined according to the synchronization parameter; conversely, when the video data block B decoded at the current time point is determined, the audio data block A corresponding to it can be determined according to the synchronization parameter. After the audio data block A and the video data block B are determined, they can be decoded synchronously and output at the same time so as to be synchronized with each other; alternatively, the audio data block A and the video data block B may be decoded according to different time sequences and then output together when simultaneous output is required, likewise achieving mutual synchronization.
The synchronization parameters may include a timestamp sub-parameter (hereinafter referred to as timestamp) and a reference clock sub-parameter (hereinafter referred to as reference clock). The reference clock is linearly incremented, for example, the reference clock may be a reference clock signal provided by the system. When encoding audio data and video data to generate a first file, a time stamp is given to each data block in the generated audio data according to a reference clock, that is, each audio data block is time stamped, and a time stamp is given to each data block in the generated video data according to the reference clock, that is, each video data block is time stamped. Thus, the time stamp of the audio data block and the time stamp of the video data block are both associated with the reference clock, thus associating the audio data with the video data.
For example, assuming that the reference clock starts from 0 seconds, the time stamp of the first audio data block is 0 seconds (i.e., the audio content starts from 0 seconds or starts outputting audio at 0 seconds), the time stamp of the first video data block is 5 seconds (i.e., the video content starts from 5 seconds or starts outputting video at 5 seconds), in this case, if it is determined that the time stamp of the audio data block decoded at the current time point is 5 seconds, it is determined that the video data block whose time stamp corresponds to the audio data block is 5 seconds. Therefore, when the first file is played, synchronization control can be performed by the reference clock and the time stamp, and for example, if the reference clock starts from 0 second when the first file is played, during 0 second to 5 seconds, even if the video decoder decodes a video data block with a time stamp of 5 seconds, a video signal corresponding to the video data block is not output, but the video signal must be output until the reference clock reaches 5 seconds (that is, when an audio signal corresponding to an audio data block with a time stamp of 5 seconds is output), thereby achieving synchronization between audio and video signals.
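The hold-back behavior in the example above reduces to a single comparison between a block's timestamp and the reference clock. This Python sketch is illustrative only; in a real player the reference clock would come from the system clock or the audio output position.

```python
def should_output_video(block_ts, reference_clock):
    """Per the example above: a video block decoded early (e.g. the 5 s
    block during seconds 0-5) is held back until the reference clock
    reaches its timestamp, i.e. until the matching audio is being output."""
    return reference_clock >= block_ts

# The 5 s video block is held during 0-5 s, then released at 5 s.
for clock in (0, 3, 5, 7):
    print(clock, should_output_video(5, clock))
# -> 0 False / 3 False / 5 True / 7 True
```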
Although the composition of the synchronization parameters, the marking manner of the time stamp, and the corresponding relationship between the time stamp and the reference clock are exemplarily shown above, the present disclosure is not limited thereto, and those skilled in the art can selectively set the time stamp and the reference clock as needed as long as the audio and video synchronization can be achieved.
Further, although the above exemplarily shows the way in which the audio data and the video data in the first file are associated with each other, the present disclosure is not limited thereto, and a person skilled in the art may selectively set the way in which the audio data and the video data are associated according to any one of known synchronization principles as long as the principles of the present disclosure can be implemented. For example, different key values may be assigned to the storage addresses of the audio data blocks and the storage addresses of the video data blocks, respectively, and then a hash function of the storage addresses of the audio data blocks and the video data blocks is established based on the key values, thereby implementing the correlation between the audio data and the video data. When the first file is played, if the audio decoder decodes the audio data block corresponding to the storage address C, correspondingly, the storage address D of the corresponding video data block can be obtained according to the hash function, so that the video decoder decodes the video data block at the storage address D, thereby realizing the synchronization of the audio and the video.
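The storage-address scheme just described can be sketched with a dictionary standing in for the hash function. This is an illustrative Python sketch; the addresses are made-up values, and a real implementation would derive the mapping from the key values assigned at encoding time.

```python
def build_address_map(audio_addrs, video_addrs):
    """Sketch of the alternative association scheme: a dict (standing in
    for the hash function over key values) maps each audio block's storage
    address to the storage address of its associated video block. Assumes
    the two address lists are aligned block-for-block."""
    return dict(zip(audio_addrs, video_addrs))

addr_map = build_address_map([0x10, 0x20, 0x30], [0xA0, 0xB0, 0xC0])
# Decoding the audio block at address C = 0x20 yields D = 0xB0 for video.
print(hex(addr_map[0x20]))  # -> 0xb0
```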
The application playing the first file may be a video player pre-installed in the smartphone, such as MX Player or BSPlayer. The application has an audio decoder and a video decoder adapted to the first file and is capable of decoding the video data and the audio data. The application may also be an online video player used by the user over a network such as the Internet. Hereinafter, for convenience of explanation, a video player pre-installed in the smartphone is used as an example of the application program.
The first instruction may be an input instruction of a user, that is, the first instruction corresponds to an input operation of the user. For example, the input operation of the user may be setting of a video player, setting of a display mode of the display unit, flipping of a smartphone (for example, to face a display screen toward the ground), closing of the display unit 21, and the like, and accordingly, the first instruction may be a setting instruction of the video player, a display mode setting instruction of the display unit, a flipping instruction of the smartphone, a closing instruction of the display unit, and the like.
The first instruction may also be an operation instruction of the smartphone itself, that is, the first instruction corresponds to an operation of the smartphone itself. For example, the operation of the smartphone itself may be a recognition operation on the face of the user, a reading operation on the remaining battery capacity, and the like, and accordingly, the first instruction may be a user face recognition instruction, a battery remaining capacity reading instruction, and the like.
Furthermore, the first instruction may be received before playing the first file, for example, a user sets a video player in advance before playing the first file; the first instruction may also be received during playing of the first file, e.g. when the video player plays the first file, the user flips the smartphone to orient the display unit 21 towards the ground.
Although the first instruction is exemplarily illustrated above, the present disclosure is not limited thereto, and the first instruction may also be an instruction generated by combining an input instruction of a user with an operation instruction of the smartphone itself, so that a person skilled in the art may select and set the first instruction according to the principles described herein and by combining specific practical situations as long as the principles of the present disclosure can be implemented. To make the first instruction described herein clearer, the first instruction will be described in more detail below in conjunction with specific embodiments.
It should be noted that the first instruction, whether the first instruction is an input instruction of a user or an operation instruction of the smartphone itself, is intended to control the smartphone, and therefore for convenience of description, these instructions are collectively referred to as the first instruction herein. Furthermore, it should be noted that, regardless of the source of the first instruction, the first instruction should be represented inside the smartphone by a control instruction or a control command generated by the control unit, and therefore, the judgment process of the first instruction described below should be understood as a comparison and judgment of the instruction, not a judgment of the operation of the smartphone itself or a judgment of the input operation of the user.
The control unit 23 compares the received first instruction with the first predetermined condition to determine whether the first instruction meets the first predetermined condition. The first predetermined condition is preset in the smartphone and differs depending on the operation represented by the first instruction. For example, if the first instruction is a setting instruction of the video player, the first predetermined condition is an instruction corresponding to a predetermined mode (e.g., a first mode described below); if the first instruction is a user face recognition instruction, the first predetermined condition is that no face image of the user is recognized; if the first instruction is a battery remaining capacity reading instruction, the first predetermined condition is that the battery remaining capacity is less than a predetermined capacity threshold. The first instruction and the first predetermined condition will be described in detail with reference to specific embodiments.
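As an illustration only (the disclosure contains no code), the comparison of the first instruction against its predetermined condition might be sketched as follows; the `Instruction` type, the `meets_first_condition` name, and the 30% capacity threshold are assumptions made for this sketch, not part of the disclosure:

```python
from dataclasses import dataclass

CAPACITY_THRESHOLD = 30  # percent; an assumed predetermined capacity threshold

@dataclass
class Instruction:
    kind: str        # e.g. "player_setting", "face_recognition", "battery_read"
    payload: object  # selected mode, recognition result, or battery percentage

def meets_first_condition(instr: Instruction) -> bool:
    """Return True if the first instruction meets the first predetermined condition."""
    if instr.kind == "player_setting":
        return instr.payload == "audio_mode"       # instruction corresponds to the first mode
    if instr.kind == "face_recognition":
        return instr.payload is None               # no face image of the user recognized
    if instr.kind == "battery_read":
        return instr.payload < CAPACITY_THRESHOLD  # remaining capacity below threshold
    return False
```

The point of the sketch is that the judgment is a comparison of instructions inside the control unit, as noted above, not a direct judgment of user behavior.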
If the control unit 23 determines that the first instruction meets the first predetermined condition, the control unit 23 controls to play the first file in a first mode in which the audio data is decoded and an audio signal corresponding to the decoded audio data is output, and output of a video signal corresponding to the video data is prohibited.
Specifically, in the first mode, the audio decoder of the video player keeps decoding of the audio data and outputs an audio signal corresponding to the decoded audio data through the audio unit 22, thereby outputting audio, and the video player prohibits output of a video signal corresponding to the video data through the display unit 21.
It should be noted that in the first mode, although the video player prohibits the output of the video signal corresponding to the video data, this does not mean that the display unit 21 of the smartphone must be turned off in the first mode. In an implementation, in the first mode, the display unit 21 of the smartphone may be turned off (e.g., the user presses a lock screen key); the display unit 21 may freeze the video player screen at the last frame displayed before entering the first mode; or the display unit 21 may display a standby screen (for example, after the user presses the Home key of the smartphone).
Therefore, when the first instruction meets the first predetermined condition, the first file is played in the first mode: only the audio is output, and the video is not. Outputting the audio meets the user's basic playing requirement, while suppressing the video saves the operation resources and electric energy of the smartphone, thereby extending its battery life.
In an implementation, to achieve the effect of outputting only audio but not video as described above, the video decoder may stop decoding the video data, so that no video signal is provided; alternatively, the video decoder keeps decoding the video data and supplies the video signal, but the display unit 21 does not output it. It should be noted that, whichever of these two implementations is adopted, the audio decoder always keeps decoding the audio data, and the video player outputs an audio signal corresponding to the decoded audio data through the audio unit 22. The two implementations are described separately below.
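The two implementations can be contrasted in a minimal sketch; the `Player` class and its flag are hypothetical names, and the real decoders are reduced to a pass-through of data blocks:

```python
class Player:
    """Toy model of the two implementations of prohibiting video output."""

    def __init__(self, decode_video_in_first_mode: bool):
        # True  -> second implementation (video is decoded, display suppressed)
        # False -> first implementation (video decoding stops entirely)
        self.decode_video_in_first_mode = decode_video_in_first_mode
        self.mode = "video"  # second mode by default

    def step(self, audio_block, video_block):
        """Process one pair of associated audio/video data blocks and
        return what is actually output."""
        outputs = {"audio": audio_block}        # audio is always decoded and output
        if self.mode == "video":
            outputs["video"] = video_block      # decode and display the video
        elif self.decode_video_in_first_mode:
            _ = video_block                     # decoded, but display is suppressed
        return outputs
```

In both variants the audio path is untouched; they differ only in whether the video decoder keeps running while its output is withheld.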
In a first implementation, the inhibiting outputting the video signal corresponding to the video data includes: stopping decoding the video data in the first file.
Specifically, the control unit 23 keeps the audio decoder of the video player decoding the audio data while the video decoder of the video player stops decoding the video data. That is, while the video player plays the first file, the audio decoder operates and the video decoder does not, so that only an audio signal is supplied and no video signal is supplied; as a result, only the audio is output, not the video.
In this implementation, the control unit 23 is further configured to: receive a second instruction in the first mode; and judge whether the second instruction meets a second predetermined condition, wherein, if the second instruction meets the second predetermined condition, the first file is played in a second mode in which video data associated with the currently decoded audio data is decoded and a video signal corresponding to the decoded video data is output.
The second instruction may be an instruction corresponding to the first instruction. In this case, in general, if the first instruction is an input instruction of the user, the second instruction is also an input instruction of the user; if the first instruction is an operation instruction of the smartphone 20 itself, the second instruction is also an operation instruction of the smartphone 20 itself. For example, if the first instruction is a close instruction of the display unit 21, the second instruction is an open instruction of the display unit 21 (e.g., lighting up a screen); if the first instruction is a flip instruction of the smartphone 20 (for example, flipping the smartphone 20 to the display unit 21 toward the ground), the second instruction is also a flip instruction of the smartphone 20 (for example, flipping the smartphone 20 to the display unit 21 away from the ground); if the first instruction is a user face recognition instruction (e.g., the user face image is not recognized within a predetermined time threshold), the second instruction is also a user face recognition instruction (e.g., the user face image is recognized within a predetermined time threshold).
The second instruction may also be an instruction that does not correspond to the first instruction. For example, if the first instruction is a user face recognition instruction, for example, a user face image is not recognized within a predetermined time threshold, so as to close the video output, the second instruction may be a user trigger instruction for outputting the video, so as to resume the video output; if the first instruction is a battery remaining capacity judgment instruction, for example, the battery remaining capacity is lower than a predetermined capacity threshold, the second instruction may be a charging instruction of the user, for example, the user connects the smartphone 20 with an external power supply.
Although the second instruction is exemplarily illustrated above, the present disclosure is not limited thereto, and the second instruction may also be an instruction generated by combining an input instruction of a user with an operation instruction of the smartphone 20 itself, so that a person skilled in the art may select and set the second instruction according to the principles described herein and by combining specific practical situations as long as the principles of the present disclosure can be implemented. To further clarify the second instructions described herein, the second instructions are described in more detail below with reference to specific embodiments.
The second predetermined condition is set in advance in the smartphone 20, and the second predetermined condition differs depending on the operation indicated by the second instruction. For example, if the second instruction is a setting instruction of the video player, the second predetermined condition is an instruction corresponding to a predetermined mode (e.g., a second mode described below); if the second instruction is a user face recognition instruction, the second predetermined condition is that a face image of the user is recognized; if the second instruction is a battery remaining capacity read instruction, the second predetermined condition is that the battery remaining capacity is greater than a predetermined capacity threshold. The second predetermined condition will be described later in detail with reference to specific embodiments.
In the second mode, the control unit 23 continues to hold the output of the audio signal, resumes the output of the video signal, and synchronizes the output video signal with the output audio signal. Specifically, the audio decoder of the video player continues to maintain decoding of the audio data and outputs an audio signal corresponding to the decoded audio data through the audio unit 22, thereby outputting audio, and the video decoder decodes video data associated with the currently decoded audio data and outputs a video signal corresponding to the decoded video data through the display unit 21, thereby outputting video.
In particular, the synchronization of audio and video in the second mode may be achieved using the synchronization parameters described above. Assuming that both audio data and video data in the first file are time-stamped using the reference clock, when the second instruction satisfies the second predetermined condition to switch the video player from the first mode to the second mode, if the audio decoder decodes an audio data block with a time stamp of n seconds, the video decoder decodes a video data block with a time stamp of n seconds, whereby the video decoder can decode a video data block associated with the currently decoded audio data block, and the video player outputs a video signal corresponding to the decoded video data block through the display unit 21, thereby achieving audio and video synchronization when switching from the first mode to the second mode.
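A minimal sketch of this timestamp-based resynchronization, assuming both streams are stamped from a common reference clock (the synchronization parameter above); the function name and the block representation are illustrative:

```python
def resume_video(video_blocks, current_audio_timestamp):
    """On switching from the first mode to the second mode, pick the video
    block associated with the currently decoded audio block: the first
    block whose timestamp is at or after the audio position.

    video_blocks is an ordered list of (timestamp_seconds, block) pairs.
    Returns the matching (timestamp, block) pair, or None if playback has
    run past the last video block.
    """
    for ts, block in video_blocks:
        if ts >= current_audio_timestamp:
            return ts, block
    return None
```

With such a lookup, the video decoder can start at the block stamped n seconds when the audio decoder is at n seconds, which is the synchronization behavior described above.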
In a second implementation, the inhibiting outputting the video signal corresponding to the video data includes: stopping displaying the video signal corresponding to the video data decoded from the first file.
Specifically, in the first mode, the audio decoder and the video decoder of the video player keep decoding the audio data and the video data, respectively, but the smartphone 20 stops outputting the video signal corresponding to the decoded video data. That is, while the video player plays the first file, both decoders operate and supply the audio signal and the video signal, respectively, but the smartphone 20 outputs only the audio signal and does not output the video signal on the display unit 21; as a result, only the audio is output, not the video.
In this implementation, the control unit 23 is further configured to: receive a second instruction in the first mode; and judge whether the second instruction meets a second predetermined condition, wherein, if the second instruction meets the second predetermined condition, the first file is played in a second mode in which an audio signal corresponding to the decoded audio data and a video signal corresponding to the decoded video data are output synchronously.
The second instruction and the second predetermined condition in this implementation are similar to the second instruction and the second predetermined condition described in the first implementation described above, and those skilled in the art can understand the second instruction and the second predetermined condition in this implementation according to the above description, and are not described here again.
The main difference between this implementation and the first implementation described above is that, in this implementation, in the first mode, the video decoder continues to operate and decodes the video data associated with the currently decoded audio data. Therefore, when the second instruction meets the second predetermined condition and the video player is switched from the first mode to the second mode, the video player only needs to output the video signal corresponding to the decoded video data through the display unit 21, and the audio and video are synchronized.
The electronic device 20 according to the present disclosure will be described in detail below with reference to specific embodiments. Those skilled in the art will further appreciate the first instruction, the first predetermined condition, the second instruction, the second predetermined condition, etc. described above with reference to the following detailed description.
A smartphone 20 according to a first embodiment of the present disclosure will be explained below. In this embodiment, the video player provides two play modes, an "audio mode" and a "video mode": in the audio mode, the video player outputs only audio without outputting video, and in the video mode, the video player outputs audio and video synchronously. The two modes may be displayed on the display interface of the video player in the form of virtual keys, so as to prompt the user and provide the user with a choice. In the present embodiment, the first predetermined condition is that the user selects the audio mode of the video player, and the first instruction may be an instruction corresponding to the audio play mode. For example, the first instruction may be composed of a first file selection instruction and an audio mode playing instruction, with the corresponding operations: the user selects the first file by long-pressing it, and then chooses to play it in the audio mode. The first instruction may also be composed of an audio mode start instruction and a first file selection instruction, with the corresponding operations: the user starts the video player and selects the audio mode, and then plays the first file through the video player (e.g., drags the first file into the video player). Therefore, in the present embodiment, the first mode is the audio mode, and outputting only audio without video in the audio mode may be implemented by either of the two implementation manners described above. The second predetermined condition is that the user selects the video mode of the video player, and accordingly the second instruction is an instruction corresponding to the video play mode.
For example, the second instruction may be a video mode selection instruction, which corresponds to the following operations: during the time that the video player plays the first file in audio mode, the user switches the play mode to video mode (e.g., the user triggers a "video mode" virtual key on the video player interface). Therefore, in the present embodiment, the second mode is a video mode.
A smartphone 20 according to a second embodiment of the present disclosure is explained below. In this embodiment, the smartphone 20 displays an "audio mode" virtual key and a "video mode" virtual key on the display unit 21 to prompt the user and provide a choice. If the user selects the "audio mode" virtual key, then when the user plays the first file using the video player, the video player plays the first file in the audio mode, that is, only the audio is output and the video is not; if the user selects the "video mode" virtual key, the video player plays the first file in the video mode, that is, audio and video are output synchronously. In this embodiment, the first predetermined condition is that the user selects the "audio mode" virtual key, and accordingly the first instruction is a selection instruction for the "audio mode" virtual key. Therefore, in the present embodiment, the first mode is the audio mode, and outputting only audio without video in the audio mode may be implemented by either of the two implementation manners described above. The second predetermined condition is that the user selects the "video mode" virtual key, and accordingly the second instruction is a selection instruction for the "video mode" virtual key, the second mode being the video mode as described above. Specifically, the user first selects the "audio mode" virtual key displayed on the display unit 21; the user then plays the first file using the video player, which directly enters the audio mode; thereafter, in the audio mode, if the user selects the "video mode" virtual key displayed on the display unit 21, the video player switches to the video mode and synchronizes the audio and video.
A smartphone 20 according to a third embodiment of the present disclosure is explained below. In this embodiment, the smartphone 20 has a front camera for capturing facial images of the user, and the smartphone 20 has a face recognition function. The first instruction is a face recognition instruction, and the first predetermined condition is that the smartphone 20 does not recognize the facial image of the user when the video player starts to play the first file, or within a predetermined time threshold before the video player starts to play the first file. In this embodiment, the first mode is the audio mode as described above, and outputting only audio without video in the audio mode may be implemented by either of the two implementation manners described above. The second instruction is still a face recognition instruction, the second predetermined condition is that a facial image of the user is recognized, and the second mode is the video mode as described above. Specifically, the user places the smartphone 20 at one side of the body and plays the first file using the video player; at this time, the front camera of the smartphone 20 does not acquire a facial image (whether a complete or a partial facial image) of the user, so the smartphone 20 recognizes no facial image and the video player enters the audio mode. In the audio mode, if the user wishes to watch the video and moves the face into the capture range of the front camera so that the smartphone 20 recognizes the facial image, the video player enters the video mode and synchronizes the audio and video.
A smartphone 20 according to a fourth embodiment of the present disclosure will be explained below. In the present embodiment, the first instruction is a flip instruction for the smartphone 20, and the first predetermined condition is that the display unit 21 of the smartphone 20 is blocked, for example, flipped toward the ground and placed on a desktop. In the present embodiment, the smartphone 20 may have a front sensing unit for sensing whether the display unit 21 is blocked. The first mode is the audio mode as described above. The second instruction is still a flip instruction for the smartphone 20, the second predetermined condition is that the display unit 21 of the smartphone 20 is not blocked, and the second mode is the video mode as described above. Specifically, while playing a file using the video player, the user flips the smartphone 20 so that the display unit 21 faces the ground and places the smartphone 20 on the desktop, blocking the display unit 21; at this time, the video player enters the audio mode. The user then flips the smartphone 20 again so that the display unit 21 faces away from the ground and places it on the desktop, unblocking the display unit 21; at this time, the video player enters the video mode and synchronizes the audio and video.
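The mode transitions of this embodiment can be sketched as a small state function; the front sensing unit is reduced here to a boolean "blocked" reading, which is an assumption made for illustration:

```python
def next_mode(current_mode: str, display_blocked: bool) -> str:
    """Apply the fourth embodiment's conditions to one sensor reading.

    First predetermined condition:  display blocked     -> first (audio) mode.
    Second predetermined condition: display not blocked -> second (video) mode.
    """
    if current_mode == "video" and display_blocked:
        return "audio"
    if current_mode == "audio" and not display_blocked:
        return "video"
    return current_mode  # condition not met; keep playing in the current mode
```

Because the function only changes state when the relevant condition is met, flipping the phone repeatedly reproduces the repeated audio/video switching described above.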
The control unit 23 is further configured to: playing the first file in a second mode before receiving the first instruction.
Specifically, the control unit 23 controls to play the first file in the second mode, i.e., the video mode as described above, first using the video player after acquiring the first file; then, in the second mode, the control unit 23 controls to receive the first instruction and determines whether the first instruction meets a first predetermined condition.
In an implementation, in the second mode, if the first instruction meets the first predetermined condition, the control unit 23 switches the play mode of the first file from the second mode to the first mode. In the first mode, if the received second instruction meets the second predetermined condition, the control unit 23 switches the first mode back to the second mode, thereby enabling repeated switching between the first mode and the second mode. Through such repeated switching, the user can freely choose between the play modes, satisfying different playing requirements while extending the battery life of the smartphone 20.
A smartphone 20 according to a fifth embodiment of the present disclosure is explained below. In the present embodiment, the first predetermined condition is that the remaining battery capacity of the smartphone 20 is lower than a predetermined capacity threshold, the second predetermined condition is that the remaining battery capacity of the smartphone 20 is higher than the predetermined capacity threshold, the first instruction and the second instruction are each a battery remaining capacity reading instruction, a battery remaining capacity feedback instruction, or the like, the first mode is the audio mode as described above, and the second mode is the video mode as described above. Specifically, a first file is first played in the video mode using the video player, assuming a predetermined capacity threshold of 30% and a current remaining battery capacity of 40%. After a period of time, as electric energy is consumed, the remaining battery capacity falls below 30%, and the smartphone 20 switches the play mode from the video mode to the audio mode; in the audio mode, outputting only audio without video may be implemented by either of the two implementation manners described above. Next, the user connects the smartphone 20 to an external power supply to charge its battery, and as the capacity increases, when the remaining battery capacity rises above 30%, the smartphone 20 switches the play mode from the audio mode back to the video mode.
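Using the 30% figure from the example, the battery-driven mode choice of this embodiment might be sketched as follows (the function name and threshold constant are assumptions for illustration):

```python
BATTERY_THRESHOLD = 30  # percent; the predetermined capacity threshold in the example

def mode_for_battery(remaining_percent: float) -> str:
    """First (audio) mode when the remaining battery capacity is below
    the threshold; second (video) mode otherwise."""
    return "audio" if remaining_percent < BATTERY_THRESHOLD else "video"
```

Following the example above, playback starts at 40% in the video mode, drops to the audio mode as the charge falls below 30%, and returns to the video mode once charging brings it back above the threshold.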
A smartphone 20 according to a sixth embodiment of the present disclosure is explained below. In this embodiment, the smartphone 20 has a front camera for capturing facial images of the user and the smartphone 20 has a face recognition function. The first predetermined condition is that the face image of the user is not recognized within a predetermined time threshold, the second predetermined condition is that the face image of the user is recognized, the first instruction and the second instruction are face recognition instructions, the first mode being an audio mode as described above, and the second mode being a video mode as described above. Specifically, a first file (e.g., a teaching video) is first played in a video mode using a video player through which a user views the first file; then, the user leaves from the front of the smartphone 20 (e.g., the user takes notes beside the smartphone 20), within a predetermined time threshold (e.g., 5 seconds), the smartphone 20 does not recognize the facial image of the user, the smartphone 20 switches the play mode from the video mode to the audio mode, and in the audio mode, outputting only audio but not video can be implemented through any one of the two implementation manners described above; then, the user returns to the front of the smartphone 20 again, so that the front camera captures the face image of the user, and the smartphone 20 switches the play mode from the audio mode to the video mode. Next, if the user leaves from the front of the smartphone 20 again, the above-described switching process between the video mode and the audio mode may be repeated.
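The time-threshold behavior of this embodiment (5 seconds in the example) might be sketched as follows; timestamps are in seconds, and the names are illustrative assumptions:

```python
TIME_THRESHOLD = 5.0  # seconds; the predetermined time threshold in the example

def mode_for_face(last_face_seen_at: float, now: float) -> str:
    """First (audio) mode once no facial image has been recognized for
    TIME_THRESHOLD seconds; second (video) mode otherwise."""
    return "audio" if now - last_face_seen_at >= TIME_THRESHOLD else "video"
```

For example, if the user steps away at t = 0 s, the player is still in the video mode at t = 3 s but has switched to the audio mode by t = 6 s; as soon as the front camera recognizes the face again, `last_face_seen_at` is updated and the video mode resumes.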
Although the smartphone 20 according to the present disclosure is described above by way of a specific embodiment, the specific embodiment described above is merely exemplary and is not intended to limit the present disclosure, and those skilled in the art may modify and change the above-described embodiment according to a specific application as long as the principles of the present disclosure can be implemented. For example, during the video player playing the first file in the video mode, the user may click the Home key on the display unit 21 of the smartphone 20 to display the standby interface of the smartphone 20, at which point the smartphone 20 switches the play mode from the video mode to the audio mode.
It is to be understood that the terminology used in the description is for the purpose of describing particular embodiments only, and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Those skilled in the art will appreciate that the embodiments disclosed herein can be implemented in electronic hardware, computer software, or combinations of both, and that the components and steps of the various examples have been described above generally in terms of their functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
Those skilled in the art will understand that: the above embodiments are only used for illustrating the technical solutions of the present disclosure, and not for limiting the same; although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art may modify the technical solutions described in the foregoing embodiments or may substitute some or all of the technical features; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (8)

1. A control method is applied to electronic equipment and comprises the following steps:
acquiring a first file, wherein the first file comprises audio data and video data associated with the audio data, and can be played through an application program of the electronic equipment;
receiving a first instruction; and
judging whether the first instruction meets a first preset condition or not, wherein the first instruction comprises a battery residual capacity reading instruction, the first preset condition comprises that the battery residual capacity of the electronic equipment is lower than a preset capacity threshold value,
playing the first file in a first mode if the first instruction meets the first predetermined condition, wherein,
decoding the audio data and outputting an audio signal corresponding to the decoded audio data and prohibiting an output of a video signal corresponding to the video data in the first mode,
wherein the prohibiting of outputting the video signal corresponding to the video data includes:
continuing to decode video data from the acquired first file and provide a video signal corresponding to the video data decoded from the first file while playing the acquired first file including the audio data and video data in the first mode, but stopping outputting the video signal corresponding to the video data decoded from the first file,
the method further comprises the following steps:
receiving a second instruction in the first mode; and
judging whether the second instruction meets a second preset condition or not, wherein the second instruction comprises a charging instruction of a user,
if the second instruction meets the second preset condition, playing the first file in a second mode, wherein,
and in the second mode, starting to output a video signal corresponding to the video data continuously decoded from the acquired first file in the first mode so as to realize the synchronization of the audio signal and the video signal.
2. The control method according to claim 1,
the audio data and the video data are associated by a synchronization parameter, wherein the video data corresponding to the decoded audio data can be determined from the decoded audio data and the synchronization parameter.
3. The control method according to claim 1, further comprising:
playing the first file in a second mode before receiving the first instruction.
4. The control method according to claim 3, further comprising:
and if the first instruction meets the first preset condition, switching the play mode of the first file from the second mode to the first mode.
5. An electronic device, comprising:
a display unit configured to output a video signal;
an audio unit configured to output an audio signal; and
a control unit configured to:
acquiring a first file, wherein the first file comprises audio data and video data associated with the audio data, and can be played through an application program of the electronic equipment;
receiving a first instruction; and
judging whether the first instruction meets a first preset condition or not, wherein the first instruction comprises a battery residual capacity reading instruction, the first preset condition comprises that the battery residual capacity of the electronic equipment is lower than a preset capacity threshold value,
playing the first file in a first mode if the first instruction meets the first predetermined condition, wherein,
decoding the audio data and outputting an audio signal corresponding to the decoded audio data and prohibiting an output of a video signal corresponding to the video data in the first mode,
wherein the prohibiting of outputting the video signal corresponding to the video data includes:
continuing to decode video data from the acquired first file and provide a video signal corresponding to the video data decoded from the first file while playing the acquired first file including the audio data and video data in the first mode, but stopping outputting the video signal corresponding to the video data decoded from the first file,
the control unit is further configured to:
receiving a second instruction in the first mode; and
judging whether the second instruction meets a second preset condition or not, wherein the second instruction comprises a charging instruction of a user,
if the second instruction meets the second preset condition, playing the first file in a second mode, wherein,
and in the second mode, starting to output a video signal corresponding to the video data continuously decoded from the acquired first file in the first mode so as to realize the synchronization of the audio signal and the video signal.
6. The electronic device of claim 5,
the audio data is associated with the video data by a synchronization parameter, wherein,
the control unit is further configured to:
determining video data corresponding to the decoded audio data according to the decoded audio data and the synchronization parameter.
7. The electronic device of claim 5, wherein the control unit is further configured to:
playing the first file in a second mode before receiving the first instruction.
8. The electronic device of claim 7, the control unit further configured to:
and if the first instruction meets the first preset condition, switching the play mode of the first file from the second mode to the first mode.
CN201510959123.2A 2015-12-18 2015-12-18 Control method and electronic device Active CN105577947B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510959123.2A CN105577947B (en) 2015-12-18 2015-12-18 Control method and electronic device

Publications (2)

Publication Number Publication Date
CN105577947A CN105577947A (en) 2016-05-11
CN105577947B true CN105577947B (en) 2021-11-16

Family

ID=55887579

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510959123.2A Active CN105577947B (en) 2015-12-18 2015-12-18 Control method and electronic device

Country Status (1)

Country Link
CN (1) CN105577947B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107734376B * 2017-10-16 2019-11-26 Vivo Mobile Communication Co., Ltd. Method and device for playing multimedia data
CN108566589A * 2018-04-26 2018-09-21 BOE Technology Group Co., Ltd. Display device control method, display device, electronic apparatus and storage medium
CN108965927B * 2018-07-25 2021-05-04 Guangzhou DSPPA Audio Co., Ltd. Broadcast control method and system
CN109977244A * 2019-03-31 2019-07-05 Lenovo (Beijing) Co., Ltd. Processing method and electronic device
CN110166820B * 2019-05-10 2021-04-09 Huawei Technologies Co., Ltd. Audio and video playing method, terminal and device
WO2022205793A1 * 2021-03-30 2022-10-06 Hisense Visual Technology Co., Ltd. Display device and device control method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101453655A * 2007-11-30 2009-06-10 Shenzhen Huawei Communication Technologies Co., Ltd. Method, system and device for user-controllable audio and video synchronization adjustment
US7664057B1 * 2004-07-13 2010-02-16 Cisco Technology, Inc. Audio-to-video synchronization system and method for packet-based network video conferencing
CN101873447A * 2009-04-21 2010-10-27 Shenzhen Futaihong Precision Industry Co., Ltd. Electronic device and power-saving method thereof for watching television
CN102572443A * 2010-09-30 2012-07-11 Apple Inc. Techniques for synchronizing audio and video data in an image signal processing system
CN102800341A * 2012-07-02 2012-11-28 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Terminal and multimedia playing method thereof
CN103024490A * 2012-12-26 2013-04-03 Beijing QIYI Century Science and Technology Co., Ltd. Method and device supporting independent playing of audio and video
CN104581320A * 2013-10-16 2015-04-29 ZTE Corporation Method, device and terminal for switching play modes

Similar Documents

Publication Publication Date Title
CN105577947B (en) Control method and electronic device
CN109982102B (en) Interface display method and system for live broadcast room, live broadcast server and anchor terminal
CN111316598B (en) Multi-screen interaction method and equipment
CN109600678B (en) Information display method, device and system, server, terminal and storage medium
CN108419113B (en) Subtitle display method and device
CN110324689B (en) Audio and video synchronous playing method, device, terminal and storage medium
CN109348247B (en) Method and device for determining audio and video playing time stamp and storage medium
CN110297917B (en) Live broadcast method and device, electronic equipment and storage medium
CN114518817B (en) Display method, electronic device and storage medium
CN111093108B (en) Sound and picture synchronization judgment method and device, terminal and computer readable storage medium
CN109729372B (en) Live broadcast room switching method, device, terminal, server and storage medium
CN109157839A Frame rate regulation method, apparatus, storage medium and terminal
CN107896337B (en) Information popularization method and device and storage medium
CN113438552B (en) Refresh rate adjusting method and electronic equipment
CN110996117B (en) Video transcoding method and device, electronic equipment and storage medium
CN110708581B (en) Display device and method for presenting multimedia screen saver information
CN112328941A (en) Application screen projection method based on browser and related device
CN104090709A (en) Picture switching method and device
CN105653165A (en) Method and device for regulating character display
CN110958464A (en) Live broadcast data processing method and device, server, terminal and storage medium
CN114661263A (en) Display method, electronic equipment and storage medium
CN115048012A (en) Data processing method and related device
CN107888975B (en) Video playing method, device and storage medium
KR20160074234A (en) Display apparatus and method for controlling a content output
CN114095769B (en) Live broadcast low-delay processing method of application-level player and display device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant