CN111050214A - Video playing method and electronic equipment - Google Patents

Video playing method and electronic equipment

Info

Publication number
CN111050214A
CN111050214A (application CN201911365147.XA)
Authority
CN
China
Prior art keywords
video
target
playing
thumbnail
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911365147.XA
Other languages
Chinese (zh)
Inventor
杨涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201911365147.XA priority Critical patent/CN111050214A/en
Publication of CN111050214A publication Critical patent/CN111050214A/en
Priority to PCT/CN2020/139514 priority patent/WO2021129818A1/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H04N21/4415Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention provides a video playing method and electronic equipment. The method comprises the following steps: displaying at least one video thumbnail of a target video, wherein the video thumbnail comprises video feature description information of the target video and comprises a target face image; receiving a first input of a user to a target video thumbnail of the at least one video thumbnail; and in response to the first input, playing video content of a target time node, wherein the target time node is a video playing time node associated with the target video thumbnail, and the video content of the target time node comprises the target face image. According to the embodiments of the invention, the efficiency of searching for the target face image in the target video can be improved.

Description

Video playing method and electronic equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a video playing method and an electronic device.
Background
With the rapid development of electronic devices, electronic devices have become an indispensable tool in people's lives and bring great convenience to many aspects of users' lives. For example, people can play videos on electronic devices, and the playing progress of a video can be adjusted by sliding the screen, dragging the progress bar, or touching a function button. However, in practical applications, the accuracy of adjusting the playing progress of a video in these ways is poor; when a user needs to watch a particular part of the video content (e.g., a certain person), the playing progress has to be adjusted repeatedly. Therefore, the efficiency of searching for content in a video is currently low.
Disclosure of Invention
The embodiments of the invention provide a video playing method and electronic equipment, aiming to solve the current problem of low efficiency in searching for content in a video.
In order to solve the above technical problem, the invention is implemented as follows.
In a first aspect, an embodiment of the present invention provides a video playing method applied to an electronic device, including:
displaying at least one video thumbnail of a target video, wherein the video thumbnail comprises video feature description information of the target video, and the video thumbnail comprises a target face image;
receiving a first input of a user to a target video thumbnail of the at least one video thumbnail;
and in response to the first input, playing video content of a target time node, wherein the target time node is a video playing time node associated with the target video thumbnail, and the video content of the target time node comprises the target face image.
In a second aspect, an embodiment of the present invention further provides an electronic device, including:
the display module is used for displaying at least one video thumbnail of a target video, wherein the video thumbnail comprises video feature description information of the target video, and the video thumbnail comprises a target face image;
the first receiving module is used for receiving a first input of a user to a target video thumbnail in the at least one video thumbnail;
and the playing module is used for responding to the first input and playing the video content of a target time node, wherein the target time node is a video playing time node associated with the target video thumbnail, and the video content of the target time node comprises the target face image.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the video playing method when executing the computer program.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the video playing method.
In the embodiment of the invention, at least one video thumbnail of a target video is displayed, wherein the video thumbnail comprises video feature description information of the target video and comprises a target face image; a first input of a user to a target video thumbnail of the at least one video thumbnail is received; and in response to the first input, video content of a target time node is played, wherein the target time node is a video playing time node associated with the target video thumbnail, and the video content of the target time node comprises the target face image. In this way, since the target face image is included in the video thumbnails, the target video can be played from different playing time nodes according to the selected video thumbnail, and the video content corresponding to those playing time nodes comprises the target face image; therefore, the playing progress of the target video does not need to be adjusted repeatedly, and the efficiency of searching for the target face image in the target video is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments will be briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be derived from them by those skilled in the art without inventive effort.
Fig. 1 is a flowchart of a video playing method according to an embodiment of the present invention;
fig. 2 is a second flowchart of a video playing method according to an embodiment of the present invention;
FIG. 3 is a schematic view of a display interface of an electronic device according to an embodiment of the present invention;
fig. 4 is a second schematic view of a display interface of an electronic device according to an embodiment of the invention;
fig. 5 is a third schematic view of a display interface of an electronic device according to an embodiment of the present invention;
FIG. 6 is a block diagram of an electronic device according to an embodiment of the present invention;
FIG. 7 is a second block diagram of an electronic device according to an embodiment of the present invention;
FIG. 8 is a third block diagram of an electronic device according to an embodiment of the present invention;
fig. 9 is a fourth structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a video playing method provided by an embodiment of the present invention and applied to an electronic device. As shown in fig. 1, the method includes the following steps:
step 101, displaying at least one video thumbnail of a target video, wherein the video thumbnail comprises video feature description information of the target video, and the video thumbnail comprises a target face image.
The specific type of the video thumbnail is not limited herein. For example, the video thumbnail may be a still image or a dynamic image; preferably, it is a dynamic image whose content represents the video content of the playing time node associated with the video thumbnail. For instance, the content of the dynamic image may be the most representative piece of the video content of the playing time node associated with the video thumbnail, or it may be content from the beginning, middle, or end of that video content.
In addition, the video thumbnails include video feature description information of the target video, that is, each video thumbnail may include video feature description information of video content of a different play time node in the target video, and the video feature description information may be used to describe the video content of the play time node associated with the video thumbnail, for example: the video feature description information may be summary information of the video content of the play time node associated with the video thumbnail, or the video feature description information may be description information of the content at the beginning, middle or end of the video content of the play time node associated with the video thumbnail.
The target face image is not specifically limited herein, and the target face image may be any one of all face images included in the target video.
And 102, receiving a first input of a user to a target video thumbnail in the at least one video thumbnail.
The specific type of the first input is not limited herein. For example, the first input may be a press input on the target video thumbnail whose press duration exceeds a preset duration; the specific value of the preset duration is likewise not limited and may be, for example, 1 second, 2 seconds, or 3 seconds. Alternatively, the first input may be a sliding input whose trajectory matches a preset trajectory, such as a circle, a rectangle, or a triangle.
The following example illustrates how the target video thumbnail may be determined. During playback of the target video, when a control instruction for pausing the target video is received, the target video is paused and a target interface is displayed; the target interface may include a plurality of video thumbnails, and the user can select the target video thumbnail from them through a touch input or a sliding input. Alternatively, during playback of the target video, when an input of the user on a certain target face image of the target video is received, a plurality of video thumbnails corresponding to that target face image may be displayed. It should be noted that the correspondence between the target face image and its video thumbnails may be preset, or may be obtained by performing face recognition on the face images in the target video after the target face image is determined; the specific manner is not limited herein.
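As an illustration only (not part of the claimed method), the following Kotlin sketch shows one possible way to model the thumbnails on the target interface and filter them by the face image the user selected. All class names, field names, and the list-based layout are assumptions made for this example.

```kotlin
// Hypothetical model of one video thumbnail shown on the target interface.
data class VideoThumbnail(
    val faceId: String,          // identifier of the target face image it contains
    val playTimeNodeSec: Int,    // associated video playing time node, in seconds
    val description: String      // video feature description information
)

// When the user selects a certain target face image, only the thumbnails
// corresponding to that face image are displayed.
fun thumbnailsForFace(all: List<VideoThumbnail>, selectedFaceId: String): List<VideoThumbnail> =
    all.filter { it.faceId == selectedFaceId }

fun main() {
    val thumbs = listOf(
        VideoThumbnail("actorA", 15, "Progress 1"),
        VideoThumbnail("actorB", 30, "Progress 2"),
        VideoThumbnail("actorA", 60, "Progress 3")
    )
    println(thumbnailsForFace(thumbs, "actorA")) // prints the two thumbnails for actor A
}
```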
And 103, responding to the first input, and playing video content of a target time node, wherein the target time node is a video playing time node associated with the target video thumbnail, and the video content of the target time node comprises the target face image.
The target time node is a playing time node in the target video. Of course, its position in the target video is not specifically limited herein. For example, if the total playing time of the target video is 1 minute, the target time node may be the playing time node corresponding to the 5th, 10th, or 15th second, and so on. It should be noted that a playing time node may also be referred to as a playing progress point.
The target time node may be a play start node or a play end node. Of course, the target time node may also include a play start node and a play end node.
For example, when the target time node is a play start node, the electronic device jumps to the play start node associated with the target video thumbnail and starts playing the target video from that node. When the target time node is a play end node, the target video may continue playing from its current playing time node and stop when the play end node is reached. When the target time node includes both a play start node and a play end node, playback jumps to the play start node, plays the target video from there, and stops when the play end node is reached; of course, when the play end node is reached, the target video may instead continue playing, or the video content of the video playing time node associated with the next video thumbnail may be played directly.
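The three cases above can be summarised as choosing a seek position and a stop position for the player. The Kotlin sketch below is one hedged reading of that logic; the PlaybackWindow type and the convention that null means "do not seek" or "do not stop early" are assumptions of this example, not part of the claimed method.

```kotlin
// seekToSec == null: continue from the current playing time node (no jump).
// stopAtSec == null: keep playing past the segment (no early stop).
data class PlaybackWindow(val seekToSec: Int?, val stopAtSec: Int?)

fun resolveWindow(playStartNodeSec: Int?, playEndNodeSec: Int?): PlaybackWindow =
    when {
        // Start and end node: jump to the start node and stop at the end node.
        playStartNodeSec != null && playEndNodeSec != null ->
            PlaybackWindow(playStartNodeSec, playEndNodeSec)
        // Start node only: jump to it and keep playing.
        playStartNodeSec != null -> PlaybackWindow(playStartNodeSec, null)
        // End node only: continue from the current node and stop at the end node.
        playEndNodeSec != null -> PlaybackWindow(null, playEndNodeSec)
        // No node given: leave playback unchanged.
        else -> PlaybackWindow(null, null)
    }
```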
In an embodiment of the present invention, the electronic device may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, or the like.
In the embodiment of the invention, at least one video thumbnail of a target video is displayed, wherein the video thumbnail comprises video feature description information of the target video and comprises a target face image; a first input of a user to a target video thumbnail of the at least one video thumbnail is received; and in response to the first input, video content of a target time node is played, wherein the target time node is a video playing time node associated with the target video thumbnail, and the video content of the target time node comprises the target face image. In this way, since the target face image is included in the video thumbnails, the target video can be played from different playing time nodes according to the selected video thumbnail, and the video content corresponding to those playing time nodes comprises the target face image; therefore, the playing progress of the target video does not need to be adjusted repeatedly, and the efficiency of searching for the target face image in the target video is improved.
Referring to fig. 2, fig. 2 is a flowchart of another video playing method according to an embodiment of the present invention. The main difference between this embodiment and the previous one is that a first corresponding relationship exists among the target face image, the video thumbnail, and the video playing time node. As shown in fig. 2, the method comprises the following steps:
step 201, displaying at least one video thumbnail of a target video, wherein the video thumbnail comprises video feature description information of the target video, and the video thumbnail comprises a target face image.
Step 201 may refer to corresponding descriptions of step 101 in the above embodiments, and is not described herein again.
Referring to fig. 3, a plurality of video thumbnails of the target video may be displayed, where the plurality of video thumbnails are a progress 1 thumbnail 301, a progress 2 thumbnail 302, a progress 3 thumbnail 303, a progress 4 thumbnail 304, a progress 5 thumbnail 305, a progress 6 thumbnail 306, a progress 7 thumbnail 307, and a progress 8 thumbnail 308 in fig. 3, and each video thumbnail may include video feature description information, such as text information of the progress 1, the progress 2, the progress 3, the progress 4, the progress 5, the progress 6, the progress 7, and the progress 8 in fig. 3.
Optionally, the first corresponding relationship is stored in a preset index information table, where the index information table is generated in the electronic device in advance, or the index information table is obtained from a server.
When the index information table is generated in advance in the electronic device, face recognition may be performed automatically on the first face image in the target video when the target video is displayed in a preview interface of the electronic device, so that the first corresponding relationship is established among the target face image, the video thumbnail, and the video playing time node; the index information table can then be generated and the first corresponding relationship stored in it.
Of course, the index information table may also be generated when the electronic device displays the target video: a first face image in the target video is determined according to an input instruction of the user, face recognition is performed on the first face image, the first corresponding relationship among the first face image, the video thumbnail, and the video playing time node is established, and the index information table is generated with the first corresponding relationship stored in it. The target face image is one of the first face images.
When the index information table is obtained from a server, the server may also send the target video to the electronic device, and the specific manner in which the server sends the index information table to the electronic device is not limited herein. For example, the index information table may be associated with the target video, and the server may transmit the index information table together with the target video when sending the target video to the electronic device. Of course, the target video may also be transmitted first and the index information table transmitted separately afterwards.
In the embodiment of the invention, the index information table can be generated in the electronic equipment in advance and can also be acquired from the server, so that the flexibility of acquiring the index information table is increased.
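A minimal Kotlin sketch of how the index information table and the first corresponding relationship could be represented is given below, whether the table is generated on the device or filled from data obtained from a server. The class, the map-based layout, and the lookup methods are illustrative assumptions only and are not prescribed by the patent.

```kotlin
// One row of the first corresponding relationship:
// (video thumbnail, video playing time node) for a given face image.
data class IndexEntry(val thumbnailId: String, val playTimeNodeSec: Int)

class IndexInformationTable {
    private val entriesByFace = mutableMapOf<String, MutableList<IndexEntry>>()

    // Used when the table is generated in the electronic device in advance
    // (e.g. after face recognition over the target video) or when loading
    // entries obtained from a server.
    fun add(faceId: String, entry: IndexEntry) {
        entriesByFace.getOrPut(faceId) { mutableListOf() }.add(entry)
    }

    // All thumbnails associated with a face image (used to display them).
    fun thumbnailsFor(faceId: String): List<IndexEntry> = entriesByFace[faceId].orEmpty()

    // Step 203: obtain the target time node corresponding to the target video
    // thumbnail based on the first corresponding relationship.
    fun timeNodeFor(faceId: String, thumbnailId: String): Int? =
        entriesByFace[faceId]?.firstOrNull { it.thumbnailId == thumbnailId }?.playTimeNodeSec
}
```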
Optionally, before displaying at least one video thumbnail of the target video, the method further includes:
receiving a second input of a first face image in the target video from a user;
responding to the second input, performing face recognition on a first face image in the target video to obtain a first video thumbnail comprising the first face image and a first video playing time node corresponding to the first video thumbnail;
establishing and storing a first corresponding relation among the first face image, the first video thumbnail and the first video playing time node;
wherein the at least one video thumbnail is the first video thumbnail. Correspondingly, the target face image may be one of the first face images, and the video playing time node associated with the target video thumbnail may be one of the first video playing time nodes.
The first face image may be any face image in the target video, for example the face image of actor A or actor B; the target face image may be one of the plurality of first face images, for example the face image of actor A.
It should be noted that, when a preset input of the user on the target video is received, a face image selection interface may be displayed; a plurality of face images may be shown on this interface, and the name of each face image may be displayed below it. For example, referring to fig. 4, actor 1 (401), actor 2 (402), actor 3 (403), actor 4 (404), actor 5 (405), and actor 6 (406) may be displayed on the face image selection interface, with the name of each actor displayed below the corresponding face image. In this way, when a second input of the user on the face image of a certain actor (i.e., the first face image) is received, face recognition may be performed on that face image to obtain the first video thumbnails that include the first face image and the video playing time nodes corresponding to those thumbnails, and the first corresponding relationship among the first face image, the first video thumbnails, and the video playing time nodes may be established and stored.
Since the first face image may be any face image in the target video, a first video thumbnail and a first video playing time node having the first corresponding relationship may be stored for each face image in the target video.
Of course, each first face image may correspond to a plurality of first video thumbnails and a plurality of first video playing time nodes, and the number of the first video thumbnails and the number of the first playing time nodes corresponding to any two first face images may be the same or may not be the same.
In the embodiment of the invention, the first corresponding relationship among the first face image, the first video thumbnail, and the first video playing time node is established and stored by the electronic device itself; compared with obtaining the first corresponding relationship from a server, this avoids acquiring a wrong first corresponding relationship by mistake and improves the accuracy of the first corresponding relationship.
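The following Kotlin sketch outlines how the first corresponding relationship could be established in response to the second input, reusing the IndexInformationTable sketch given above. The recognize parameter stands in for the face-recognition step; its signature and the RecognizedAppearance type are assumptions for illustration, not an API defined by the patent.

```kotlin
// One appearance of the first face image found by face recognition: the video
// thumbnail that shows it and the corresponding first video playing time node.
data class RecognizedAppearance(val thumbnailId: String, val timeNodeSec: Int)

// Second input received -> run face recognition on the first face image ->
// establish and store the first corresponding relationship.
fun buildFirstCorrespondence(
    table: IndexInformationTable,
    firstFaceId: String,
    recognize: (faceId: String) -> List<RecognizedAppearance>   // assumed recognition step
) {
    for (appearance in recognize(firstFaceId)) {
        table.add(firstFaceId, IndexEntry(appearance.thumbnailId, appearance.timeNodeSec))
    }
}
```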
Step 202, receiving a first input of a user to a target video thumbnail in the at least one video thumbnail.
The specific type of the first input is not limited herein. For example, the first input may be a press input on the target video thumbnail whose press duration exceeds a preset duration; the specific value of the preset duration is likewise not limited and may be, for example, 1 second, 2 seconds, or 3 seconds. Alternatively, the first input may be a sliding input whose trajectory matches a preset trajectory, such as a circle, a rectangle, or a triangle.
Step 203, obtaining the target time node corresponding to the target video thumbnail based on the first corresponding relationship.
The first corresponding relationship may be information generated in advance in the electronic device and stored there, or information obtained by the electronic device from a server; the specific manner is not limited herein.
And step 204, playing the video content of the target time node.
Step 204 may refer to corresponding descriptions in step 103 in the above embodiments, and is not described herein again.
Optionally, the target time node is a play start node, a play time period is also acquired in the electronic device, and a second corresponding relationship exists between the play start node and the play time period;
the playing the video content of the target time node includes:
playing the target video with a first time length from the playing start node based on the second corresponding relation; wherein the first duration is the playing time period.
It should be noted that the playing time periods of the video contents corresponding to each play start node (i.e., each video thumbnail) may be the same or different. For example, the playing time period of the video content corresponding to the first play start node is from the 15th second to the 25th second of the target video, the playing time period corresponding to the second play start node is from the 30th second to the 55th second of the target video, and the playing time period corresponding to the third play start node is from the 60th second (1 minute) to the 95th second (1 minute 35 seconds) of the target video.
After the target video has been played for the first duration from the play start node, playback may be stopped; of course, playback may also continue, or the electronic device may jump to another play start node and play the target video for a second duration.
In addition, referring to Table 1, each video thumbnail may include a target face image, where the target face image may be, for example, the face image of actor A or the face image of actor B. A second corresponding relationship may exist between the target face image and a play start node and a play time period (i.e., the play duration in Table 1). It should be noted that the second corresponding relationship may also be stored in the index information table.
TABLE 1 (rendered as an image in the original publication; not reproduced here)
In the embodiment of the invention, because the electronic device also acquires the playing time period, the target video can be played from the play start node and continue playing throughout the playing time period; this adds a playing mode for the target video and makes its playback more flexible.
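A small Kotlin sketch of the second corresponding relationship, using the example time periods quoted above, is shown below. The map-based representation and the helper function are assumptions for illustration; Table 1 in the original pairs each target face image with a play start node and a play duration in a similar way.

```kotlin
// Second corresponding relationship: play start node -> playing time period
// (play duration, in seconds). Values follow the example above:
// 15 s..25 s, 30 s..55 s, 60 s..95 s.
val playDurationByStartNode: Map<Int, Int> = mapOf(
    15 to 10,
    30 to 25,
    60 to 35
)

// Returns the segment (start, stop) to play for a given play start node, or
// null when no playing time period is associated with that node.
fun segmentFor(playStartNodeSec: Int): Pair<Int, Int>? =
    playDurationByStartNode[playStartNodeSec]?.let { d -> playStartNodeSec to (playStartNodeSec + d) }
```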
Optionally, the video thumbnail includes at least one frame of video image in the playing time period.
The type of the video thumbnail is not specifically limited. For example, the video thumbnail may be a static image or a dynamic image taken from the playing time period, or it may be several adjacent frames of video images.
In the embodiment of the present invention, the video thumbnail includes at least one frame of video image from the playing time period, so the video thumbnail can generally represent the playing content in that period; when the user sees the video thumbnail, it serves as a preview of that content, allowing the user to decide whether to play the target video from the video playing time node corresponding to the video thumbnail.
Optionally, the video feature description information is description information of corresponding playing content in the playing time period.
The video feature description information may also be referred to as a plot summary (scenario brief). For example, referring to fig. 5, four video thumbnails may be displayed, namely a first video thumbnail 501, a second video thumbnail 502, a third video thumbnail 503, and a fourth video thumbnail 504, and each video thumbnail may include information such as the video feature description information (i.e., the plot summary), a face image, and a playing time period.
In addition, there may also be a corresponding relationship between the video feature description information and the play start node and the play time period, with the video feature description information stored in the index information table, see Table 2. Of course, the video playing time node may also include a play start node and a play end node, see Table 3.
TABLE 2 (rendered as an image in the original publication; not reproduced here)
TABLE 3 (rendered as an image in the original publication; not reproduced here)
In the embodiment of the invention, because the video feature description information describes the corresponding playing content in the playing time period, when the user sees the video thumbnail it serves as a preview of the playing content in that period, and the user can choose whether to play the target video from the video playing time node corresponding to the video thumbnail.
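For completeness, the sketch below models an index entry in the style of Tables 2 and 3, which are only reproduced as images in the original publication: each entry carries the video feature description information (plot summary) together with a play start node and, in the Table 3 variant, a play end node. All field names are illustrative assumptions.

```kotlin
// Index entry carrying the video feature description information together with
// the playing time node(s), as suggested by Tables 2 and 3.
data class DescribedIndexEntry(
    val faceId: String,             // target face image
    val thumbnailId: String,        // video thumbnail
    val description: String,        // video feature description information (plot summary)
    val playStartNodeSec: Int,      // play start node
    val playEndNodeSec: Int? = null // play end node; null when only a start node is stored
)
```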
In the embodiment of the present invention, through steps 201 to 204, since the first corresponding relationship exists among the target face image, the video thumbnail, and the video playing time node, determining the video playing time node according to the video thumbnail and the first corresponding relationship improves both the accuracy and the speed of that determination.
Referring to fig. 6, fig. 6 is a structural diagram of an electronic device according to an embodiment of the present invention, which can implement the details of the video playing method in the embodiments shown in fig. 1 and fig. 2 and achieve the same effect. As shown in fig. 6, the electronic device 600 includes:
the display module 601 is configured to display at least one video thumbnail of a target video, where the video thumbnail includes video feature description information of the target video, and the video thumbnail includes a target face image;
a first receiving module 602, configured to receive a first input of a target video thumbnail from the at least one video thumbnail from a user;
a playing module 603, configured to respond to the first input, and play video content of a target time node, where the target time node is a video playing time node associated with the target video thumbnail, and the video content of the target time node includes the target face image.
Optionally, referring to fig. 7, a first corresponding relationship exists among the target face image, the video thumbnail and the video playing time node;
the playing module 603 includes:
an obtaining sub-module 6031, configured to obtain the target time node corresponding to the target video thumbnail based on the first corresponding relationship;
and the playing sub-module 6032 is configured to play the video content of the target time node.
Optionally, the first corresponding relationship is stored in a preset index information table, where the index information table is generated in the electronic device in advance, or the index information table is obtained from a server.
Optionally, referring to fig. 8, the electronic device 600 further includes:
a second receiving module 604, configured to receive a second input of the first face image in the target video from the user;
a face recognition module 605, configured to perform face recognition on a first face image in the target video in response to the second input, to obtain a first video thumbnail including the first face image and a video playing time node corresponding to the first video thumbnail;
an establishing module 606, configured to establish and store a first corresponding relationship among the first face image, the first video thumbnail, and the video playing time node.
Optionally, the target time node is a play start node, a play time period is also acquired in the electronic device, and a second corresponding relationship exists between the play start node and the play time period;
the playing module 603 is further configured to: playing the target video with a first time length from the playing start node based on the second corresponding relation; wherein the first duration is the playing time period.
Optionally, the video thumbnail includes at least one frame of video image in the playing time period.
Optionally, the video feature description information is description information of corresponding playing content in the playing time period.
The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 2, which is not described here again to avoid repetition. In the embodiment of the invention, since the video thumbnail comprises the target face image, the target video can be played from different playing time nodes according to the selected video thumbnail, and the video content corresponding to those playing time nodes comprises the target face image; therefore, the playing progress of the target video does not need to be adjusted repeatedly, and the efficiency of searching for the target face image in the target video is improved.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device for implementing various embodiments of the present invention. The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, a processor 910, and a power supply 911. Those skilled in the art will appreciate that the electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than shown, or combine some components, or arrange components differently. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A display unit 906 configured to display at least one video thumbnail of a target video, where the video thumbnail includes video feature description information of the target video, and the video thumbnail includes a target face image;
a user input unit 907 for receiving a first input of a user to a target video thumbnail among the at least one video thumbnail;
a processor 910, configured to respond to the first input, to play a video content of a target time node, where the target time node is a video play time node associated with the target video thumbnail, and the video content of the target time node includes the target face image.
Optionally, a first corresponding relationship exists between the target face image, the video thumbnail and the video playing time node;
the playing back of the video content of the target time node in response to the first input performed by processor 910 includes:
acquiring the target time node corresponding to the target video thumbnail based on the first corresponding relation; and playing the video content of the target time node.
Optionally, the first corresponding relationship is stored in a preset index information table, where the index information table is generated in the electronic device in advance, or the index information table is obtained from a server.
Optionally, the user input unit 907 is further configured to receive a second input of the first face image in the target video from the user;
processor 910 is further configured to perform face recognition on a first face image in the target video in response to the second input, so as to obtain a first video thumbnail including the first face image and a video playing time node corresponding to the first video thumbnail; and establishing and storing a first corresponding relation among the first face image, the first video thumbnail and the video playing time node.
Optionally, the target time node is a play start node, a play time period is also acquired in the electronic device, and a second corresponding relationship exists between the play start node and the play time period;
the playing of the video content of the target time node performed by processor 910 includes:
playing the target video with a first time length from the playing start node based on the second corresponding relation; wherein the first duration is the playing time period.
Optionally, the video thumbnail includes at least one frame of video image in the playing time period.
Optionally, the video feature description information is description information of corresponding playing content in the playing time period.
In the embodiment of the invention, since the video thumbnail comprises the target face image, the target video can be played from different playing time nodes according to the selected video thumbnail, and the video content corresponding to those playing time nodes comprises the target face image; therefore, the playing progress of the target video does not need to be adjusted repeatedly, and the efficiency of searching for the target face image in the target video is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 901 may be used for receiving and sending signals during a message transmission and reception process or a call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 910; in addition, the uplink data is transmitted to the base station. Generally, the radio frequency unit 901 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 901 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 902, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 903 may convert audio data received by the radio frequency unit 901 or the network module 902 or stored in the memory 909 into an audio signal and output as sound. Also, the audio output unit 903 may provide audio output related to a specific function performed by the electronic device 900 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 903 includes a speaker, a buzzer, a receiver, and the like.
The input unit 904 is used to receive audio or video signals. The input Unit 904 may include a Graphics Processing Unit (GPU) 9041 and a microphone 9042, and the Graphics processor 9041 processes image data of a still picture or video obtained by an image capturing device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 906. The image frames processed by the graphic processor 9041 may be stored in the memory 909 (or other storage medium) or transmitted via the radio frequency unit 901 or the network module 902. The microphone 9042 can receive sounds and can process such sounds into audio data. The processed audio data may be converted into a format output transmittable to a mobile communication base station via the radio frequency unit 901 in case of the phone call mode.
The electronic device 900 also includes at least one sensor 905, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 9061 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 9061 and/or the backlight when the electronic device 900 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 905 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described in detail herein.
The display unit 906 is used to display information input by the user or information provided to the user. The Display unit 906 may include a Display panel 9061, and the Display panel 9061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 907 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 907 includes a touch panel 9071 and other input devices 9072. The touch panel 9071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 9071 (e.g., operations by a user on or near the touch panel 9071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 9071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 910, receives a command from the processor 910, and executes the command. In addition, the touch panel 9071 may be implemented by using various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The user input unit 907 may include other input devices 9072 in addition to the touch panel 9071. Specifically, the other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key, a switch key, and the like), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 9071 may be overlaid on the display panel 9061, and when the touch panel 9071 detects a touch operation on or near the touch panel 9071, the touch panel is transmitted to the processor 910 to determine the type of the touch event, and then the processor 910 provides a corresponding visual output on the display panel 9061 according to the type of the touch event. Although in fig. 9, the touch panel 9071 and the display panel 9061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 9071 and the display panel 9061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 908 is an interface for connecting an external device to the electronic apparatus 900. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 908 may be used to receive input from external devices (e.g., data information, power, etc.) and transmit the received input to one or more elements within the electronic device 900 or may be used to transmit data between the electronic device 900 and external devices.
The memory 909 may be used to store software programs as well as various data. The memory 909 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 909 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 910 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 909 and calling data stored in the memory 909, thereby performing overall monitoring of the electronic device. Processor 910 may include one or more processing units; preferably, the processor 910 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 910.
The electronic device 900 may further include a power supply 911 (e.g., a battery) for supplying power to various components, and preferably, the power supply 911 may be logically connected to the processor 910 through a power management system, so as to manage charging, discharging, and power consumption management functions through the power management system.
In addition, the electronic device 900 includes some functional modules that are not shown, and thus are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 910, a memory 909, and a computer program that is stored in the memory 909 and can be run on the processor 910, and when the computer program is executed by the processor 910, the processes of the above-mentioned embodiment of the video playing method are implemented, and the same technical effect can be achieved, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned video playing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (16)

1. A video playing method is applied to electronic equipment and is characterized by comprising the following steps:
displaying at least one video thumbnail of a target video, wherein the video thumbnail comprises video feature description information of the target video, and the video thumbnail comprises a target face image;
receiving a first input of a user to a target video thumbnail of the at least one video thumbnail;
and in response to the first input, playing video content of a target time node, wherein the target time node is a video playing time node associated with the target video thumbnail, and the video content of the target time node comprises the target face image.
2. The method of claim 1, wherein a first correspondence exists between the target face image, the video thumbnail, and the video play time node;
the playing the video content of the target time node in response to the first input comprises:
acquiring the target time node corresponding to the target video thumbnail based on the first corresponding relation;
and playing the video content of the target time node.
3. The method according to claim 2, wherein the first correspondence is stored in a preset index information table, the index information table is generated in the electronic device in advance, or the index information table is obtained from a server.
4. The method of claim 2, wherein prior to displaying at least one video thumbnail of a target video, the method further comprises:
receiving a second input of a first face image in the target video from a user;
responding to the second input, performing face recognition on a first face image in the target video to obtain a first video thumbnail comprising the first face image and a first video playing time node corresponding to the first video thumbnail;
establishing and storing a first corresponding relation among the first face image, the first video thumbnail and the first video playing time node;
wherein the at least one video thumbnail is the first video thumbnail.
5. The method according to claim 2, wherein the target time node is a play start node, the electronic device further obtains a play time period, and a second corresponding relationship exists between the play start node and the play time period;
the playing the video content of the target time node includes:
playing the target video with a first time length from the playing start node based on the second corresponding relation; wherein the first duration is the playing time period.
6. The method of claim 5, wherein the video thumbnail comprises at least one frame of video image within the playback time period.
7. The method according to claim 5, wherein the video feature description information is description information of corresponding playing content in the playing time period.
8. An electronic device, comprising:
the display module is used for displaying at least one video thumbnail of a target video, wherein the video thumbnail comprises video feature description information of the target video, and the video thumbnail comprises a target face image;
the first receiving module is used for receiving a first input of a user to a target video thumbnail in the at least one video thumbnail;
and the playing module is used for responding to the first input and playing the video content of a target time node, wherein the target time node is a video playing time node associated with the target video thumbnail, and the video content of the target time node comprises the target face image.
9. The electronic device of claim 8, wherein a first correspondence exists between the target face image, the video thumbnail, and the video play time node;
the playing module comprises:
the obtaining sub-module is used for obtaining the target time node corresponding to the target video thumbnail based on the first corresponding relation;
and the playing submodule is used for playing the video content of the target time node.
10. The electronic device according to claim 9, wherein the first correspondence is stored in a preset index information table, and the index information table is generated in the electronic device in advance, or is obtained from a server.
11. The electronic device of claim 9, further comprising:
the second receiving module is used for receiving second input of the first face image in the target video from the user;
the face recognition module is used for responding to the second input and carrying out face recognition on a first face image in the target video to obtain a first video thumbnail comprising the first face image and a first video playing time node corresponding to the first video thumbnail;
the establishing module is used for establishing and storing a first corresponding relation among the first face image, the first video thumbnail and the first video playing time node;
wherein the at least one video thumbnail is the first video thumbnail.
12. The electronic device according to claim 9, wherein the target time node is a play start node, a play time period is obtained in the electronic device, and a second correspondence relationship exists between the play start node and the play time period;
the play module is further configured to: playing the target video with a first time length from the playing start node based on the second corresponding relation; wherein the first duration is the playing time period.
13. The electronic device of claim 12, wherein the video thumbnail comprises at least one frame of video image within the playback time period.
14. The electronic device according to claim 12, wherein the video feature description information is description information of corresponding playing content in the playing time period.
15. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, the processor implementing the steps in the video playback method according to any of claims 1-7 when executing the computer program.
16. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the video playback method according to any one of claims 1 to 7.
CN201911365147.XA 2019-12-26 2019-12-26 Video playing method and electronic equipment Pending CN111050214A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911365147.XA CN111050214A (en) 2019-12-26 2019-12-26 Video playing method and electronic equipment
PCT/CN2020/139514 WO2021129818A1 (en) 2019-12-26 2020-12-25 Video playback method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911365147.XA CN111050214A (en) 2019-12-26 2019-12-26 Video playing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN111050214A (en) 2020-04-21

Family

ID=70240114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911365147.XA Pending CN111050214A (en) 2019-12-26 2019-12-26 Video playing method and electronic equipment

Country Status (2)

Country Link
CN (1) CN111050214A (en)
WO (1) WO2021129818A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112929748A (en) * 2021-01-22 2021-06-08 维沃移动通信(杭州)有限公司 Video processing method, video processing device, electronic equipment and medium
WO2021129818A1 (en) * 2019-12-26 2021-07-01 维沃移动通信有限公司 Video playback method and electronic device
WO2024001768A1 (en) * 2022-06-30 2024-01-04 中兴通讯股份有限公司 Video playback method and device, and computer readable medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140089806A1 (en) * 2012-09-25 2014-03-27 John C. Weast Techniques for enhanced content seek
CN104142799A (en) * 2013-05-10 2014-11-12 Lg电子株式会社 Mobile terminal and controlling method thereof
CN104185073A (en) * 2014-08-04 2014-12-03 北京奇虎科技有限公司 Method and client for playing video by selecting corresponding video progress through picture
CN104394422A (en) * 2014-11-12 2015-03-04 华为软件技术有限公司 Video segmentation point acquisition method and device
CN104995639A (en) * 2013-10-30 2015-10-21 宇龙计算机通信科技(深圳)有限公司 Terminal and method for managing video file
CN106851407A (en) * 2017-01-24 2017-06-13 维沃移动通信有限公司 A kind of control method and terminal of video playback progress
CN108228776A (en) * 2017-12-28 2018-06-29 广东欧珀移动通信有限公司 Data processing method, device, storage medium and electronic equipment
CN110557683A (en) * 2019-09-19 2019-12-10 维沃移动通信有限公司 Video playing control method and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016019841A1 (en) * 2014-08-04 2016-02-11 北京奇虎科技有限公司 Method and client for performing video fixed-point playing by means of picture
CN111050214A (en) * 2019-12-26 2020-04-21 维沃移动通信有限公司 Video playing method and electronic equipment


Also Published As

Publication number Publication date
WO2021129818A1 (en) 2021-07-01

Similar Documents

Publication Publication Date Title
CN110557566B (en) Video shooting method and electronic equipment
US11675442B2 (en) Image processing method and flexible-screen terminal
CN109388304B (en) Screen capturing method and terminal equipment
CN109078319B (en) Game interface display method and terminal
CN110784771B (en) Video sharing method and electronic equipment
CN109240577B (en) Screen capturing method and terminal
CN108279948B (en) Application program starting method and mobile terminal
CN110557683B (en) Video playing control method and electronic equipment
CN111666009B (en) Interface display method and electronic equipment
CN109710349B (en) Screen capturing method and mobile terminal
CN107728923B (en) Operation processing method and mobile terminal
CN107734170B (en) Notification message processing method, mobile terminal and wearable device
CN109144703B (en) Multitask processing method and terminal equipment thereof
CN109348019B (en) Display method and device
CN110855921B (en) Video recording control method and electronic equipment
WO2021129818A1 (en) Video playback method and electronic device
CN109542321B (en) Control method and device for screen display content
CN111147919A (en) Play adjustment method, electronic equipment and computer readable storage medium
CN110333803B (en) Multimedia object selection method and terminal equipment
CN109672845B (en) Video call method and device and mobile terminal
CN111061446A (en) Display method and electronic equipment
CN110851219A (en) Information processing method and electronic equipment
CN108536513B (en) Picture display direction adjusting method and mobile terminal
CN110865752A (en) Photo viewing method and electronic equipment
CN111526248B (en) Audio output mode switching method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200421