CN109471955B - Video clip positioning method, computing device and storage medium - Google Patents


Info

Publication number: CN109471955B (application CN201811337627.0A)
Authority: CN (China)
Prior art keywords: segment, target, identifier, video frame, video
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN109471955A
Inventor: 张平
Current assignee: Guangdong Genius Technology Co Ltd
Original assignee: Guangdong Genius Technology Co Ltd
Application filed by Guangdong Genius Technology Co Ltd
Priority to CN201811337627.0A
Publication of CN109471955A, followed by grant and publication of CN109471955B

Abstract

The invention relates to the field of computer technology and provides a video clip positioning method, a computing device, and a storage medium. The method comprises the following steps: after acquiring a user's segment positioning request for the current video, identifying a positioning instruction identifier and a target segment identifier from the segment positioning request, and locating the target segment corresponding to the target segment identifier in the current video according to the positioning instruction identifier. The target segment in the current video can thus be located via the target segment identifier in the segment positioning request and then marked or saved, which avoids the omissions of manual note-taking and improves efficiency.

Description

Video clip positioning method, computing device and storage medium
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a video clip positioning method, a computing device and a storage medium.
Background
At present, video-based teaching is widespread, and while watching a video, students often need to record the key points, difficult points, or classic question types explained by the teacher for later review and consolidation. The conventional approach is for students to take notes while watching; this manual method easily omits key learning points, is inefficient, and is particularly ill-suited to remote live-streamed courses.
Disclosure of Invention
The invention aims to provide a video clip positioning method, a computing device, and a storage medium, so as to solve the prior-art problems that manual recording is prone to omissions and is inefficient.
In one aspect, the present invention provides a method for positioning a video segment, the method comprising the following steps:
acquiring a segment positioning request of a user for a current video, wherein the segment positioning request comprises a target segment identifier and a positioning instruction identifier;
identifying the positioning instruction identifier and the target segment identifier from the segment positioning request;
and positioning a target segment corresponding to the target segment identifier from the current video according to the positioning instruction identifier.
Further, according to the positioning instruction identifier, positioning a target segment corresponding to the target segment identifier from the current video, specifically including the following steps:
and searching target segment time information corresponding to the target segment identification according to the preset segment identification and the preset segment time information which are associated in the current video so as to position the target segment.
Further, according to the positioning instruction identifier, positioning a target segment corresponding to the target segment identifier from the current video, specifically including the following steps:
identifying a frame identifier used for indicating the content of a video frame from the specified video frame of the current video;
searching, from the specified video frames, for the target video frames corresponding to frame identifiers matching the target segment identifier to obtain a target video frame set;
and positioning the target segment according to the target video frame set.
Further, identifying a frame identifier for indicating the content of the video frame from the specified video frame of the current video specifically includes the following steps:
when the current video is in a played state, obtaining the request time of the segment positioning request;
determining, from the current video, a first video frame group that has been played before the request time and/or a second video frame group that is to be played after the request time, the play start time of the first video frame group being separated from the request time by a first predetermined time period, the play end time of the second video frame group being separated from the request time by a second predetermined time period, the specified video frame being located in the first video frame group and/or the second video frame group;
the frame identification is identified from the specified video frame.
Further, searching a target video frame corresponding to the frame identifier matched with the target segment identifier from the specified video frame to obtain a target video frame set, specifically including the following steps:
determining a first approximation degree of a frame identifier and a target segment identifier of the specified video frame and a change condition of a second approximation degree of the frame identifier contained in each of the specified video frames in sequence;
and determining the target video frame from the specified video frames according to the first approximation degree and the change condition to obtain the target video frame set.
Further, the positioning the target segment according to the target video frame set specifically includes the following steps:
determining segment time information of the target segment according to the duration of the target video frame set;
and positioning the target segment according to the segment time information.
Further, after the target segment corresponding to the target segment identifier is located from the current video according to the positioning instruction identifier, the method further includes the following steps:
and marking, storing or downloading the target segment.
Further, the segment positioning request is a voice request or a mouse/keyboard operation request.
In another aspect, the present invention further provides a computing device, which includes a memory and a processor, and the processor implements the steps in the method when executing the computer program stored in the memory.
In another aspect, the present invention also provides a computer-readable storage medium, which stores a computer program, which when executed by a processor implements the steps in the method as described above.
According to the method and the device, after a fragment positioning request of a user for a current video is obtained, a positioning instruction identifier and a target fragment identifier are identified from the fragment positioning request, and a target fragment corresponding to the target fragment identifier is positioned from the current video according to the positioning instruction identifier. Therefore, the target segment in the current video can be positioned through the target segment identification in the segment positioning request, and the positioned target segment can be marked or stored, so that the problem that manual recording is easy to miss recording is avoided, and the efficiency is improved.
Drawings
Fig. 1 is a flowchart illustrating an implementation of a method for positioning a video segment according to an embodiment of the present invention;
fig. 2 is a schematic diagram of the preset segment identifier and the segment start and end times in the current video a according to the second embodiment of the present invention;
FIG. 3 is a detailed flowchart of step S103 in the third embodiment of the present invention;
FIG. 4 is a diagram illustrating frame identification in a third embodiment of the present invention;
FIG. 5 is a diagram illustrating frame identification in another case of the third embodiment of the present invention;
FIG. 6 is a flowchart of a step S301 in the fourth embodiment of the present invention;
FIG. 7 is a diagram illustrating a first video frame set and a second video frame set according to a fourth embodiment of the present invention;
FIG. 8 is a flowchart of a step S302 according to a fifth embodiment of the present invention;
FIG. 9 is a diagram illustrating frame identification in a designated video frame according to a fifth embodiment of the present invention;
FIG. 10 is a flowchart of a refinement of step S303 in the sixth embodiment of the present invention;
fig. 11 is a schematic structural diagram of a computing device according to a seventh embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following detailed description of specific implementations of the present invention is provided in conjunction with specific embodiments:
the first embodiment is as follows:
Fig. 1 shows the implementation flow of the method for positioning a video segment provided by the first embodiment of the present invention. For convenience of description, only the parts relevant to this embodiment are shown, detailed as follows:
in step S101, a segment positioning request of a user for a current video is obtained, where the segment positioning request includes a target segment identifier and a positioning instruction identifier.
In this embodiment, the current video may be an offline video or an online video; the current video can be a teaching video, a singing video and the like.
The user's segment positioning request can be acquired through input/output modules such as a microphone or a mouse/keyboard; correspondingly, the acquired segment positioning request is a voice request, a mouse/keyboard operation request, or the like.
The segment positioning request is usually embodied as a segment marking request, a segment saving request, or a segment downloading request. A segment marking request indicates that the user wants to locate and mark a certain locatable segment of the current video; a segment saving request indicates that the user wants to locate and save such a segment; a segment downloading request indicates that the user wants to locate, download, and save it.
The positioning instruction identifier included in the segment positioning request indicates that the user has issued a segment positioning request; once the computing device recognizes it, the corresponding operation can be performed. For example, if the segment positioning request is recognized as the sentence "help me position the content of Example 3", the positioning instruction identifier is the word "position", which the computing device can recognize, and it then performs the subsequent target segment positioning. If the request is recognized as "help me save the content of the first exercise", the positioning instruction identifier is the word "save", and the computing device performs the subsequent target segment positioning and saving. If the request is recognized as "help me download the content of track four", the positioning instruction identifier is the word "download", and the computing device performs the subsequent target segment positioning, downloading, and saving.
The target segment identifier included in the segment positioning request may be one of the phrases "Example 3", "the first exercise", or "track four" from the examples above, or a phrase containing them, such as "Example 3 and the subsequent examples" or "the first to fourth exercises". By recognizing it, the computing device can perform the corresponding target segment positioning and processing such as marking, saving, and/or downloading.
Because the segment positioning request contains both the target segment identifier and the positioning instruction identifier, the human-machine dialogue feels more natural and the user experience is better.
In step S102, a positioning instruction identifier and a target segment identifier are identified from the segment positioning request.
In this embodiment, the positioning instruction identifier and the target segment identifier may be screened out of the segment positioning request by conventional word-matching recognition techniques, or recognized from it by machine learning techniques.
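As an illustration of the word-matching variant of step S102, the extraction might be sketched as below; the keyword list, the regular expression, and the function name are assumptions for illustration and are not taken from the patent.

```python
import re

# Illustrative instruction keywords corresponding to the positioning,
# saving, and downloading requests described in the embodiments.
INSTRUCTION_KEYWORDS = ("position", "save", "download")

def parse_segment_request(request_text):
    """Return (positioning_instruction_identifier, target_segment_identifier).

    Either element is None if it cannot be found in the request text.
    """
    text = request_text.lower()
    # First matching keyword is taken as the positioning instruction identifier.
    instruction = next((kw for kw in INSTRUCTION_KEYWORDS if kw in text), None)
    # A label plus a number, e.g. "example 3" or "track 4", is taken as the
    # target segment identifier (an assumed pattern for this sketch).
    match = re.search(r"(example|exercise|track)\s*(\d+)", text)
    target = f"{match.group(1)} {match.group(2)}" if match else None
    return instruction, target
```

For instance, the request "Help me position the content of Example 3" would yield the instruction "position" and the target "example 3".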
In step S103, according to the positioning instruction identifier, a target segment corresponding to the target segment identifier is positioned from the current video.
In this embodiment, once the computing device recognizes the positioning instruction identifier, i.e. it has received a positioning processing instruction, the target segment corresponding to the target segment identifier can be located in the current video. When the current video has been marked with preset segment identifiers in advance, the target segment can be located accurately by comparing the target segment identifier with the preset segment identifiers. When it has not, the corresponding frame identifiers need to be recognized on the fly from the video frames that make up the current video, and the target segment is then located accurately by comparing the target segment identifier with the recognized frame identifiers.
Recognizing the corresponding frame identifiers from the video frames may also employ machine learning techniques, so the technique of the invention can process different types of video data, broadening its application.
By implementing the embodiment, the target segment in the current video can be positioned through the target segment identifier in the segment positioning request, and then the positioned target segment can be marked or stored, so that the problem that manual recording is easy to miss recording is avoided, and the efficiency is improved.
Example two:
the difference between the present embodiment and the first embodiment is mainly as follows:
in this embodiment, step S103 specifically includes:
and searching target segment time information corresponding to the target segment identification according to the preset segment identification and the preset segment time information which are associated in the current video so as to position the target segment.
In this embodiment, when the current video is marked with the preset segment identifier in advance, the target segment can be accurately located by comparing the target segment identifier with the preset segment identifier.
Specifically, preset segment identifiers and preset segment time information may be set in the current video in advance and associated with each other. As shown in fig. 2, the total duration of the current video A is 30 minutes and it contains two segments, to which the preset segment identifiers a and b are respectively assigned. The first preset segment time information, corresponding to the first preset segment identifier a, indicates a first segment start time of 05:15 and a first segment end time of 08:32; the second preset segment time information, corresponding to the second preset segment identifier b, indicates a second segment start time of 20:01 and a second segment end time of 27:13. When the target segment time information corresponding to the target segment identifier is searched for and the target segment identifier is a, the time information found is the first preset segment time information corresponding to the first preset segment identifier a, and the target segment is the first segment of the current video A, starting at 05:15 and ending at 08:32.
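The preset-identifier lookup of this embodiment can be sketched minimally as below, mirroring the fig. 2 example (current video A with identifiers a and b); the dictionary layout and the function name are illustrative assumptions.

```python
# Preset segment identifiers associated with preset segment time
# information (start time, end time), as in the fig. 2 example.
PRESET_SEGMENTS = {
    "a": ("05:15", "08:32"),  # first preset segment of video A
    "b": ("20:01", "27:13"),  # second preset segment of video A
}

def locate_target_segment(target_id, preset_segments=PRESET_SEGMENTS):
    """Return the (start, end) time information for the target segment
    identifier, or None if no preset segment identifier matches."""
    return preset_segments.get(target_id)
```

With the target segment identifier "a", the lookup returns ("05:15", "08:32"), locating the first segment of video A.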
By implementing the embodiment, the segment identifier and the segment time information related to the segment identifier are preset in the video, so that the target segment time information corresponding to the target segment identifier can be obtained through subsequent searching, the required segment can be accurately positioned, the processing process is accurate and not complicated, and the efficiency can be further improved.
Example three:
the difference between the present embodiment and the first embodiment is mainly as follows:
in this embodiment, step S103 specifically includes the following steps as shown in fig. 3:
in step S301, a frame identifier indicating the content of a video frame is identified from specified video frames of the current video.
In step S302, a target video frame corresponding to a frame identifier matching the target segment identifier is searched from the designated video frames to obtain a target video frame set;
in step S303, a target segment is located according to the target video frame set.
In this embodiment, when the preset segment identifier is not pre-marked in the current video, the corresponding frame identifier needs to be temporarily identified from the video frames constituting the current video, and then the target segment is accurately located by comparing the target segment identifier with the identified frame identifier.
In particular, recognition of the frame identifier can be achieved by image recognition techniques: if a frame identifier exists in a video frame, it can be recognized from that frame by image recognition regardless of whether it is presented as text, letters, or symbols.
As shown in fig. 4, the frame identifier "Example 4" exists in one or several video frames; although the identifier itself is a word, it is presented as part of the image in those frames, and can therefore be recognized from them by image recognition. If the same frame identifier matching the target segment identifier exists in several video frames, those frames can be regarded as target video frames, and the target segment can be located from the set they form.
Alternatively, if only one video frame contains a frame identifier matching the target segment identifier (usually the starting frame of the target segment to be located), and the subsequent video frames are highly similar to it in terms of image similarity (which can be judged, for example, by computing an image-similarity indicating value and comparing it with a set similarity threshold), then that group of highly similar video frames can be taken as the target video frame set.
Alternatively, as shown in fig. 5, if a first frame identifier "Example 4" exists in a certain video frame and a second frame identifier "Example 5" exists in a later video frame in the frame sequence, then, since "Example 4" and "Example 5" are logically consecutive (and the computing device can recognize this logic), the set of video frames from the first video frame containing "Example 4" up to the second video frame containing "Example 5" can be taken as the target video frame set (generally set to include the first video frame but not the second).
After the target video frame set is determined, the starting video frame and the ending video frame of the target segment can be determined according to the time sequence relation of the target video frame in the video frame sequence, and the video frames in the interval from the starting video frame to the ending video frame form the target segment. It should be noted that the target video frame set is generally included in the target segment, and the target video frame set may be only a part of the video frames in the target segment.
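The fig. 5 boundary logic described above (from the first frame containing the target identifier up to, but not including, the frame containing the logically next identifier) can be sketched as below, assuming the frame identifiers have already been recognized, e.g. by image recognition; the function name and inputs are assumptions.

```python
def target_frame_range(frame_ids, target_id, next_id):
    """Return (start_index, end_index) into frame_ids, end exclusive.

    frame_ids: recognized frame identifiers, one per video frame in
    sequence order. The range starts at the first frame whose identifier
    equals target_id and ends just before the first later frame whose
    identifier equals next_id (the logically following identifier).
    Returns None if the target identifier never appears.
    """
    start = end = None
    for i, fid in enumerate(frame_ids):
        if fid == target_id and start is None:
            start = i                      # first matching frame
        elif fid == next_id and start is not None:
            end = i                        # logically next identifier found
            break
    if start is None:
        return None
    return (start, end if end is not None else len(frame_ids))
```

For a frame sequence labelled "Example 3", "Example 4", "Example 4", "Example 5", the target "Example 4" yields the range (1, 3), i.e. the two middle frames.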
By implementing this embodiment, no special requirements are imposed on the current video and segment identifiers need not be set in it in advance, so the applicability is better. Teaching videos in particular usually contain frame identifier information (albeit displayed within the video image frames); this information can be recognized quickly and accurately by image recognition and matched against the target segment identifier to determine the target video frames and finally locate the target segment.
Example four:
the present embodiment is mainly different from the present embodiment in that:
in this embodiment, step S301 specifically includes the following steps as shown in fig. 6:
in step S601, when the current video is in the played state, the request time of the clip positioning request is obtained.
In step S602, a first video frame group that has been played before the request time and/or a second video frame group that is to be played after the request time are determined from the current video, the play start time of the first video frame group being separated from the request time by a first predetermined time period, the play end time of the second video frame group being separated from the request time by a second predetermined time period, and the specified video frame is located in the first video frame group and/or the second video frame group.
In step S603, a frame identification is recognized from the specified video frame.
Usually, the positioning of a video segment is performed on the fly during video playing; that is, the user usually decides which segment to locate while the video is playing (for the first time or on replay). As shown in fig. 7, in a specific application example, when the current video is in a played state (which may be playing or paused), the request time t1 of the segment positioning request is obtained, for example 19:00 on 24 October 2018; this request time corresponds to the current video frame being played or paused. The play start time t2 of the first video frame group is obtained by pushing forward from the current video frame by a first predetermined time period along the video frame sequence, and the play end time t3 of the second video frame group is obtained by pushing backward from the current video frame by a second predetermined time period. The specified video frames are located in the first video frame group and/or the second video frame group.
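The window determination of steps S601 and S602 might be sketched as below; times are assumed to be seconds from the start of the current video, and the clamping to the video bounds is an added assumption not stated in the patent.

```python
def specified_frame_window(request_pos, first_period, second_period, duration):
    """Return (t2, t3): the play start time of the first video frame group
    and the play end time of the second video frame group.

    request_pos:   position t1 of the request within the video, in seconds
    first_period:  first predetermined time period (look-back), in seconds
    second_period: second predetermined time period (look-ahead), in seconds
    duration:      total duration of the current video, in seconds
    """
    t2 = max(0.0, request_pos - first_period)        # clamp to video start
    t3 = min(duration, request_pos + second_period)  # clamp to video end
    return t2, t3
```

For a request 600 s into a 1800 s video, with a 120 s look-back and a 60 s look-ahead, the specified video frames fall in the window from 480 s to 660 s.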
By implementing the embodiment, the number of video frames to be processed can be reduced by determining the designated video frames, so that the processing efficiency is improved.
Example five:
the present embodiment is different from the third and fourth embodiments mainly in that:
in this embodiment, step S302 specifically includes the following steps as shown in fig. 8:
in step S801, a first approximation of the frame identification of the designated video frame and the target segment identification, and a change in the sequence of a second approximation of the frame identification contained in each of the designated video frames are determined.
In step S802, a target video frame is determined from the designated video frames according to the first approximation and the change condition, so as to obtain a target video frame set.
The present embodiment is explained below by way of a specific example. As shown in fig. 9, the target segment identifier is "Example 4". The frame identifiers of the first specified video frame are "Example 3" and "Fig. 1", those of the second are "Example 4" and "Fig. 2", and those of the third are "Example 4" and "Fig. 3". Against the target segment identifier "Example 4", the first-approximation indicating values can be determined as 0.8 for "Example 3" and 0.1 for "Fig. 1" in the first frame, 1 for "Example 4" in the second frame, and 1 for "Example 4" and 0.1 for "Fig. 3" in the third frame. The first specified video frame precedes the second in the sequence, and the second precedes the third. According to the second-approximation calculation strategy, the second-approximation indicating value between the identifiers of the first and second specified video frames is 0.5, and that between the identifiers of the second and third specified video frames is 0.8. Because the highest first-approximation value of 1 appears in the second and third specified video frames (i.e. they contain a frame identifier identical to the target segment identifier) while the first frame contains no identifier fully identical to it, and because, in the time sequence determined by the video frame order, the similarity between the identifiers of the first and second frames is low while that between the second and third is high (an obvious change that can be judged against a change-value threshold), it is determined comprehensively that the second and third specified video frames are target video frames. The judgment of the change in the second approximation is introduced mainly to identify target video frames accurately in the following situation: some specified video frames should themselves be target video frames even though they do not fully contain the target segment identifier.
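A simplified sketch of the first-approximation matching in this embodiment follows. Here difflib.SequenceMatcher stands in for the patent's unspecified similarity measure, the threshold is an assumed value, and the second-approximation change check is omitted for brevity; function names are illustrative.

```python
from difflib import SequenceMatcher

def _sim(a, b):
    """String similarity in [0, 1], used as an approximation indicating value."""
    return SequenceMatcher(None, a, b).ratio()

def first_approximation(frame_ids, target_id):
    """Best similarity between any identifier of one frame and the target."""
    return max(_sim(fid, target_id) for fid in frame_ids)

def select_target_frames(frames, target_id, threshold=0.95):
    """frames: one list of recognized identifiers per specified video frame.

    Returns the indices of frames whose first approximation reaches the
    threshold, i.e. frames containing an identifier (near-)identical to
    the target segment identifier.
    """
    return [i for i, ids in enumerate(frames)
            if first_approximation(ids, target_id) >= threshold]
```

On the fig. 9 example, frames labelled ["Example 3", "Fig. 1"], ["Example 4", "Fig. 2"], ["Example 4", "Fig. 3"] yield the second and third frames as target video frames.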
Example six:
the difference between the present embodiment and the third, fourth and fifth embodiments is mainly as follows:
in this embodiment, step S303 specifically includes the following steps as shown in fig. 10:
in step S1001, segment time information of the target segment is determined according to the duration of the target video frame set.
In step S1002, a target segment is located according to the segment time information.
In this embodiment, after the target video frame set is determined, according to the time sequence relationship of the target video frames, the time of the first target video frame is taken as the start time of the target segment and the time of the last target video frame as its end time; the duration of the target segment can then be determined from the start and end times, and the target segment is located.
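The segment-time determination above can be sketched minimally as follows; timestamps are assumed to be seconds from the start of the current video, and the function name is an assumption.

```python
def segment_time_info(target_frame_times):
    """Determine the segment time information of the target segment.

    target_frame_times: sorted timestamps (seconds) of the frames in the
    target video frame set. The first frame gives the start time, the last
    gives the end time, and the duration follows from the two.
    """
    start, end = target_frame_times[0], target_frame_times[-1]
    return {"start": start, "end": end, "duration": end - start}
```

For target frames at 315 s, 316 s, and 512 s, the located segment runs from 315 s to 512 s with a duration of 197 s.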
Example seven:
fig. 11 shows a structure of a computing device according to a seventh embodiment of the present invention, and for convenience of description, only a part related to the embodiment of the present invention is shown.
The computing device according to an embodiment of the present invention includes a processor 1101 and a memory 1102, and when the processor 1101 executes the computer program 1103 stored in the memory 1102, the steps in the above-described method embodiments, such as steps S101 to S103 shown in fig. 1, are implemented.
The computing device of the embodiment of the invention can be a personal computer, a smart phone, a tablet computer and the like. For the steps implemented when the processor 1101 executes the computer program 1103 to implement the above methods in the computing device, reference may be made to the description of the foregoing method embodiments, and details are not repeated here.
Of course, in a specific application, the computing device may be further configured with a module for implementing related functions, such as: microphone, speaker, touch-sensitive display screen, etc.
Example eight:
in an embodiment of the present invention, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the steps in the above-described method embodiments, for example, steps S101 to S103 shown in fig. 1.
The computer-readable storage medium of this embodiment may include any entity or device capable of carrying computer program code, or a recording medium such as a ROM/RAM, a magnetic disk, an optical disk, or a flash memory.
The above description is only of preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent substitutions, and improvements made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (8)

1. A method for locating a video segment, the method comprising the steps of:
acquiring a segment positioning request of a user for a current video, wherein the segment positioning request comprises a target segment identifier and a positioning instruction identifier;
identifying the positioning instruction identification and the target fragment identification from the fragment positioning request;
positioning a target segment corresponding to the target segment identification from the current video according to the positioning instruction identification;
wherein positioning the target segment corresponding to the target segment identifier from the current video according to the positioning instruction identifier specifically comprises the following steps:
identifying a frame identifier used for indicating the content of a video frame from the specified video frame of the current video;
searching a target video frame corresponding to the frame identifier matched with the target fragment identifier from the specified video frame to obtain a target video frame set;
positioning the target segment according to the target video frame set;
wherein identifying the frame identifier for indicating the content of the video frame from the specified video frame of the current video specifically comprises the following steps:
when the current video is in a played state, obtaining the request time of the segment positioning request;
determining, from the current video, a first video frame group that has been played before the request time and/or a second video frame group that is to be played after the request time, the play start time of the first video frame group being separated from the request time by a first predetermined time period, the play end time of the second video frame group being separated from the request time by a second predetermined time period, the specified video frame being located in the first video frame group and/or the second video frame group;
the frame identification is identified from the specified video frame.
2. The method according to claim 1, wherein, according to the positioning instruction identifier, positioning a target segment corresponding to the target segment identifier from the current video specifically includes:
and searching target segment time information corresponding to the target segment identification according to the preset segment identification and the preset segment time information which are associated in the current video so as to position the target segment.
3. The method according to claim 1, wherein the step of finding a target video frame corresponding to the frame identifier matching the target segment identifier from the designated video frames to obtain a target video frame set comprises the following steps:
determining a first degree of approximation between the frame identifier of each specified video frame and the target segment identifier, and the variation of a second degree of approximation between the frame identifiers contained in the specified video frames in sequence;
and determining the target video frame from the specified video frames according to the first approximation degree and the change condition to obtain the target video frame set.
4. The method of claim 1, wherein locating the target segment based on the set of target video frames comprises:
determining segment time information of the target segment according to the duration of the target video frame set;
and positioning the target segment according to the segment time information.
5. The method of claim 1, wherein after locating the target segment corresponding to the target segment identifier from the current video according to the positioning instruction identifier, the method further comprises the following step:
and marking, storing or downloading the target segment.
6. The method of claim 1, wherein the segment positioning request is a voice request or a mouse/keyboard operation request.
7. A computing device comprising a memory and a processor, wherein the processor implements the steps of the method of any one of claims 1 to 6 when executing a computer program stored in the memory.
8. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN201811337627.0A 2018-11-12 2018-11-12 Video clip positioning method, computing device and storage medium Active CN109471955B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811337627.0A CN109471955B (en) 2018-11-12 2018-11-12 Video clip positioning method, computing device and storage medium


Publications (2)

Publication Number Publication Date
CN109471955A CN109471955A (en) 2019-03-15
CN109471955B true CN109471955B (en) 2021-09-17

Family

ID=65671669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811337627.0A Active CN109471955B (en) 2018-11-12 2018-11-12 Video clip positioning method, computing device and storage medium

Country Status (1)

Country Link
CN (1) CN109471955B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110134829B (en) * 2019-04-28 2021-12-07 腾讯科技(深圳)有限公司 Video positioning method and device, storage medium and electronic device
CN110337009A (en) * 2019-07-01 2019-10-15 百度在线网络技术(北京)有限公司 Control method, device, equipment and the storage medium of video playing
CN112911378A (en) * 2019-12-03 2021-06-04 西安光启未来技术研究院 Video frame query method
CN111626902B (en) * 2020-05-30 2021-04-23 厦门致力于学在线教育科技有限公司 Online education management system and method based on block chain

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5870754A (en) * 1996-04-25 1999-02-09 Philips Electronics North America Corporation Video retrieval of MPEG compressed sequences using DC and motion signatures
CN101604486A (en) * 2008-06-12 2009-12-16 王涛 Musical instrument playing and practicing method based on speech recognition technology of computer
CN104049913A (en) * 2014-05-29 2014-09-17 北京捷成世纪科技股份有限公司 Method and device for managing magnetic tape file
CN104469508A (en) * 2013-09-13 2015-03-25 中国电信股份有限公司 Method, server and system for performing video positioning based on bullet screen information content
CN106488300A (en) * 2016-10-27 2017-03-08 广东小天才科技有限公司 A kind of video content inspection method and device
CN107613399A (en) * 2017-09-15 2018-01-19 广东小天才科技有限公司 A kind of video fixed-time control method for playing back, device and terminal device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI316690B (en) * 2006-09-05 2009-11-01 Univ Nat Cheng Kung Video annotation method by integrating visual features and frequent patterns


Also Published As

Publication number Publication date
CN109471955A (en) 2019-03-15

Similar Documents

Publication Publication Date Title
CN109471955B (en) Video clip positioning method, computing device and storage medium
CN106534548B (en) Voice error correction method and device
US10642892B2 (en) Video search method and apparatus
CN112087656B (en) Online note generation method and device and electronic equipment
US10114809B2 (en) Method and apparatus for phonetically annotating text
CN107808004B (en) Model training method and system, server and storage medium
CN106098063B (en) Voice control method, terminal device and server
CN110164435A (en) Audio recognition method, device, equipment and computer readable storage medium
CN109817210B (en) Voice writing method, device, terminal and storage medium
CN109119079B (en) Voice input processing method and device
US9524751B2 (en) Semi-automatic generation of multimedia content
WO2019033658A1 (en) Method and apparatus for determining associated annotation information, intelligent teaching device, and storage medium
CN109192212B (en) Voice control method and device
US10089898B2 (en) Information processing device, control method therefor, and computer program
CN111522970A (en) Exercise recommendation method, exercise recommendation device, exercise recommendation equipment and storage medium
CN110347866B (en) Information processing method, information processing device, storage medium and electronic equipment
CN107748744B (en) Method and device for establishing drawing box knowledge base
CN105302906A (en) Information labeling method and apparatus
CN109616101B (en) Acoustic model training method and device, computer equipment and readable storage medium
CN111524507A (en) Voice information feedback method, device, equipment, server and storage medium
CN112114771A (en) Presentation file playing control method and device
CN107844531B (en) Answer output method and device and computer equipment
CN113992972A (en) Subtitle display method and device, electronic equipment and readable storage medium
CN111128254B (en) Audio playing method, electronic equipment and storage medium
US20230343325A1 (en) Audio processing method and apparatus, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant