CN112883235A - Video content searching method and device, computer equipment and storage medium - Google Patents
- Publication number
- CN112883235A (application CN202110266231.7A)
- Authority
- CN
- China
- Prior art keywords
- video
- text data
- content
- text
- searching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7844—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using original textual content or text extracted from visual content or transcript of audio data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/732—Query formulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/738—Presentation of query results
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/7867—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
Abstract
The invention discloses a video content searching method and apparatus, a computer device, and a storage medium. The method comprises the following steps: extracting audio data of all videos in a video library; converting the audio data into text data; marking, in the text data, the time in the video corresponding to each piece of text content, and storing the time-marked text data in the video library; acquiring video search content input by a user; decomposing the video search content into a plurality of keywords; searching the video library for all text data whose degree of association with the keywords reaches a preset threshold; displaying all text data reaching the preset threshold; and jumping the playing progress of the corresponding video to the time point corresponding to the displayed text data selected by the user. The method and apparatus spare the user the trouble of manually dragging the video progress bar to find a particular segment, greatly improve the precision and accuracy of video search, and provide a better user experience.
Description
Technical Field
The present invention relates to the field of video processing, and more particularly, to a method and apparatus for searching video content, a computer device, and a storage medium.
Background
With the rapid growth of the multimedia industry, more and more videos are emerging. To move beyond the traditional mode of passively watching video content, more and more playing platforms offer a function for searching videos according to the user's preferences, so video search has become an important way of acquiring videos.
At present, video search usually works as follows: the user inputs a keyword or a voice query, and a video search engine returns results based on that input. However, the returned result is only the name of a video and cannot accurately indicate where the input keyword occurs within it. To find the specific position of the searched keyword in the video, the user must drag the video progress bar, which makes it hard to locate; if many videos need to be searched, the user must click through the progress bar of each one to find the answer, which requires an enormous amount of interaction and wastes time.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a video content searching method and apparatus, a computer device, and a storage medium.
To this end, the invention adopts the following technical solutions:
in a first aspect, a method of searching for video content, the method comprising:
extracting audio data of all videos in a video library;
converting the audio data into text data;
marking the time corresponding to the text content in the text data in the video, and storing the text data marked with the time in a video library;
acquiring video search content input by a user;
decomposing the video search content into a plurality of keywords;
searching all text data of which the association degree with the keywords reaches a preset threshold value in a video library;
displaying all text data reaching a preset threshold;
and jumping the playing progress of the corresponding video to the time point corresponding to the displayed text data selected by the user.
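The eight steps above can be sketched end to end as a minimal pipeline. This is an illustrative sketch only: the function names, the `transcribe()` stand-in, and the fraction-of-keywords association measure are assumptions, not part of the claimed method, which leaves speech recognition and text matching to mature existing techniques.

```python
# Illustrative sketch of the claimed search flow. transcribe() is a
# hypothetical stand-in for an off-the-shelf speech-to-text component
# that returns (timestamp_seconds, text) pairs.

def build_index(video_library, transcribe):
    """Steps 1-3: extract audio, convert to text, store time-marked text."""
    index = {}
    for video_id, audio in video_library.items():
        index[video_id] = transcribe(audio)
    return index

def search(index, query, threshold=0.8):
    """Steps 4-7: decompose the query into keywords, find text data whose
    association degree reaches the preset threshold, rank by weight."""
    keywords = query.lower().split()          # step 5: naive decomposition
    hits = []
    for video_id, segments in index.items():
        for timestamp, text in segments:
            words = text.lower().split()
            # association degree: fraction of keywords found in the text
            score = sum(w in words for w in keywords) / len(keywords)
            if score >= threshold:            # step 6: preset threshold
                hits.append((score, video_id, timestamp, text))
    hits.sort(reverse=True)                   # step 7: weight-ordered display
    return hits

def jump_to(hits, choice):
    """Step 8: return the (video, time point) for the user's selection."""
    _, video_id, timestamp, _ = hits[choice]
    return video_id, timestamp
```

With an index containing a segment "coffee contains caffeine" at 13 s, the query "caffeine coffee" returns that segment, and selecting it yields the video id and the 13 s seek position.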
In a further embodiment, in the step of acquiring the video search content input by the user, the input mode of the video search content is voice input or text input.
In a further embodiment, in the step of decomposing the video search content into a plurality of keywords, when the input mode of the video search content is voice input, the voice content is first converted into text content and then decomposed.
In a further embodiment, in the step of displaying all the text data reaching the preset threshold, all such text data are displayed in an arrangement ordered by the weight of their degree of association with the keywords.
In a second aspect, a video content searching apparatus includes an audio extraction unit, a text conversion unit, a time marking unit, an acquisition unit, a decomposing unit, a searching unit, a display unit, and a video progress jumping unit;
the audio extraction unit is used for extracting audio data of all videos in the video library;
the text conversion unit is used for converting the audio data into text data;
the time marking unit is used for marking the time corresponding to the text content in the text data in the video and storing the text data marked with the time in a video library;
the acquisition unit is used for acquiring video search content input by a user;
the decomposing unit is used for decomposing the video search content into a plurality of keywords;
the searching unit is used for searching all text data of which the association degree with the keywords reaches a preset threshold value in a video library;
the display unit is used for displaying all the text data reaching a preset threshold value;
and the video progress jumping unit is used for jumping the playing progress of the corresponding video to the time point corresponding to the displayed text data selected by the user.
In a further embodiment, the input mode of the video search content is voice input or text input.
In a further embodiment, when the input mode of the video search content is voice input, the voice content is first converted into text content and then decomposed.
In a further embodiment, all text data reaching the preset threshold are displayed in an arrangement ordered by the weight of their degree of association with the keywords.
In a third aspect, a computer device comprises a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the video content search method as described above when executing the computer program.
In a fourth aspect, a storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the steps of the method of searching for video content as described above.
Compared with the prior art, the invention has the following beneficial effects: the audio data of the videos in the video library are converted into text data, which are time-marked and then stored; when text data associated with the keywords input by the user are found in the video library, the corresponding text data are displayed to the user, and after the user selects an item, the playing progress of the corresponding video jumps to the time point of that text data. The user can thus quickly locate a specific position in the video content simply by entering keywords, which spares the trouble of manually dragging the video progress bar to find a particular segment or story line, greatly improves the precision and accuracy of video search, and provides a better user experience.
The foregoing is only an overview of the technical solutions of the present invention. To make the technical means of the invention clearer, so that it can be implemented according to the content of the description, and to make the above and other objects, features, and advantages of the invention more apparent, preferred embodiments are described in detail below.
Drawings
FIG. 1 is a flowchart of a video content search method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a video content search apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic block diagram of one embodiment of a computer device of the present invention.
Detailed Description
In order to more fully understand the technical content of the present invention, the technical solution of the present invention will be further described and illustrated with reference to the following specific embodiments, but not limited thereto.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
The method and apparatus are mainly applied to video search scenarios to accurately locate content within a video. The invention can be embedded into a search platform, used independently as a video search platform, or ported to a WeChat applet, a web browser, a smart television, or other smart appliances. The invention is described below through specific embodiments.
Referring to fig. 1, a method for searching video content according to an embodiment of the present invention includes the following steps:
s10, extracting audio data of all videos in the video library;
Specifically, since a video library generally contains more than one video, the audio of all videos in the library needs to be extracted; the extracted audio data provide the basis for the subsequent audio-to-text conversion. It should be noted that audio extraction is a very mature technology at present, so the details of how to extract audio data from video are not repeated here.
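As one common concrete realization of this extraction step, the audio track can be pulled out with FFmpeg. This is a sketch under the assumption that FFmpeg is installed; the file paths and the 16 kHz mono WAV output format are illustrative choices (a format typical speech recognizers accept), not requirements of the method.

```python
# Sketch: extract the audio track of a video with FFmpeg (assumed installed).
# Building the command list separately keeps it easy to inspect and test.
import subprocess

def ffmpeg_extract_cmd(video_path, audio_path):
    """Build an FFmpeg command: -vn drops the video stream, -ac 1 / -ar
    16000 produce 16 kHz mono audio, -y overwrites existing output."""
    return ["ffmpeg", "-y", "-i", video_path,
            "-vn", "-ac", "1", "-ar", "16000", audio_path]

def extract_audio(video_path, audio_path):
    """Run the extraction for one video in the library."""
    subprocess.run(ffmpeg_extract_cmd(video_path, audio_path), check=True)
```

In practice this would be looped over every video file in the library before the conversion step.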
S20, converting the audio data into text data;
Specifically, since the audio data have already been extracted from the videos in the previous step, this step only needs to convert the audio data into text data. It should be noted that audio-to-text conversion is a very mature technology at present, so the details of how to convert audio data into text data are not repeated here.
S30, marking the time corresponding to the character content in the text data in the video, and storing the text data marked with the time in a video library;
Specifically, since the content of a video is sequential and some frames have audio data while others have none (and thus no corresponding text data), the converted text data should be marked with the time at which their content occurs in the video, so that each piece of text data is associated with the time point of the audio in the video. For example, when the video playing time is 00:00:13, the corresponding text content is text A; when the playing time is 00:00:43, the corresponding text content is text B; and when the playing time is 00:01:12, the corresponding text content is text C.
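The time-marked record described above can be kept as a simple sorted list of (seconds, text) pairs per video; looking up which text is current at a playback time is then a binary search. The function names and the per-second granularity are illustrative assumptions:

```python
# Sketch of the time-marked text data of step S30: (seconds, text) pairs,
# sorted by time so the text current at any playback position can be found
# with a binary search.
from bisect import bisect_right

def mark_times(segments):
    """segments: list of (seconds, text) pairs, e.g. as reported by a
    recognizer with word timings. Returned sorted by time."""
    return sorted(segments)

def text_at(marked, seconds):
    """Return the text that was current at the given playback time,
    or None if playback is before the first marked segment."""
    times = [t for t, _ in marked]
    i = bisect_right(times, seconds) - 1
    return marked[i][1] if i >= 0 else None
```

With the example above (text A at 13 s, text B at 43 s, text C at 72 s), a query at 50 s falls within text B's span.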
S40, acquiring video search content input by a user;
specifically, the input mode of the user video search content is voice input or text input.
S50, decomposing the video search content into a plurality of keywords;
Specifically, when the input mode of the video search content is voice input, the voice content is first converted into text content and then decomposed; if text content is input directly, keyword decomposition can be performed immediately. The search content is decomposed into keywords, rather than being used directly for the subsequent text data association, mainly because matching on decomposed keywords is technically simpler and yields higher association precision.
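A minimal sketch of the decomposition step follows. The whitespace split and the tiny stop-word list are illustrative assumptions only; the description does not prescribe a segmentation algorithm, and Chinese-language queries would need a proper word segmenter (such as jieba) rather than whitespace splitting.

```python
# Sketch of step S50: split the query into keywords, dropping common
# function words. STOP_WORDS is a tiny illustrative sample.
STOP_WORDS = {"the", "a", "an", "of", "in", "is", "does", "what"}

def decompose(query):
    """Return the content-bearing keywords of a text query."""
    return [w for w in query.lower().split() if w not in STOP_WORDS]
```

For a voice query, the speech-to-text output would be passed through the same function.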
S60, searching all text data of which the association degree with the keywords reaches a preset threshold value in a video library;
Specifically, the preset threshold can be adjusted as required; preferably, it is set so that at least 80% of the keyword content is the same as or similar to the text data. With this setting, only text data whose degree of association with the keyword reaches 80% are counted as matches. It should be noted that associating keywords with text data can be realized by an intelligent text matching algorithm, which is a very mature technology at present and is therefore not described in detail here.
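As one concrete stand-in for such a text matching algorithm, the 80% "same or similar" test can be sketched with the standard library's sequence matcher. This is an assumption for illustration; the patent leaves the actual matching algorithm open.

```python
# Sketch of the association-degree test of step S60 with the 0.8 (80%)
# preset threshold. difflib's ratio() serves as a simple similarity
# measure between a keyword and each word of a text segment.
from difflib import SequenceMatcher

def association(keyword, text):
    """Best similarity between the keyword and any word of the text."""
    return max((SequenceMatcher(None, keyword, w).ratio()
                for w in text.lower().split()), default=0.0)

def matches(keywords, text, threshold=0.8):
    """True when every keyword reaches the preset association threshold."""
    return all(association(k, text) >= threshold for k in keywords)
```

Identical words score 1.0, so exact keyword hits always pass the 0.8 threshold, while near-misses (typos, inflections) can still qualify.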
S70, displaying all text data reaching a preset threshold;
Specifically, after the text data associated with the keywords are found, they need to be recommended and displayed to the user so that the user can see them. Since more than one piece of text data may reach the preset association threshold, all matching text data are preferably displayed in an arrangement ordered by the weight of their degree of association with the keywords: entries with high weight are placed in front and entries with low weight behind, which makes selection more convenient for the user.
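The weight-ordered display can be sketched as a simple sort plus formatting step; the result-tuple shape and the display string are illustrative assumptions:

```python
# Sketch of step S70: order matching text data by association weight,
# highest first, and format them as display rows for the user.
def ranked_display(results):
    """results: list of (weight, video_id, seconds, text) tuples.
    Returns display strings, best match first."""
    rows = sorted(results, key=lambda r: r[0], reverse=True)
    return [f"{i + 1}. [{vid} @ {sec}s] {text} ({w:.0%})"
            for i, (w, vid, sec, text) in enumerate(rows)]
```

The row index the user picks then maps straight back to the underlying (video, time) pair used in the jump step.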
And S80, jumping the playing progress of the corresponding video to the time point position corresponding to the text data according to the displayed text data selected by the user.
Specifically, after the user selects the corresponding text data, the playing progress of the video jumps to the time point corresponding to that text data. For example, if the user selects the text content "coffee contains caffeine; do not drink coffee when you have a cold", the playing progress of the video jumps to the position of that text content. The user can thus quickly locate a specific position in the video content, which spares the trouble of manually dragging the video progress bar to find a particular segment or story line, greatly improves the precision and accuracy of video search, and provides a better user experience.
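The jump itself reduces to a seek call on the player with the time point stored alongside the selected text data. The `Player` class below is a hypothetical stand-in for a real player API:

```python
# Sketch of step S80: seek the corresponding video to the time point of
# the text data the user selected from the S70 display.
class Player:
    """Minimal stand-in for a video player with a seek operation."""
    def __init__(self):
        self.video_id = None
        self.position = 0

    def seek(self, video_id, seconds):
        self.video_id = video_id
        self.position = seconds

def jump_to_selection(player, selection):
    """selection: a (weight, video_id, seconds, text) tuple as displayed
    in step S70. Returns the new playback position."""
    _, video_id, seconds, _ = selection
    player.seek(video_id, seconds)
    return player.position
```

Selecting the caffeine example above, marked at 13 s, would seek that video to the 13 s position rather than leaving the user to drag the progress bar.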
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Corresponding to the above video content searching method, an embodiment of the present invention further provides a video content searching apparatus. Referring to fig. 2, the video content searching apparatus includes an audio extraction unit 1, a text conversion unit 2, a time marking unit 3, an acquisition unit 4, a decomposing unit 5, a searching unit 6, a display unit 7, and a video progress jumping unit 8;
the audio extraction unit 1 is used for extracting audio data of all videos in the video library;
a text conversion unit 2 for converting the audio data into text data;
the time marking unit 3 is used for marking, in the text data, the time in the video corresponding to the text content, and storing the time-marked text data in the video library;
an acquisition unit 4 for acquiring video search content input by a user;
a decomposing unit 5, configured to decompose the video search content into a plurality of keywords;
the searching unit 6 is used for searching all text data of which the association degree with the keywords reaches a preset threshold value in a video library;
the display unit 7 is used for displaying all the text data reaching the preset threshold value;
and the video progress jumping unit 8 is used for jumping the playing progress of the corresponding video to the time point corresponding to the displayed text data selected by the user.
As shown in fig. 3, the embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and running on the processor, and when the processor executes the computer program, the steps of the video content searching method described above are implemented.
The computer device 700 may be a terminal or a server. The computer device 700 includes a processor 720, memory, and a network interface 750, which are connected by a system bus 710, where the memory may include non-volatile storage media 730 and internal memory 740.
The non-volatile storage medium 730 may store an operating system 731 and computer programs 732. The computer programs 732, when executed, enable the processor 720 to perform any of a variety of video content searching methods.
The processor 720 is used to provide computing and control capabilities, supporting the operation of the overall computer device 700.
The internal memory 740 provides an environment for the execution of the computer program 732 in the non-volatile storage medium 730, and when the computer program 732 is executed by the processor 720, the processor 720 can be caused to execute any method for searching video content.
The network interface 750 is used for network communication, such as sending assigned tasks. Those skilled in the art will appreciate that the configuration shown in fig. 3 is a block diagram of only the portion relevant to the present invention and does not constitute a limitation on the computer device 700 to which the invention is applied; a particular computer device 700 may include more or fewer components than shown, combine certain components, or have a different arrangement of components. The processor 720 is configured to execute the program code stored in the memory to perform the following steps:
extracting audio data of all videos in a video library;
converting the audio data into text data;
marking the time corresponding to the text content in the text data in the video, and storing the text data marked with the time in a video library;
acquiring video search content input by a user;
decomposing the video search content into a plurality of keywords;
searching all text data of which the association degree with the keywords reaches a preset threshold value in a video library;
displaying all text data reaching a preset threshold;
and jumping the playing progress of the corresponding video to the time point corresponding to the displayed text data selected by the user.
In a further embodiment, in the step of acquiring the video search content input by the user, the input mode of the video search content is voice input or text input.
In a further embodiment, in the step of decomposing the video search content into a plurality of keywords, when the input mode of the video search content is voice input, the voice content is first converted into text content and then decomposed.
In a further embodiment, in the step of displaying all the text data reaching the preset threshold, all such text data are displayed in an arrangement ordered by the weight of their degree of association with the keywords.
It should be understood that, in the embodiments of the present application, the processor 720 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Those skilled in the art will appreciate that the configuration of computer device 700 depicted in FIG. 3 is not intended to be limiting of computer device 700 and may include more or less components than those shown, or some components in combination, or a different arrangement of components.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention may be implemented in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional units is merely illustrated, and in practical applications, the above distribution of functions may be performed by different functional units according to needs, that is, the internal structure of the apparatus may be divided into different functional units to perform all or part of the functions described above. Each functional unit in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the application. For the specific working process of the units in the above-mentioned apparatus, reference may be made to the corresponding process in the foregoing method embodiment, which is not described herein again.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The technical content of the present invention is further illustrated above by way of examples for the convenience of the reader, but the embodiments of the present invention are not limited thereto; any technical extension or re-creation based on the present invention falls within its protection. The protection scope of the invention is subject to the claims.
Claims (10)
1. A method for searching video content, the method comprising:
extracting audio data of all videos in a video library;
converting the audio data into text data;
marking the time corresponding to the text content in the text data in the video, and storing the text data marked with the time in a video library;
acquiring video search content input by a user;
decomposing the video search content into a plurality of keywords;
searching all text data of which the association degree with the keywords reaches a preset threshold value in a video library;
displaying all text data reaching a preset threshold;
and jumping the playing progress of the corresponding video to the time point corresponding to the displayed text data selected by the user.
2. The method for searching for video content according to claim 1, wherein in the step of acquiring the video search content input by the user, an input mode of the video search content is a voice input or a text input.
3. The method for searching video content according to claim 2, wherein in the step of decomposing the video search content into the plurality of keywords, when the input mode of the video search content is voice input, the voice content is converted into text content and then decomposed.
4. The method for searching video content according to claim 1, wherein, in the step of displaying all the text data reaching the preset threshold, the text data are displayed in an order determined by the weight of their degree of association with the keywords.
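Claim 4's display ordering can be sketched as a sort on the association weight. The data and score values here are made up for illustration.

```python
# Arrange matching text data by association weight, highest first (claim 4).

def rank_results(results):
    """Sort (text, score) pairs by association weight, descending."""
    return sorted(results, key=lambda r: r[1], reverse=True)

results = [("closing remarks", 0.25), ("caffeine receptors", 1.0), ("caffeine dose", 0.5)]
ranked = rank_results(results)
# ranked → [("caffeine receptors", 1.0), ("caffeine dose", 0.5), ("closing remarks", 0.25)]
```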
5. A video content searching device, characterized by comprising an audio extraction unit, a text conversion unit, a time marking unit, an acquisition unit, a disassembling unit, a searching unit, a display unit and a video progress jumping unit;
the audio extraction unit is used for extracting audio data of all videos in the video library;
the text conversion unit is used for converting the audio data into text data;
the time marking unit is used for marking, in the text data, the time at which the text content occurs in the video, and for storing the time-marked text data in the video library;
the acquisition unit is used for acquiring video search content input by a user;
the disassembling unit is used for disassembling the video search content into a plurality of keywords;
the searching unit is used for searching the video library for all text data whose degree of association with the keywords reaches a preset threshold;
the display unit is used for displaying all the text data reaching a preset threshold value;
and the video progress jumping unit is used for jumping the playback progress of the corresponding video to the time point corresponding to the text data selected by the user from the displayed text data.
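A hypothetical composition of the device units of claim 5 can be sketched as one object behind a search/jump interface. The class and method names, the stub player, and the overlap-based score are assumptions for illustration, not the patent's implementation.

```python
# Hypothetical layout mirroring the claimed units behind one interface.

class StubPlayer:
    """Minimal stand-in for a video player that can seek to a time."""
    def __init__(self):
        self.position = None

    def seek(self, video_id, time_s):
        self.position = (video_id, time_s)

class VideoSearchDevice:
    def __init__(self, library):
        # Time-marked text data, as produced by the audio extraction,
        # text conversion and time marking units.
        self.library = library

    def search(self, query, threshold=0.5):
        # Disassembling unit + searching unit: keyword overlap score.
        keywords = [w.lower() for w in query.split() if w]
        if not keywords:
            return []
        return [
            (vid, t, text) for vid, t, text in self.library
            if sum(k in text.lower() for k in keywords) / len(keywords) >= threshold
        ]

    def jump_to(self, player, hit):
        # Video progress jumping unit: seek the player to the marked time.
        vid, t, _ = hit
        player.seek(vid, t)

device = VideoSearchDevice([("v1", 10.0, "caffeine chemistry"), ("v1", 90.0, "summary")])
player = StubPlayer()
hits = device.search("caffeine")
device.jump_to(player, hits[0])
# player.position → ("v1", 10.0)
```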
6. The video content searching device according to claim 5, wherein the input mode of the video search content is voice input or text input.
7. The video content searching device according to claim 6, wherein, when the input mode of the video search content is voice input, the voice content is converted into text content before being disassembled into keywords.
8. The video content searching device according to claim 6, wherein all text data reaching the preset threshold are displayed in an order determined by the weight of their degree of association with the keywords.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the video content searching method according to any one of claims 1 to 4 when executing the computer program.
10. A storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to carry out the steps of the video content searching method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110266231.7A CN112883235A (en) | 2021-03-11 | 2021-03-11 | Video content searching method and device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112883235A true CN112883235A (en) | 2021-06-01 |
Family
ID=76041740
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110266231.7A Pending CN112883235A (en) | 2021-03-11 | 2021-03-11 | Video content searching method and device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112883235A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106488300A (en) * | 2016-10-27 | 2017-03-08 | 广东小天才科技有限公司 | Video content viewing method and device |
CN108829765A (en) * | 2018-05-29 | 2018-11-16 | 平安科技(深圳)有限公司 | A kind of information query method, device, computer equipment and storage medium |
CN109246472A (en) * | 2018-08-01 | 2019-01-18 | 平安科技(深圳)有限公司 | Video broadcasting method, device, terminal device and storage medium |
CN112395420A (en) * | 2021-01-19 | 2021-02-23 | 平安科技(深圳)有限公司 | Video content retrieval method and device, computer equipment and storage medium |
- 2021-03-11: Application CN202110266231.7A (CN) filed; publication CN112883235A, status Pending
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113365100A (en) * | 2021-06-02 | 2021-09-07 | 中国邮政储蓄银行股份有限公司 | Video processing method and device |
CN113365100B (en) * | 2021-06-02 | 2022-11-22 | 中国邮政储蓄银行股份有限公司 | Video processing method and device |
CN114297433A (en) * | 2021-12-28 | 2022-04-08 | 北京字节跳动网络技术有限公司 | Method, device and equipment for searching question and answer results and storage medium |
CN114297433B (en) * | 2021-12-28 | 2024-04-19 | 抖音视界有限公司 | Method, device, equipment and storage medium for searching question and answer result |
CN115129923A (en) * | 2022-05-17 | 2022-09-30 | 荣耀终端有限公司 | Voice search method, device and storage medium |
CN115129923B (en) * | 2022-05-17 | 2023-10-20 | 荣耀终端有限公司 | Voice searching method, device and storage medium |
CN115277650A (en) * | 2022-07-13 | 2022-11-01 | 深圳乐播科技有限公司 | Screen projection display control method, electronic equipment and related device |
CN115277650B (en) * | 2022-07-13 | 2024-01-09 | 深圳乐播科技有限公司 | Screen-throwing display control method, electronic equipment and related device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112883235A (en) | Video content searching method and device, computer equipment and storage medium | |
CN111327955B (en) | User portrait based on-demand method, storage medium and smart television | |
EP3996373A2 (en) | Method and apparatus of generating bullet comment, device, and storage medium | |
KR20130100320A (en) | Method for displaying message and message display apparatus | |
CN106682049B (en) | Topic display system and topic display method | |
CN107566906B (en) | Video comment processing method and device | |
US11294964B2 (en) | Method and system for searching new media information | |
CN110674345A (en) | Video searching method and device and server | |
CN106815284A (en) | The recommendation method and recommendation apparatus of news video | |
CN105574030A (en) | Information search method and device | |
CN111104583A (en) | Live broadcast room recommendation method, storage medium, electronic device and system | |
CN105912586B (en) | Information searching method and electronic equipment | |
CN104598571A (en) | Method and device for playing multimedia resource | |
CN107688587B (en) | Media information display method and device | |
CN109033082B (en) | Learning training method and device of semantic model and computer readable storage medium | |
CN111401039A (en) | Word retrieval method, device, equipment and storage medium based on binary mutual information | |
CN106454397A (en) | Digital set top box program stream sharing method and apparatus | |
CN105872731A (en) | Data processing method and device | |
CN103399879B (en) | The interested entity preparation method and device of daily record are searched for based on user | |
CN110895555B (en) | Data retrieval method and device, storage medium and electronic device | |
CN106570003B (en) | Data pushing method and device | |
US20220027419A1 (en) | Smart search and recommendation method for content, storage medium, and terminal | |
CN104850608A (en) | Method for searching keywords on information exhibiting page | |
CN114339300A (en) | Subtitle processing method, subtitle processing device, electronic equipment, computer readable medium and computer product | |
CN111491198B (en) | Small video searching method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210601 |