CN112163102A - Search content matching method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112163102A
CN112163102A
Authority
CN
China
Prior art keywords
target
content
search content
search
data stream
Prior art date
Legal status
Granted
Application number
CN202011052033.2A
Other languages
Chinese (zh)
Other versions
CN112163102B (en)
Inventor
陈可蓉
熊梦园
钱程
杨晶生
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202011052033.2A priority Critical patent/CN112163102B/en
Publication of CN112163102A publication Critical patent/CN112163102A/en
Application granted granted Critical
Publication of CN112163102B publication Critical patent/CN112163102B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40: Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/483: Retrieval using metadata automatically derived from the content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40: Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/489: Retrieval using metadata, using time information

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of the present disclosure provide a search content matching method and apparatus, an electronic device, and a storage medium. The method includes: acquiring target search content edited in a search content editing control; and matching, from subtitle information generated based on a multimedia data stream, target content corresponding to the target search content, where each piece of target content is identical to the target search content. By matching, according to a set rule, content from the subtitle information that is exactly the same as the target search content, the technical solution improves the accuracy of determining the target content.

Description

Search content matching method and device, electronic equipment and storage medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a search content matching method and apparatus, an electronic device, and a storage medium.
Background
Currently, when target content is screened from a document or text, what is returned is mostly content that merely contains the search content entered by the user.
Target content found this way includes not only content identical to the search content but also content that matches only part of it. The user must then filter the results again to find the desired content, which makes searching for target content inefficient and degrades the user experience.
Disclosure of Invention
Embodiments of the present disclosure provide a search content matching method and apparatus, an electronic device, and a storage medium, so that target content identical to the search content can be found quickly and accurately in subtitle information.
In a first aspect, an embodiment of the present disclosure provides a search content matching method, where the method includes:
acquiring target search content edited in the search content editing control;
and matching target content corresponding to the target search content from the subtitle information generated based on the multimedia data stream, wherein each target content is the same as the target search content.
In a second aspect, an embodiment of the present disclosure further provides a search content matching apparatus, where the apparatus includes:
the target search content acquisition module is used for acquiring the target search content edited in the search content editing control;
and the target content determining module is used for matching target content corresponding to the target search content from the subtitle information generated based on the multimedia data stream, wherein each target content is the same as the target search content.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the search content matching method according to any embodiment of the present disclosure.
In a fourth aspect, the embodiments of the present disclosure also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are used for executing the search content matching method according to any one of the embodiments of the present disclosure.
In the technical solution of the embodiments of the present disclosure, the target search content edited in the search content editing control is first acquired, and target content identical to the target search content is then matched from subtitle information generated based on a multimedia data stream. This solves the prior-art problem that the returned results merely contain the target search content, forcing the user to filter them again for content identical to the search content and lowering search efficiency. Content identical to the target search content can be found directly, making the search both convenient and efficient.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of a search content matching method according to a first embodiment of the present disclosure;
FIG. 2 is a diagram illustrating a display of target content corresponding to a mark on a timeline according to a first embodiment of the present disclosure;
fig. 3 is a schematic diagram illustrating a corresponding mark on a time axis being highlighted after triggering target content according to a first embodiment of the disclosure;
fig. 4 is a schematic flowchart of a search content matching method according to a second embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a search content matching apparatus according to a third embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to a fourth embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.
Example one
Fig. 1 is a flowchart illustrating a search content matching method according to a first embodiment of the present disclosure. This embodiment is applicable to searching subtitle information for content identical to target search content. The method may be performed by a search content matching apparatus, which may be implemented in software and/or hardware.
As shown in fig. 1, the method of the present embodiment includes:
and S110, acquiring the target search content edited in the search content editing control.
The search content editing control may be a control displayed on the target page for editing search content. The target page may include subtitle information generated based on speech information of different language types. To screen target search content out of the subtitle information, the target page may also include a search content editing control for entering search content. The server may acquire the content edited in the search content editing control and take it as the target search content. For example, if the content edited in the search content editing control is "find", the target search content acquired by the server is "find".
In some optional implementations of this embodiment, obtaining the target search content edited in the search content editing control includes: if triggering of the search-initiating control is detected, acquiring the target search content edited in the search content editing control. Illustratively, the target page may include a search content editing control and a search-initiating control; optionally, the latter may be a "confirm search" control. The user edits content in the search content editing control, and after editing is complete, when the user triggers the search-initiating control, i.e., clicks the confirm button, the server acquires the target search content from the search content editing control.
Or, in other alternative implementations, if the trigger search content editing control is detected, the target search content edited in the search content editing control is acquired.
Illustratively, a search content editing control is included on the target page. When the user is detected triggering the search content editing control, the server starts acquiring the content edited in the control. If no new content is edited within a preset time length, optionally 30 s, the acquired search content is taken as the target search content.
And S120, matching target contents corresponding to the target search contents from the subtitle information generated based on the multimedia data stream, wherein each target content is the same as the target search contents.
The multimedia data stream may be generated based on a real-time interactive scene (e.g., a multimedia conference, live broadcast, or video chat), or may be a recorded multimedia data stream. The real-time interactive scene can be realized by means of the internet and computers, for example, an interactive application program implemented as a native program or a web program. The audio information of the multimedia data stream can be collected, converted into corresponding text, and displayed on the target page; the converted text displayed on the target page may serve as the subtitle information. The target content refers to all content found in the subtitle information that is identical to the target search content; that is, there may be multiple pieces of target content, each exactly matching the target search content.
Specifically, based on the audio information of the multimedia data stream, the audio information may be converted into corresponding text information, and the converted text information may be used as the subtitle information. After the target search content is acquired, content consistent with the target search content can be matched from the subtitle information to serve as the target content. Each target content is identical to the target search content.
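The exact-match step described above can be sketched as follows. This is an illustrative sketch only; the segment structure (a dict with "text" and "timestamp" fields) and the function name are assumptions, not taken from the patent.

```python
# Keep only subtitle segments whose text is identical to the target search
# content; segments that merely contain it are excluded.
def match_target_content(subtitle_segments, target_search_content):
    """Return the segments that exactly match the search content."""
    return [seg for seg in subtitle_segments if seg["text"] == target_search_content]

subtitles = [
    {"text": "search", "timestamp": 12.0},
    {"text": "searching", "timestamp": 30.5},  # partial match, excluded
    {"text": "search", "timestamp": 48.2},
]
matches = match_target_content(subtitles, "search")
# matches keeps only the two exact "search" segments
```

Note that equality, not substring containment, is the comparison; this is what distinguishes the scheme from the prior-art search described in the Background.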
Although the subtitle information includes speaking-user identifiers and speaking timestamps, the present embodiment mainly concerns matching the subtitle text against the target search content.
In some optional implementations of this embodiment, the subtitle information may be generated based on the multimedia data stream as follows: determining voice information based on the multimedia data stream and recording the original language type corresponding to the voice information; and generating, according to the voice information, its original language type, and a target translation language type, subtitle information corresponding to the target translation language type for display on the target page.
In some application scenarios, multiple users may participate in real-time interaction, or multiple speakers may appear in a recorded video. The language each speaker uses may be the same or different, and when a speaker's language differs greatly from a listener's, the listener may be unable to understand that speaker's speech.
To solve this problem, in addition to collecting the speech of each speaking user and converting it into corresponding subtitle information, the subtitle information generated in the original language may be further converted into the language the user desires (the target translation language type).
The original language type is understood to mean the language used by each user participating in the voice interaction in the multimedia data stream. Accordingly, the target translation language type may be understood as a language type desired by the user for displaying the subtitle information. Accordingly, the caption information corresponding to the type of the target translation language may be translation data corresponding to the voice information, which is presented in the target translation language. In order to facilitate the user to visually determine the speaking user and the speaking time corresponding to each piece of translation data from the caption information, the caption information may display the speaking user identification and the speaking time stamp of each piece of translation data.
Specifically, the voice data, i.e., the voice information, of each interactive user participating in the interaction may be collected from the multimedia data stream corresponding to the interactive behavior interface, and the original language type corresponding to the voice information may be recorded. The voice information can be translated into a target translation language type from an original language type to obtain translation data corresponding to the voice information. And generating caption information displayed on the target page by the translation data, the speaking user identity corresponding to the translation data and the speaking timestamp.
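The assembly of a subtitle entry described above, i.e. translated text plus the speaking user's identifier and timestamp, can be sketched as follows. The `translate` function is a placeholder standing in for whatever speech-translation service a real implementation would use, and the tiny lookup table inside it exists only so the example runs.

```python
def translate(text, source_lang, target_lang):
    # Placeholder: a real implementation would call a translation service.
    table = {("zh", "en"): {"你好": "hello"}}
    return table.get((source_lang, target_lang), {}).get(text, text)

def build_subtitle_entry(voice_text, original_lang, target_lang, user_id, timestamp):
    """One piece of translation data with its speaking-user id and timestamp."""
    return {
        "text": translate(voice_text, original_lang, target_lang),
        "user_id": user_id,
        "timestamp": timestamp,
    }

entry = build_subtitle_entry("你好", "zh", "en", "user_1", 12.5)
# entry["text"] == "hello"
```

Keeping the user id and timestamp in each entry is what later makes it possible to display the speaker and speaking time alongside each piece of translation data.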
In some application scenarios of these alternative implementations, a language type selection control may be set on the target page. In this way, the user can select the translation language type by triggering the language type selection control on the target page, so as to translate the subtitle information in the original language generated by the voice information of each speaking user into the subtitle information corresponding to the selected translation language type.
Alternatively, in other application scenarios of these alternative implementations, the target translation language type may be determined in at least one of the following ways: acquiring a language type preset on the target client as the target translation language type; or acquiring the login address of the target client and determining the target translation language type corresponding to the geographical position of the target client based on the login address.
That is, the target translation language type may be determined in at least two ways. The first way may be: obtaining the language type of the device the speaking user belongs to and using it as the target translation language type; or the user may preset the language type to convert to, and the preset language type is used as the target translation language type. For example, the user may preset the target translation language type in advance; optionally, when the user triggers the language type conversion control, a language type selection list may pop up on the target page for the user to select from. The user can select any language type; for example, if the user triggers the Chinese language type in the selection list and clicks a confirmation key, the server or client determines that the target translation language type is Chinese. That is, the voice information in the multimedia data stream may be converted into Chinese subtitle information and presented on the target interface.
The second way may be: obtaining the login address of the client, i.e., its IP address, determining from it the region the client belongs to and hence the language type used in that region, and using that language type as the target translation language type.
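The two ways just described can be sketched together as one selection routine. The region-to-language table and the final `"en"` fallback are illustrative assumptions, not taken from the patent.

```python
REGION_TO_LANGUAGE = {"CN": "zh", "US": "en", "FR": "fr"}  # hypothetical mapping

def determine_target_language(preset_language=None, client_region=None):
    """Pick the target translation language type."""
    if preset_language:                       # first way: language preset on the client
        return preset_language
    if client_region in REGION_TO_LANGUAGE:   # second way: derived from the login address
        return REGION_TO_LANGUAGE[client_region]
    return "en"                               # assumed default when neither is available

choice_a = determine_target_language(preset_language="zh")  # preset wins
choice_b = determine_target_language(client_region="FR")    # region lookup
```

Giving the preset priority matches the framing above: an explicit user choice should override a guess inferred from the login address.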
In this embodiment, converting the voice information in the multimedia data stream into subtitle information in the target translation language type makes the subtitles better conform to the user's reading habits, allows the user to quickly understand the content corresponding to the multimedia data stream, and further improves interaction efficiency.
On the basis of the technical scheme, after the target content is obtained, the target content can be distinguished and displayed in the subtitle information.
Specifically, once the target content is determined from the subtitle information, it remains one or more elements of that subtitle information and can be displayed in a way that distinguishes it from the other elements, highlighting the screened target content so that the user can find it more intuitively and conveniently. The distinguishing display may vary color, font, background pattern, or another display attribute.
On the basis of the technical scheme, the method further comprises the following steps: the time stamp of the multimedia data stream corresponding to each target content in the time axis is determined, and the position corresponding to the time stamp on the time axis is marked.
Optionally, the total duration corresponding to the multimedia data stream is 50min, and the time axis corresponding to the multimedia data stream is also 50 min.
Specifically, after the target content is determined, the timestamp corresponding to the target content may be determined according to the sentence the target content belongs to. After the timestamp is determined, its specific position on the time axis may be determined and marked at that position, for example with a dot or a triangle beneath that point of the time axis; see fig. 2.
Illustratively, referring to fig. 2, when the search content edited by the user in the search content editing control is "algorithm", the same target content as the "algorithm" can be searched from the subtitle information and displayed distinctively, such as bold display, and a timestamp of a sentence to which the target content belongs is determined, and the target content is marked on a time axis corresponding to the multimedia data stream based on the timestamp, such as marked with a dot. The color, size and the like of the mark can be set by a user according to actual requirements, and are not limited herein.
It should be noted that in the search content editing control, the number of the target content may be displayed, for example, the total number displayed in the search content editing control is 12, see fig. 2.
In this embodiment, marking the audio/video frames corresponding to the target content on the time axis has the advantage that the user can clearly determine the specific position of the target content in the multimedia data stream according to the mark on the time axis, thereby improving the convenience of searching the corresponding target content.
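The marking step above amounts to mapping each target content's timestamp to a position along the time axis. A minimal sketch, assuming timestamps and the total duration are in seconds and the axis is addressed by a fraction from 0.0 (start) to 1.0 (end); the names are illustrative:

```python
def mark_positions(target_timestamps, total_duration):
    """Fractional time-axis position for each target-content timestamp."""
    return [ts / total_duration for ts in target_timestamps]

# A 50 min stream (3000 s) with matches at 1500 s and 2400 s:
positions = mark_positions([1500, 2400], 3000)
# positions == [0.5, 0.8]
```

A UI layer would then draw a dot (or triangle) at each fraction of the time axis's pixel width, which is why the time axis and the stream share the same total duration.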
It should be noted that there may be more than one piece of target content, and correspondingly more than one mark on the time axis; see fig. 2, where there are 12 pieces of target content and 12 marks on the time axis. Further, to help the user determine which of all the target contents is currently triggered, the search content editing control also displays the ordinal position of the currently triggered target content.
On the basis of the technical scheme, the method further comprises the following steps: when trigger target content is detected, determining a target timestamp corresponding to the target content; and displaying the mark corresponding to the target timestamp in a distinguishing way.
Specifically, a user may trigger any one target content, and when the target content triggered by the user is detected, a timestamp (target timestamp) corresponding to the triggered target content may be determined, a target mark corresponding to the target timestamp on a time axis may be determined, and the target mark and other marks on the time axis are displayed in a differentiated manner, so as to highlight the target mark. For example, the target mark and the other marks are displayed differently in colors.
For example, referring to fig. 3, when the user triggers the target content corresponding to mark 1, the target timestamp of that target content may be determined, the mark on the time axis corresponding to that timestamp (mark 2) may be determined from the target timestamp, and that mark may be highlighted.
In this embodiment, when the target content is triggered, the mark corresponding to the target content is displayed differently on the time axis, which is beneficial to enabling the user to know the specific position of the triggered target content in the multimedia data stream, and improving the accuracy of determining the audio/video frame corresponding to the target content by the user.
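The distinguishing display of the triggered mark can be sketched as selecting a style per mark: the mark whose timestamp equals the target timestamp is highlighted, the rest keep a default style. The style strings are arbitrary placeholders for whatever visual treatment (e.g. a different color) the UI applies.

```python
def style_marks(mark_timestamps, target_timestamp):
    """Style for each time-axis mark; the triggered one is highlighted."""
    return [
        "highlight" if ts == target_timestamp else "default"
        for ts in mark_timestamps
    ]

styles = style_marks([100, 250, 400], 250)
# styles == ["default", "highlight", "default"]
```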
Example two
Fig. 4 is a flowchart illustrating a search content matching method according to a second embodiment of the disclosure. Based on the foregoing embodiment, in this embodiment the subtitle information and the multimedia data stream may be displayed on the same page, so that after the target content is determined from the subtitle information, if the user triggers the target content, the multimedia data stream can jump to the audio/video frame corresponding to that target content. Technical terms identical or corresponding to those in the above embodiments are not repeated here.
As shown in fig. 4, the method includes:
and S210, acquiring the target search content edited in the search content editing control.
And S220, matching target content corresponding to the target search content from the subtitle information generated based on the multimedia data stream.
In practical application, the subtitle information may include at least one word. To screen content identical to the target search content out of the subtitle information, the subtitle information may be divided into at least one word to be matched according to the separators in the subtitle information; a separator may be a space. If the target search content comprises a single word to be searched, the words exactly identical to it can be screened from all the words to be matched and taken as the target content, i.e., the target content is determined from the subtitle information.
That is, the word to be searched in the target search content can be acquired, and the subtitle information divided into at least one word to be matched according to its separators, so that target content identical to the word to be searched is screened out of the words to be matched in the subtitle information.
Of course, there may be more than one word to be searched in the target search content. In that case, to accurately match target content consistent with the target search content from the subtitle information, the number of words to be searched is first determined according to the separators in the target search content. Then, content that is identical to the at least two words to be searched, has the same number of words, and occupies adjacent positions is determined from the at least one word to be matched in the subtitle information as the target content.
For example, if the target search content is "I am": according to the separator in the target search content, here a space, the number of words to be searched is determined to be two, with "I" being one word to be searched and "am" the other. Content with the same number of words as the target search content and with those words in adjacent positions is then found among the words to be matched in the subtitle information and taken as the target content.
Illustratively, the target search content "I am" contains one separator, i.e., two words to be searched. Content in which "I" and "am" are adjacent and exactly consistent is acquired from the words to be matched in the subtitle information as the target content.
In this embodiment, to let the user clearly know the playing time of the target content in the multimedia data stream, the timestamp of each piece of target content in the time axis of the multimedia data stream may be determined and displayed at the position corresponding to that target content.
It is understood that the target content can be displayed on the target page in a differentiated manner, and the target content can be annotated, where the annotated content can be a corresponding playing time of the target content in the multimedia data stream.
For example, suppose the multimedia data stream is acquired based on a recorded video and 2 pieces of target content are found in the subtitle information based on the target search content. An annotation box corresponding to each piece of target content may pop up, and the annotation content may be the playing time of that target content in the recorded video; optionally, the annotation of the first target content is 00:50, meaning the target search content appears in the fiftieth minute of the recorded video, and the annotation of the second target content is 01:20.
On this basis, after the subtitle information is obtained, the method further includes: establishing a timestamp synchronization association between the subtitle information and the multimedia data stream, and displaying the subtitle information and the multimedia data stream on the target page, so that when triggering of target content is detected, the multimedia data stream jumps to the video playing time corresponding to that target content.
The timestamp synchronization association relationship can be understood as linking the multimedia data stream and the subtitle information based on time synchronization. When a certain piece of content in the subtitle information is triggered, the current timestamp corresponding to that content can be determined, and based on the pre-established timestamp synchronization association relationship, the multimedia data stream jumps to the position corresponding to the current timestamp; for example, if the multimedia data stream is acquired based on a screen recording video, the screen recording video is adjusted to play the audio/video frame corresponding to the current timestamp. Conversely, when the progress bar of the screen recording video is dragged to a certain audio/video frame, the current timestamp corresponding to that audio/video frame can be obtained, the translation data corresponding to the audio/video frame in the subtitle information can be determined based on the pre-established timestamp synchronization association relationship, and the translation data can be displayed in a distinguishing manner, optionally highlighted, to facilitate confirmation by the user.
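One minimal way to realize such a two-way timestamp synchronization association is to keep subtitle segments sorted by start time and look up in both directions. The class below is an illustrative sketch with assumed names, not the implementation of this disclosure:

```python
import bisect

class TimestampSync:
    """Associates subtitle segments with positions in a media stream."""

    def __init__(self, segments):
        # segments: list of (start_seconds, translation_text) pairs.
        self.segments = sorted(segments)
        self.starts = [start for start, _ in self.segments]

    def seek_time_for_segment(self, index):
        # Triggering a subtitle segment yields the timestamp the
        # multimedia data stream should jump to.
        return self.segments[index][0]

    def segment_for_time(self, t):
        # Dragging the progress bar to time t yields the subtitle segment
        # that should be displayed in a distinguishing manner.
        i = bisect.bisect_right(self.starts, t) - 1
        return max(i, 0)

sync = TimestampSync([(0, "Hello everyone"), (50, "I am the host"), (80, "Welcome")])
print(sync.seek_time_for_segment(1))  # 50
print(sync.segment_for_time(55))      # 1
```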
And S230, when triggering of the target content is detected, determining the current timestamp corresponding to the target content.
Specifically, after the target content is determined, the user may trigger it. When the server detects that the user has triggered the target content, the current timestamp of the target content, that is, the video playing time in the screen recording video corresponding to the target content, is determined.
And S240, according to the pre-established timestamp synchronization association relationship and the current timestamp, jumping the multimedia data stream to the audio/video frame corresponding to the current timestamp.
It should be noted that, because the timestamp synchronization relationship between the multimedia data stream and the subtitle information is established in advance, when any piece of translation data in the subtitle information is triggered, the timestamp corresponding to that translation data can be obtained, and the portion of the multimedia data stream in the screen recording video corresponding to that timestamp can then be located.
Specifically, according to the current timestamp of the target content and the pre-established timestamp synchronization relationship between the multimedia data stream and the subtitle information, the portion of the multimedia data stream in the screen recording video corresponding to the current timestamp can be located; that is, the screen recording video jumps to the audio/video frame corresponding to the current timestamp.
Illustratively, when a user triggers target content A whose current timestamp is 00:50, the screen recording video jumps to the audio/video frame corresponding to the 00:50 moment, based on the pre-established timestamp synchronization association relationship between the multimedia data stream and the subtitle information.
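The S230/S240 flow amounts to a small event handler: look up the triggered content's timestamp in the pre-established association, then seek the player. The sketch below uses hypothetical names (RecordingPlayer, sync_map) purely for illustration:

```python
class RecordingPlayer:
    """Stand-in for the screen recording video player."""

    def __init__(self):
        self.position = 0  # current play position in seconds

    def seek(self, seconds):
        # Jump to the audio/video frame at the given timestamp.
        self.position = seconds

def on_target_content_triggered(player, sync_map, content_id):
    # S230: determine the current timestamp of the triggered target content.
    current_timestamp = sync_map[content_id]
    # S240: jump the multimedia data stream to that timestamp.
    player.seek(current_timestamp)
    return current_timestamp

player = RecordingPlayer()
sync_map = {"target_a": 50, "target_b": 80}  # content id -> seconds
on_target_content_triggered(player, sync_map, "target_a")
print(player.position)  # 50
```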
According to the technical solution of this embodiment of the present disclosure, the timestamp synchronization association relationship between the multimedia data stream and the subtitle information is established in advance, so that the multimedia data stream can jump to the corresponding audio/video frame when the subtitle information is triggered, which enables a user to conveniently and quickly learn the situation corresponding to each piece of voice information, thereby achieving the technical effect of improving interaction efficiency.
Example three
Fig. 5 is a schematic structural diagram of a search content matching apparatus according to a third embodiment of the present disclosure. As shown in fig. 5, the apparatus includes: a target search content acquisition module 310 and a target content determination module 320.
The target search content obtaining module 310 is configured to obtain target search content edited in the search content editing control; a target content determining module 320, configured to match target content corresponding to the target search content from the subtitle information generated based on the multimedia data stream, where each target content is the same as the target search content.
According to the technical solution of this embodiment of the present disclosure, target content exactly the same as the target search content can be searched out of the subtitle information through the pre-edited target search content, which improves the accuracy and convenience of determining the target content.
On the basis of the above technical solution, the apparatus further includes:
the voice information acquisition module is used for determining voice information and the original language type of the voice information based on the multimedia data stream; and the caption information generating module is used for generating caption information which is displayed on a target page and corresponds to the target translation language type according to the voice information, the original language type corresponding to the voice information and the target translation language type.
On the basis of the above technical solutions, the apparatus further includes a target translation language type determining module, configured to determine the target translation language type through at least one of the following: acquiring a language type preset on a target client as the target translation language type; and acquiring a login address of the target client, and determining, based on the login address, the target translation language type corresponding to the geographical position of the target client.
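The two strategies for determining the target translation language type can be combined as a simple fallback chain. The sketch below is illustrative only; the region table and the resolution of a login address to a region are assumptions, since a real system would rely on a geolocation service:

```python
# Hypothetical region-to-language table; a real system would resolve the
# client's login (IP) address to a region via a geolocation service.
REGION_LANGUAGE = {"CN": "zh", "US": "en", "FR": "fr"}

def determine_target_language(preset_language=None, login_region=None, default="en"):
    # Strategy 1: a language type preset on the target client takes effect.
    if preset_language:
        return preset_language
    # Strategy 2: fall back to the language of the region derived from
    # the target client's login address.
    if login_region in REGION_LANGUAGE:
        return REGION_LANGUAGE[login_region]
    return default

print(determine_target_language(preset_language="ja"))  # ja
print(determine_target_language(login_region="FR"))     # fr
```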
On the basis of the above technical solutions, the target search content obtaining module is further configured to:
if the control for triggering the start of a search is detected, acquiring the target search content edited in the search content editing control; or, if the triggering of the search content editing control is detected, acquiring the target search content edited in the search content editing control.
On the basis of the above technical solutions, the target search content acquisition module is further configured to, after obtaining the target search content edited in the search content editing control: acquire the vocabulary to be searched in the target search content; and divide the subtitle information into at least one vocabulary to be matched according to the separators in the subtitle information, so as to screen out target content that is the same as the vocabulary to be searched from the vocabularies to be matched in the subtitle information.
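Dividing the subtitle information into vocabularies to be matched and screening out content identical to a single vocabulary to be searched can be sketched as follows; the separator set (whitespace plus a little punctuation) is an assumption made for illustration:

```python
import re

# Assumed separators: whitespace and common punctuation.
SEPARATORS = r"[\s,\.\?]+"

def split_to_match_words(subtitle_text):
    # Divide the subtitle information into vocabularies to be matched
    # according to the separators it contains.
    return [w for w in re.split(SEPARATORS, subtitle_text) if w]

def screen_targets(search_word, subtitle_text):
    # Screen out target contents identical to the vocabulary to be searched.
    return [w for w in split_to_match_words(subtitle_text) if w == search_word]

print(screen_targets("am", "I am here. Am I? Yes, I am."))  # ['am', 'am']
```

Matching here is case-sensitive, so "Am" does not match "am"; a real system would choose its own normalization rules.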
On the basis of the above technical solutions, the target search content includes at least two vocabularies to be searched, and the target content determining module includes:
the number determining unit of the vocabulary to be searched is used for determining the number of the vocabulary to be searched according to the separators in the target search content; and the target content determining unit is used for determining the content which is the same as the at least two vocabularies to be searched, has the same vocabulary quantity and is adjacent to the position from the at least one vocabulary to be matched corresponding to the subtitle information as the target content.
On the basis of the above technical solutions, the apparatus further includes a marking module, configured to respectively determine the playing time on the time axis of the multimedia data stream corresponding to each target content, and to display the video playing time on the time axis of the target content.
On the basis of the above technical solutions, the apparatus further includes a synchronization relationship establishing module, configured to establish a timestamp synchronization relationship between the subtitle information and the multimedia data stream, so that when a trigger target content is detected, a corresponding multimedia data stream is displayed based on the timestamp synchronization relationship.
On the basis of the above technical solutions, the apparatus further includes: the synchronous display module is used for determining a current timestamp corresponding to the target content when the triggering target content is detected; and jumping the multimedia data stream to the video playing time corresponding to the current timestamp according to the pre-established timestamp synchronization relation and the current timestamp.
On the basis of the above technical solution, the apparatus further includes a marking module, configured to determine a timestamp of the multimedia data stream corresponding to each target content in a time axis, and mark a position on the time axis corresponding to the timestamp.
On the basis of the above technical solutions, the apparatus further includes a highlighting module, configured to determine a target timestamp corresponding to the target content when triggering of the target content is detected, and to display the mark corresponding to the target timestamp on the time axis in a distinguishing manner.
The search content matching apparatus provided by the embodiments of the present disclosure can execute the search content matching method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the executed method.
It should be noted that the units and modules included in the apparatus are merely divided according to functional logic, but the division is not limited to the above as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only used to distinguish them from one another and are not used to limit the protection scope of the embodiments of the present disclosure.
Example four
Referring now to fig. 6, a schematic diagram of an electronic device (e.g., the terminal device or the server in fig. 6) 400 suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage device 406 into a Random Access Memory (RAM) 403. Various programs and data necessary for the operation of the electronic device 400 are also stored in the RAM 403. The processing device 401, the ROM 402, and the RAM 403 are connected to one another via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; output devices 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 406 including, for example, magnetic tape, a hard disk, and the like; and a communication device 409. The communication device 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 6 illustrates an electronic device 400 having various devices, it is to be understood that not all illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 409, or from the storage means 406, or from the ROM 402. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 401.
The electronic device provided by the embodiment of the present disclosure and the search content matching method provided by the above embodiment belong to the same inventive concept, and technical details that are not described in detail in the embodiment can be referred to the above embodiment, and the embodiment has the same beneficial effects as the above embodiment.
Example five
The disclosed embodiments provide a computer storage medium having stored thereon a computer program that, when executed by a processor, implements the search content matching method provided by the above-described embodiments.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
acquiring target search content edited in the search content editing control;
and matching target content corresponding to the target search content from the subtitle information generated based on the multimedia data stream, wherein each target content is the same as the target search content.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit/module does not in some cases constitute a limitation on the unit itself, for example, the target search content acquisition module may also be described as a "search content acquisition module".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [ example one ] there is provided a search content matching method, the method including:
acquiring target search content edited in the search content editing control;
and matching target content corresponding to the target search content from the subtitle information generated based on the multimedia data stream, wherein each target content is the same as the target search content.
According to one or more embodiments of the present disclosure, [ example two ] there is provided a search content matching method, further comprising:
optionally, based on the multimedia data stream, determining the voice information and the original language type of the voice information;
and generating, according to the voice information, the original language type corresponding to the voice information, and the target translation language type, subtitle information corresponding to the target translation language type to be displayed on a target page.
According to one or more embodiments of the present disclosure, [ example three ] there is provided a search content matching method, further comprising:
optionally, determining the language type of the target translation includes at least one of the following:
acquiring a language type preset on a target client as a target translation language type;
and acquiring a login address of the target client, and determining, based on the login address, the target translation language type corresponding to the geographical position of the target client.
According to one or more embodiments of the present disclosure, [ example four ] there is provided a search content matching method, further comprising:
optionally, the obtaining of the target search content edited in the search content editing control includes:
if the control for triggering the start of a search is detected, acquiring the target search content edited in the search content editing control; or,
if the triggering of the search content editing control is detected, acquiring the target search content edited in the search content editing control.
According to one or more embodiments of the present disclosure, [ example five ] there is provided a search content matching method, further comprising:
optionally, after obtaining the target search content edited in the search content editing control, the method further includes:
acquiring a vocabulary to be searched in the target search content;
and dividing the subtitle information into at least one vocabulary to be matched according to the separators in the subtitle information so as to screen out target contents which are the same as the vocabulary to be searched from the vocabulary to be matched in the subtitle information.
According to one or more embodiments of the present disclosure, [ example six ] there is provided a search content matching method, further comprising:
optionally, the target search content includes at least two vocabularies to be searched, and the matching of the target content corresponding to the target search content from the subtitle information generated based on the multimedia data stream includes:
determining the number of the vocabulary to be searched according to the separators in the target search content;
and determining, from the at least one vocabulary to be matched corresponding to the subtitle information, content that is the same as the at least two vocabularies to be searched, has the same number of vocabularies, and is adjacent in position, as the target content.
According to one or more embodiments of the present disclosure, [ example seven ] there is provided a search content matching method, further comprising:
optionally, the target content is differentially displayed in the subtitle information.
According to one or more embodiments of the present disclosure, [ example eight ] there is provided a search content matching method, further comprising:
optionally, a timestamp of the multimedia data stream corresponding to each target content in a time axis is determined, and a position on the time axis corresponding to the timestamp is marked.
According to one or more embodiments of the present disclosure, [ example nine ] there is provided a search content matching method, further comprising:
optionally, when the trigger target content is detected, determining a target timestamp corresponding to the target content;
and displaying the mark corresponding to the target timestamp on the time axis in a distinguishing manner.
According to one or more embodiments of the present disclosure, [ example ten ] there is provided a search content matching method, further comprising:
optionally, a timestamp synchronization relationship between the subtitle information and the multimedia data stream is established, so that when the trigger target content is detected, the corresponding multimedia data stream is displayed based on the timestamp synchronization relationship.
According to one or more embodiments of the present disclosure, [ example eleven ] there is provided a search content matching method, further comprising:
optionally, when the trigger target content is detected, determining a current timestamp corresponding to the target content;
and jumping the multimedia data stream to the audio/video frame corresponding to the current timestamp according to the pre-established timestamp synchronization association relationship and the current timestamp.
According to one or more embodiments of the present disclosure, [ example twelve ] there is provided a search content matching apparatus including:
the target search content acquisition module is used for acquiring the target search content edited in the search content editing control;
and the target content determining module is used for matching target content corresponding to the target search content from the subtitle information generated based on the multimedia data stream, wherein each target content is the same as the target search content.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (14)

1. A search content matching method, comprising:
acquiring target search content edited in the search content editing control;
and matching target content corresponding to the target search content from the subtitle information generated based on the multimedia data stream, wherein each target content is the same as the target search content.
2. The method of claim 1, further comprising:
determining voice information and an original language type of the voice information based on a multimedia data stream;
and generating, according to the voice information, the original language type corresponding to the voice information, and the target translation language type, subtitle information corresponding to the target translation language type to be displayed on a target page.
3. The method of claim 2, wherein determining the target translation language type comprises at least one of:
acquiring a language type preset on a target client as a target translation language type;
and acquiring a login address of the target client, and determining, based on the login address, the target translation language type corresponding to the geographical position of the target client.
4. The method of claim 1, wherein obtaining the target search content edited in the search content editing control comprises:
if the control for triggering the start of a search is detected, acquiring the target search content edited in the search content editing control; or,
if the triggering of the search content editing control is detected, acquiring the target search content edited in the search content editing control.
5. The method of claim 1, after obtaining the target search content edited in the search content editing control, further comprising:
acquiring a vocabulary to be searched in the target search content;
and dividing the subtitle information into at least one vocabulary to be matched according to the separators in the subtitle information so as to screen out target contents which are the same as the vocabulary to be searched from the vocabulary to be matched in the subtitle information.
6. The method of claim 1, wherein the target search content comprises at least two vocabularies to be searched, and the matching of the target content corresponding to the target search content from the subtitle information generated based on the multimedia data stream comprises:
determining the number of the vocabulary to be searched according to the separators in the target search content;
and determining, from the at least one vocabulary to be matched corresponding to the subtitle information, content that is the same as the at least two vocabularies to be searched, has the same number of vocabularies, and is adjacent in position, as the target content.
7. The method of claim 1, further comprising:
and displaying the target content in the subtitle information in a distinguishing way.
8. The method of claim 1, further comprising:
the time stamp of the multimedia data stream corresponding to each target content in the time axis is determined, and the position corresponding to the time stamp on the time axis is marked.
9. The method of claim 8, further comprising:
when trigger target content is detected, determining a target timestamp corresponding to the target content;
and displaying the mark corresponding to the target timestamp on the time axis in a distinguishing manner.
10. The method of claim 2, further comprising:
and establishing a timestamp synchronization relationship between the subtitle information and the multimedia data stream, and displaying the corresponding multimedia data stream based on the timestamp synchronization relationship when the triggering target content is detected.
11. The method of claim 10, further comprising:
when triggering target content is detected, determining a current timestamp corresponding to the target content;
and jumping the multimedia data stream to an audio/video frame corresponding to the current timestamp according to a pre-established timestamp synchronous rolling relation and the current timestamp.
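The jump of claim 11 amounts to seeking the media stream to the frame whose timestamp matches the triggered target content. A minimal sketch, assuming sorted frame timestamps in milliseconds and a "nearest frame at or before the timestamp" seek policy (the patent does not specify the rounding rule):

```python
import bisect

def seek_frame_index(frame_timestamps_ms: list[int], current_ts_ms: int) -> int:
    """Given the sorted timestamps of audio/video frames, return the index of
    the frame to jump to for the triggered target content's current timestamp:
    the last frame starting at or before that timestamp."""
    i = bisect.bisect_right(frame_timestamps_ms, current_ts_ms) - 1
    return max(i, 0)  # clamp for timestamps before the first frame
```

A player would pass the timestamp stored with the clicked search result and seek to the returned frame, keeping subtitle and stream positions synchronized.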
12. A search content matching apparatus, comprising:
a target search content acquisition module, configured to acquire target search content edited in a search content editing control; and
a target content determination module, configured to match, from subtitle information generated based on a multimedia data stream, target content corresponding to the target search content, wherein each target content is identical to the target search content.
13. An electronic device, characterized in that the electronic device comprises:
one or more processors; and
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the search content matching method of any one of claims 1-11.
14. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the search content matching method of any one of claims 1-11.
CN202011052033.2A 2020-09-29 2020-09-29 Search content matching method and device, electronic equipment and storage medium Active CN112163102B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011052033.2A CN112163102B (en) 2020-09-29 2020-09-29 Search content matching method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112163102A true CN112163102A (en) 2021-01-01
CN112163102B CN112163102B (en) 2023-03-17

Family

ID=73861515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011052033.2A Active CN112163102B (en) 2020-09-29 2020-09-29 Search content matching method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112163102B (en)


Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102387310A (en) * 2010-08-31 2012-03-21 腾讯科技(深圳)有限公司 Method and device for positioning video segments
US20130308922A1 (en) * 2012-05-15 2013-11-21 Microsoft Corporation Enhanced video discovery and productivity through accessibility
CN103838751A (en) * 2012-11-23 2014-06-04 鸿富锦精密工业(深圳)有限公司 Video content searching system and method
CN104185086A (en) * 2014-03-28 2014-12-03 无锡天脉聚源传媒科技有限公司 Method and device for providing video information
CN104219459A (en) * 2014-09-30 2014-12-17 上海摩软通讯技术有限公司 Video language translation method and system and intelligent display device
CN104618807A (en) * 2014-03-31 2015-05-13 腾讯科技(北京)有限公司 Multimedia playing method, device and system
CN105163178A (en) * 2015-08-28 2015-12-16 北京奇艺世纪科技有限公司 Method and device for locating video playing position
US20160301982A1 (en) * 2013-11-15 2016-10-13 Le Shi Zhi Xin Electronic Technology (Tianjin) Limited Smart tv media player and caption processing method thereof, and smart tv
WO2017191397A1 (en) * 2016-05-03 2017-11-09 Orange Method and device for synchronising subtitles
CN108401189A (en) * 2018-03-16 2018-08-14 百度在线网络技术(北京)有限公司 A kind of method, apparatus and server of search video
CN110035326A (en) * 2019-04-04 2019-07-19 北京字节跳动网络技术有限公司 Subtitle generation, the video retrieval method based on subtitle, device and electronic equipment
CN110035313A (en) * 2019-02-28 2019-07-19 阿里巴巴集团控股有限公司 Video playing control method, video playing control device, terminal device and electronic equipment
CN110390051A (en) * 2019-07-19 2019-10-29 北京字节跳动网络技术有限公司 A kind of search implementation method, device, electronic equipment and storage medium
CN110719518A (en) * 2018-07-12 2020-01-21 阿里巴巴集团控股有限公司 Multimedia data processing method, device and equipment
CN110913241A (en) * 2019-11-01 2020-03-24 北京奇艺世纪科技有限公司 Video retrieval method and device, electronic equipment and storage medium


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114997116A (en) * 2021-03-01 2022-09-02 北京字跳网络技术有限公司 Document editing method, device, equipment and storage medium
CN114995691A (en) * 2021-03-01 2022-09-02 北京字跳网络技术有限公司 Document processing method, device, equipment and medium
WO2022184034A1 (en) * 2021-03-01 2022-09-09 北京字跳网络技术有限公司 Document processing method and apparatus, device, and medium
CN114995691B (en) * 2021-03-01 2024-03-08 北京字跳网络技术有限公司 Document processing method, device, equipment and medium
WO2023246384A1 (en) * 2022-06-24 2023-12-28 抖音视界(北京)有限公司 Search result display method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CN112163102B (en) 2023-03-17

Similar Documents

Publication Publication Date Title
US11917344B2 (en) Interactive information processing method, device and medium
CN112163102B (en) Search content matching method and device, electronic equipment and storage medium
CN111970577B (en) Subtitle editing method and device and electronic equipment
CN113259740A (en) Multimedia processing method, device, equipment and medium
CN113010698B (en) Multimedia interaction method, information interaction method, device, equipment and medium
CN111753558B (en) Video translation method and device, storage medium and electronic equipment
WO2022105760A1 (en) Multimedia browsing method and apparatus, device and medium
US20230139416A1 (en) Search content matching method, and electronic device and storage medium
EP4124024A1 (en) Method and apparatus for generating interaction record, and device and medium
CN112380365A (en) Multimedia subtitle interaction method, device, equipment and medium
CN113724709A (en) Text content matching method and device, electronic equipment and storage medium
CN113014853B (en) Interactive information processing method and device, electronic equipment and storage medium
CN112163104B (en) Method, device, electronic equipment and storage medium for searching target content
CN112163103A (en) Method, device, electronic equipment and storage medium for searching target content
CN115379136A (en) Special effect prop processing method and device, electronic equipment and storage medium
CN112163433B (en) Key vocabulary matching method and device, electronic equipment and storage medium
CN113552984A (en) Text extraction method, device, equipment and medium
CN113132789B (en) Multimedia interaction method, device, equipment and medium
US20230140442A1 (en) Method for searching target content, and electronic device and storage medium
US20230135783A1 (en) Target content search method, electronic device and storage medium
US20240114030A1 (en) Method and apparatus for multimedia processing, and electronic device and medium
CN117082292A (en) Video generation method, apparatus, device, storage medium, and program product
CN115373786A (en) Multimedia playing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant