CN116644246A - Search result display method and device, computer equipment and storage medium - Google Patents

Search result display method and device, computer equipment and storage medium

Info

Publication number
CN116644246A
CN116644246A
Authority
CN
China
Prior art keywords
video
information
target
search
language
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310687075.0A
Other languages
Chinese (zh)
Inventor
刘思宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202310687075.0A priority Critical patent/CN116644246A/en
Publication of CN116644246A publication Critical patent/CN116644246A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9538 Presentation of query results
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35 Clustering; Classification
    • G06F16/353 Clustering; Classification into predefined classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/738 Presentation of query results
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language

Abstract

The disclosure provides a search result display method and device, computer equipment and a storage medium. The method includes: receiving video search information from a user and determining a content type corresponding to the video search information; in response to the content type matching a target search type, determining a target language type corresponding to the video search information; determining a target video matching the video search information, where the target video information of the target video is obtained by translating the original language expression content in the target video into the target language type; and displaying the target video matching the video search information together with the target video information of the target video in the target language type. The embodiments of the disclosure can display to the user videos whose native language is not the target language type, so the user can browse videos in more languages. Because such a video carries translated video information, the user can accurately understand the video content from that information, which reduces the cost of understanding and improves browsing efficiency.

Description

Search result display method and device, computer equipment and storage medium
Technical Field
The disclosure relates to the field of information technology, and in particular relates to a search result display method, a search result display device, computer equipment and a storage medium.
Background
With the development of internet technology, more and more users browse media content through search.
When a user searches for media content with a search term, media content of the same language type as the search term is usually displayed, and the user cannot browse high-quality media content in other languages, which affects the quality of the search results.
Disclosure of Invention
The embodiment of the disclosure at least provides a search result display method, a search result display device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a search result display method, including:
receiving video search information of a user, and determining a content type corresponding to the video search information;
determining a target language type corresponding to the video search information in response to the content type being matched with the target search type;
determining a target video matched with the video search information; the target video information of the target video is obtained by translating the original language expression content in the target video into the target language type;
And displaying the target video matched with the video searching information and target video information of the target video under the target language type.
In an alternative embodiment, the content type is used to indicate the category of search intention matched with the video search information; the categories of search intention include multiple target search types that have a comprehensive coverage requirement on search results, as well as other search types.
In an alternative embodiment, the determining the target video matching the video search information includes:
and determining target videos matched with the video search information from the candidate videos according to target video information of the candidate videos in the multi-language video library under the target language type.
In an alternative embodiment, the target video information of each candidate video under the target language type is obtained according to the following steps:
extracting the original language expression content of each candidate video from the multi-language video library, and obtaining the target video information after performing online translation on the original language expression content; or
extracting the target video information under the target language type from video information of each candidate video under multiple language types, which is pre-stored in the multi-language video library.
In an optional implementation manner, the determining the target language type corresponding to the video search information includes:
and determining the target language type according to the first language type matched with the video search information and the historical behavior data of the user which is obtained through authorization.
In an alternative embodiment, the determining the target language type according to the first language type matched by the video search information and the authorized acquired historical behavior data of the user includes:
determining at least one second language type used by the user according to the authorized historical behavior data of the user;
and in response to the first language type not being present in the at least one second language type, determining a second language type with highest frequency of user use from the at least one second language type as the target language type.
In an alternative embodiment, each candidate video in the multi-language video library is obtained according to the following steps:
determining a first candidate video from each video according to the video attribute information of each video;
determining a second candidate video from the first candidate videos based on interaction data corresponding to the first candidate videos;
And performing de-duplication processing on each second candidate video based on the similarity between the second candidate videos to obtain a third candidate video, and storing the third candidate video into the multi-language video library.
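As a rough illustration only (not part of the disclosure), the three-stage candidate pipeline above — filter by attribute information, filter by interaction data, then de-duplicate by similarity — might be sketched as follows; the thresholds, the field names, and the toy similarity measure are all assumptions:

```python
# Sketch: build the multi-language video library candidates in three stages.
# Thresholds, field names, and the similarity measure are illustrative.

def title_similarity(a, b):
    # Toy Jaccard similarity over words; a real system would compare content.
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def build_candidate_library(videos, min_duration=10, min_likes=100, sim_threshold=0.9):
    first = [v for v in videos if v["duration"] >= min_duration]   # attribute filter
    second = [v for v in first if v["likes"] >= min_likes]         # interaction filter
    third = []
    for v in second:                                               # greedy de-duplication
        if all(title_similarity(v["title"], kept["title"]) < sim_threshold
               for kept in third):
            third.append(v)
    return third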
In an alternative embodiment, the target video information of each candidate video includes subtitle information;
the caption information includes: and translating the caption information of the original language in the candidate video to obtain information and/or carrying out semantic recognition on each video frame in the candidate video to obtain text information.
In an alternative embodiment, the target video information of each candidate video includes dubbing information;
the dubbing information includes: and translating dubbing content of the original language in the candidate video to obtain information, and/or dubbing text information after semantic recognition corresponding to each video frame in the candidate video to obtain information.
In an alternative embodiment, the determining, from the candidate videos, the target video that matches the video search information includes:
determining target videos matched with the video search information according to index information in the target video information corresponding to each candidate video; the index information is obtained by extracting key information from the target video information.
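As a hedged illustration (not part of the disclosure), matching by index information extracted from each candidate's target video information could resemble a simple inverted index; the tokenization and scoring here are deliberately simplistic stand-ins for real key-information extraction:

```python
# Sketch: extract "key information" into an inverted index and match queries
# against it. Splitting on whitespace is a toy stand-in for key extraction.
from collections import defaultdict

def build_index(candidates):
    """candidates: {video_id: target video information text}."""
    index = defaultdict(set)
    for vid, info in candidates.items():
        for word in info.lower().split():
            index[word].add(vid)
    return index

def match_videos(query, index):
    hits = defaultdict(int)
    for word in query.lower().split():
        for vid in index.get(word, ()):
            hits[vid] += 1
    return sorted(hits, key=lambda v: -hits[v])   # best-matching first
```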
In a second aspect, an embodiment of the present disclosure further provides a search result display apparatus, including:
the first determining module is used for receiving video searching information of a user and determining a content type corresponding to the video searching information;
the second determining module is used for determining a target language type corresponding to the video search information in response to the fact that the content type is matched with the target search type;
a third determining module, configured to determine a target video that matches the video search information; the target video information of the target video is obtained by translating the original language expression content in the target video into the target language type;
and the display module is used for displaying the target video matched with the video search information and the target video information of the target video under the target language type.
In a third aspect, embodiments of the present disclosure further provide a computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect, or any of the alternative embodiments of the first aspect.
In a fourth aspect, the presently disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the first aspect, or any of the alternative embodiments of the first aspect, described above.
According to the search result display method provided by the embodiment of the disclosure, the multi-language resource search of the multi-language resource library can be started under the condition that the content type of the video search information is matched with the target search type. Specifically, the target language type corresponding to the video search information is determined, then the target video matched with the video search information is determined, wherein the target video information of the target video, such as subtitle information, dubbing information and the like, is obtained by translating the original language expression content (namely subtitle and dubbing) in the target video into the target language type, so that more videos with the original language type different from the target language type can be provided for a user, the user can be helped to browse high-quality video resources under more language types, and the quality of search results is improved. In addition, the video provided for the user contains the video information translated into the target language type, so that even if the original language type of the searched video is inconsistent with the language type of the search word, the user can accurately understand the video content according to the translated target video information, the understanding cost of the user can be reduced to a certain extent, and the browsing efficiency of the user is improved.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below; they are incorporated in and constitute a part of the specification, show embodiments consistent with the present disclosure, and together with the description serve to explain the technical solutions of the present disclosure. It is to be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other related drawings from them without inventive effort.
FIG. 1 illustrates a flow chart of a search result presentation method provided by an embodiment of the present disclosure;
FIG. 2 shows a schematic illustration of a target video provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a search result display device according to an embodiment of the disclosure;
fig. 4 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
Research shows that when a user searches for multimedia content with a search term, multimedia content of the same language type as the search term is usually presented, and the user cannot browse high-quality multimedia content of other language types, which affects the quality of the search results.
Based on this, the present disclosure provides a search result display method, including: receiving video search information of a user, and determining a content type corresponding to the video search information; determining a target language type corresponding to the video search information in response to the content type matching the target search type; determining a target video matched with the video search information, where the target video information of the target video is obtained by translating the original language expression content in the target video into the target language type; and displaying the target video matched with the video search information and the target video information of the target video in the target language type.
In the embodiment of the disclosure, the multilingual resource search of the multilingual resource library can be started when the content type of the video search information matches the target search type. Specifically, the target language type corresponding to the video search information is determined, and then the target video matched with the video search information is determined, where the target video information of the target video, such as subtitle information and dubbing information, is obtained by translating the original language expression content (namely subtitles and dubbing) in the target video into the target language type, so that more videos whose original language type differs from the target language type can be provided to the user. In addition, because the video provided to the user carries video information translated into the target language type, even if the original language type of the searched video is inconsistent with the language type of the search term, the user can accurately understand the video content from the translated target video information, which reduces the user's cost of understanding to a certain extent and improves browsing efficiency.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
It will be appreciated that prior to using the technical solutions disclosed in the embodiments of the present disclosure, the user should be informed and authorized of the type, usage range, usage scenario, etc. of the personal information related to the present disclosure in an appropriate manner according to the relevant legal regulations.
For the sake of understanding the present embodiment, first, a detailed description will be given of a search result display method disclosed in the embodiments of the present disclosure, where an execution body of the search result display method provided in the embodiments of the present disclosure is generally a computer device with a certain computing capability.
The search result display method provided by the embodiment of the present disclosure is described below by taking an execution body as a terminal device as an example.
Referring to fig. 1, a flowchart of a search result display method according to an embodiment of the disclosure is shown, where the method includes S101 to S104, where:
s101: and receiving video search information of a user, and determining the content type corresponding to the video search information.
The method for displaying the search results provided by the embodiment of the disclosure can be applied to the scene of searching the video by the user, for example, the scene of searching short video, video clips, movie plays, movies and the like, and the method is not particularly limited. The user can acquire the video matched with the video search information by inputting the video search information.
After receiving the input video search information, the content type of the video search information may be determined first, that is, the search intention type of the video search information may be determined.
Here, some search intention categories require acquiring knowledge content (such intentions include needing relatively comprehensive knowledge obtained through language content such as text and dubbing); some require acquiring video content in other language types (such as original videos of foreign celebrities the user likes); and some are search intentions for local information videos. Based on this, the categories of search intent may include multiple target search types that have a comprehensive coverage requirement on the search results, as well as other search types.
The comprehensive coverage requirement may also be called a broad content requirement or a multi-native-language content requirement, i.e., the answer results need to cover the relevant content as completely as possible. The multiple target search types with such a requirement may include, for example, search types whose answers can only be obtained from knowledge in multilingual resources such as knowledge videos and question-answer videos. Under these target search types, the searched videos may contain content that must be understood comprehensively through language content such as text and dubbing, so the searched videos need language translation.
The other search type may be, for example, a search type for local information videos: when a user searches with video search information in the local language, only local video content needs to be searched, and videos in other languages are not needed.
Here, the category of the search intention is determined because, when it matches a target search type — that is, when the content type indicates a search intention with a comprehensive coverage requirement on the search results — there is a need to perform the video search in the multilingual video library. In that case, the target language type corresponding to the video search information may be determined first, and then the target video matching the video search information may be determined, i.e., the steps of S102 to S103.
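As a hedged sketch (not part of the disclosure), the gating decision in S101 — classify the query's content type, then enable multilingual search only for target search types — might look like this; the keyword lists and category labels are illustrative assumptions:

```python
# Sketch: classify a query's search intent and decide whether to enable
# the multi-language video library search. Keywords/labels are assumptions.

KNOWLEDGE_HINTS = {"how", "why", "tutorial", "explain", "what is"}
LOCAL_HINTS = {"near me", "local news", "traffic"}

def classify_content_type(query: str) -> str:
    q = query.lower()
    if any(h in q for h in LOCAL_HINTS):
        return "local_information"   # an "other search type": local results suffice
    if any(h in q for h in KNOWLEDGE_HINTS):
        return "knowledge"           # a target search type: comprehensive coverage
    return "general"

def needs_multilingual_search(content_type: str) -> bool:
    # Only target search types with a comprehensive-coverage requirement
    # trigger a search of the multi-language video library.
    return content_type in {"knowledge", "question_answer"}
```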
Next, a process of determining a target language type corresponding to the video search information in the case where the content type matches the target search type, that is, a step of S102 will be explained.
S102: and determining a target language type corresponding to the video search information in response to the content type matching with the target search type.
Typically, the video search information is expressed as text or speech. The language type corresponding to the video search information can be determined from that text or speech and used as the first language type matched with the video search information. For example, if the user inputs "football match commentary video" in Chinese, the language type corresponding to the video search information can be determined to be Chinese.
Thus, in one embodiment, when determining the target language type corresponding to the video search information, the target language type may be determined according to the first language type matched with the video search information, that is, the first language type is taken as the target language type.
In the above embodiment, the first language type may be considered as a language type frequently used by the user, and then, when the video of the first language type is presented to the user, the loss of interest of the user due to language barrier may be avoided to some extent.
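The first-language-type detection described above can be sketched with a simple script-based heuristic; real systems would use a trained language identifier, and the script ranges below cover only a few writing systems for illustration:

```python
# Sketch: infer the "first language type" from the dominant script of the
# query text. Only a handful of Unicode blocks are checked, for illustration.
from typing import Optional

def first_language_type(query: str) -> Optional[str]:
    counts = {"zh": 0, "ja_kana": 0, "ko": 0, "latin": 0}
    for ch in query:
        cp = ord(ch)
        if 0x4E00 <= cp <= 0x9FFF:          # CJK Unified Ideographs
            counts["zh"] += 1
        elif 0x3040 <= cp <= 0x30FF:        # Hiragana + Katakana
            counts["ja_kana"] += 1
        elif 0xAC00 <= cp <= 0xD7A3:        # Hangul syllables
            counts["ko"] += 1
        elif ch.isalpha() and cp < 0x250:   # basic Latin-script letters
            counts["latin"] += 1
    best, n = max(counts.items(), key=lambda kv: kv[1])
    return best if n > 0 else None          # None: no language determined
```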
In some cases, although a first language type can be determined from the video search information, it is not necessarily a language type the user often uses. Therefore, to determine the target language type more accurately, in one embodiment the target language type may be determined according to both the first language type matched with the video search information and the historical behavior data of the authorized user.
It should be emphasized here that, before the historical behavior data of the authorized user is obtained, the user is informed of the type, the use range, the use scenario, and the like of the personal information related to the disclosure in an appropriate manner according to relevant laws and regulations, the historical behavior data of the user can be obtained only if the user is authorized, and the historical behavior data of the user cannot be obtained if the user is not authorized. The historical behavior data of an authorized user as used in the present disclosure refers to the historical behavior data of the user acquired with the user authorization obtained.
The historical behavior data of the authorized user may include video search information the user has used historically, video content the user has browsed historically, video content the user has published, and the like. From this data, a language type matching the user's historical behavior, i.e., a second language type, can be determined. The target language type may then be determined based on the first language type and the second language type; it may be the first language type, the second language type, or both (for example, when a lesson taught in Chinese is shown to a student who uses English, both English and Chinese may be involved).
Through the embodiment, the target language type is determined by combining the first language type matched with the video search information and the historical behavior data of the authorized user, so that the language type frequently used by the user can be determined more accurately, the video more conforming to the language type can be displayed to the user, and the condition that the interest of the user is lost due to language barriers can be further avoided.
In a further case, although the video search information is expressed as text or speech, the first language type cannot be determined from it. For example, if the user inputs the international Morse code distress signal "SOS", which is not a word of any particular language, the first language type cannot be determined from the video search information. Therefore, in one embodiment, the step of determining the target language type according to the first language type matched with the video search information and the authorized historical behavior data of the user may be performed according to steps 11 to 12:
step 11: and determining at least one second language type used by the user according to the authorized historical behavior data of the user.
Step 12: in response to the first language type not being present in the at least one second language type, determining a second language type that is most frequently used by the user from the at least one second language type as the target language type.
Here, the description of step 11 may refer to the foregoing, and will not be repeated here.
In step 12, the first language type does not exist, that is, the corresponding first language type cannot be determined through the video search information, at this time, the determined at least one second language type may be ranked according to the frequency of use of the user, and the second language type with the highest frequency of use of the user is used as the target language type.
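Steps 11 and 12 can be sketched as follows; the representation of the authorized history as a flat list of language tags is an assumption made for illustration, not part of the disclosure:

```python
# Sketch of steps 11-12: pick the target language type from the query's
# first language type and the user's authorized history. The history format
# (an iterable of language tags) is an illustrative assumption.
from collections import Counter

def target_language_type(first_lang, history_langs):
    """history_langs: language tags drawn from the user's authorized
    historical behavior data (searches, views, uploads)."""
    usage = Counter(history_langs)              # second language types + frequencies
    if first_lang is not None and first_lang in usage:
        return first_lang                       # query language the user also uses
    if usage:
        return usage.most_common(1)[0][0]       # most frequently used second type
    return first_lang                           # no history: fall back to the query
```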
Next, a process of determining a target video that matches the video search information will be explained. It should be noted here that after determining the target language type of the video search information, if the determined target video matching the video search information is a video of the target language type, it is not necessary to translate the native language expression content of the target video. If the determined target video matching the video search information is not a video of the target language type, the step of S103 is performed.
S103: determining a target video matched with the video search information; and the target video information of the target video is obtained after translating the target video information into the target language type based on the original language expression content in the target video.
Here, the native language is a language used when the target video is generated. The native language expression content in the target video may include subtitle information in a native language, dubbing information in a native language, characters in a native language apparent in a frame image, and the like.
In one embodiment, the target video may be selected from candidate videos, and specifically, the target video matched with the video search information may be determined from each candidate video according to target video information of each candidate video in the multi-language video library under the target language type.
Here, the multi-language video library may include candidate videos of various content types and various language types, may include target video information of each candidate video in a target language type, and may include native language expression content of each candidate video.
The target video information of each candidate video under the target language type can be obtained by online translation of the original language expression content of the candidate video, or can be extracted from video information under multiple language types translated offline. In one embodiment, the target video information of each candidate video under the target language type may be obtained according to the following steps:
extracting the original language expression content of each candidate video from a multi-language video library, and carrying out online translation on the original language expression content to obtain target video information; or extracting target video information under the target language type from video information of each candidate video under multiple language types, which is pre-stored in a multi-language video library.
The above steps describe two ways of obtaining the target video information; in a specific implementation, either may be used. The pre-stored video information of each candidate video in multiple language types may already include the target video information in the target language type.
The step of extracting the target video information in the target language type from the pre-stored multi-language video information of each candidate video may be performed first, and the step of extracting the original language expression content of the candidate video and translating it online may be performed when that extraction fails. That is, for each candidate video, when it has no target video information in the target language type, or when that information is out of date, the target video information may be obtained by online translation of the original language expression content. In this way, target video information in the target language type can be obtained in time, so that video information better matching the user's language type can be displayed to the user.
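The cache-then-translate fallback just described can be sketched as follows; the library layout, the version field used for staleness, and the `translate` callable are stand-ins, not APIs named by the disclosure:

```python
# Sketch: serve pre-translated (offline) target video information when it
# exists and is current; otherwise translate the native-language content
# online and store the result. `translate` stands in for any MT call.

def get_target_video_info(video, target_lang, library, translate):
    cached = library.get((video["id"], target_lang))
    if cached is not None and cached["version"] == video["version"]:
        return cached["info"]            # offline-translated copy is current
    # Cache miss or stale copy: translate the native-language content online.
    info = translate(video["native_content"], video["native_lang"], target_lang)
    library[(video["id"], target_lang)] = {"info": info, "version": video["version"]}
    return info
```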
The target video information of the target video may include subtitle information, dubbing information, and the like.
The subtitle information in the target video information may include: information obtained by translating the subtitle information in the original language in the candidate video, and/or text information generated by performing semantic recognition on each video frame in the candidate video.
Here, the subtitle information in the original language in the candidate video is text information, and the subtitle information in the target language type can be obtained by translating the text information in the original language.
Each video frame in the candidate video may contain video content, and that content may include picture content and text content, where the text content may include subtitle information and non-subtitle information. When the candidate video has no corresponding subtitle information, subtitle information in the target language type can be generated by performing semantic recognition on the video content. When the candidate video does have corresponding subtitle information, the subtitle information contained in each video frame can either be semantically recognized to generate subtitle information in the target language type, or be directly translated into subtitle information in the target language type.
In one mode, when the candidate video contains dubbing in the original language, subtitle information in the target language type may also be obtained by translating that original-language dubbing.
The dubbing information in the target video information may include: information obtained by translating the dubbing content in the original language in the candidate video, and/or information obtained by dubbing the text information produced by semantic recognition of each video frame in the candidate video.

That is, by translating the dubbing content in the original language in the candidate video and dubbing the result, dubbing information in the target language type can be obtained.

Each video frame in the candidate video may contain video content, and that content may include picture content and text content, where the text content may include subtitle information and non-subtitle information. When the candidate video has no corresponding subtitle information, text information in the target language type can be generated by performing semantic recognition on the video content, and dubbing that text information finally yields dubbing information in the target language type. When the candidate video does have corresponding subtitle information, the subtitle information contained in each video frame can either be semantically recognized to generate text information in the target language type, or be directly translated into text information in the target language type; in either case, dubbing the resulting text information finally yields dubbing information in the target language type.
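The two routes to dubbing information described above can be sketched as follows; the stub `translate`, `synthesize`, and `recognize_frames` functions are hypothetical placeholders for real translation, text-to-speech, and semantic-recognition services:

```python
def build_dubbing(candidate, target_lang, translate, synthesize, recognize_frames):
    """Produce dubbing information in `target_lang` for a candidate video.

    If the video already has subtitles, translate them directly; otherwise
    recover text from the frames by semantic recognition first. The
    translated text is then synthesized into speech.
    """
    if candidate.get("subtitles"):
        text = translate(candidate["subtitles"], target_lang)
    else:
        text = translate(recognize_frames(candidate["frames"]), target_lang)
    return synthesize(text, target_lang)


# Usage with stub translation / TTS / recognition functions.
translate = lambda text, lang: f"{text}->{lang}"
synthesize = lambda text, lang: ("audio", text)
recognize = lambda frames: "frame text"

with_subs = {"subtitles": "bonjour", "frames": []}
no_subs = {"subtitles": None, "frames": ["f0", "f1"]}
assert build_dubbing(with_subs, "en", translate, synthesize, recognize) == ("audio", "bonjour->en")
assert build_dubbing(no_subs, "en", translate, synthesize, recognize) == ("audio", "frame text->en")
```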
In order to improve the matching degree between the searched video and the searching intention of the user, each candidate video in the multilingual video library may be stored after being filtered. In one embodiment, each candidate video in the multilingual video library may be obtained according to steps 21 to 23:
Step 21: and determining a first candidate video from the videos according to the video attribute information of the videos.
Step 22: and determining a second candidate video from the first candidate videos based on the interaction data corresponding to the first candidate videos.
Step 23: and performing de-duplication processing on each second candidate video based on the similarity between the second candidate videos to obtain a third candidate video, and storing the third candidate video into the multi-language video library.
In the above embodiment, the video attribute information may include information of a video type, a video quality, and the like.
Wherein the video type may refer to the content type of the video. The video quality may include information such as duration of play, frequency of play, popularity, etc. By screening the first candidate video according to video quality, a premium video that is more popular with the user may be presented to the user.
The first candidate videos determined may be videos whose original language expression content can be translated into target video information in a target language type. For example, a dance video generally contains no original language expression content, since a user can understand the dance simply by watching the dancer's movements, so such a video need not be added to the multilingual video library. The first candidate videos determined may be, for example, of knowledge, question-and-answer, or how-to types, that is, videos that contain original language expression content that needs to be translated into target video information in a target language type.
The interaction data corresponding to each first candidate video may include data generated when the user interacts with the video, for example the like count, favorite count, play count, and play duration. According to this interaction data, second candidate videos with which users interact more frequently can be selected from the first candidate videos.

In a specific implementation, for example, a like-count threshold, a favorite-count threshold, a play-count threshold, and a play-duration threshold may be set, and a first candidate video whose interaction data exceeds the corresponding threshold may be determined to be a second candidate video. For example, a first candidate video whose like count exceeds the like-count threshold may be determined to be a second candidate video, as may one whose play count exceeds the play-count threshold. In a specific implementation, the second candidate videos may also be screened by combining the interaction data of several of these aspects, which is not particularly limited here.
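The threshold-based screening above can be sketched as follows; the metric names and threshold values are illustrative assumptions, since the disclosure does not fix specific values:

```python
# Illustrative thresholds; the disclosure does not specify actual values.
THRESHOLDS = {"likes": 100, "favorites": 50, "plays": 1000, "play_seconds": 30}

def select_second_candidates(first_candidates):
    """Keep first candidate videos whose interaction data exceeds any
    of the per-metric thresholds."""
    return [
        video for video in first_candidates
        if any(video["stats"].get(metric, 0) > limit
               for metric, limit in THRESHOLDS.items())
    ]


videos = [
    {"id": "a", "stats": {"likes": 500}},           # passes the like threshold
    {"id": "b", "stats": {"likes": 3, "plays": 7}}  # passes no threshold
]
assert [v["id"] for v in select_second_candidates(videos)] == ["a"]
```

A real implementation could also require several thresholds to be exceeded at once; the disclosure leaves that combination open.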
To reduce the number of videos with repeated content shown to the user, in an implementation, deduplication may be performed on the second candidate videos according to the similarity between them.
Here, the similarity between the respective second candidate videos may refer to the similarity in content. In one embodiment, the similarity of the content may be determined according to the similarity between the target video information corresponding to each of the second candidate videos. In another embodiment, the similarity of the content may also be determined according to the similarity of the frame images in each of the second candidate videos. There may be no particular limitation here.
Finally, the third candidate videos obtained after deduplication can be stored in the multilingual video library.
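The similarity-based deduplication can be sketched as a greedy filter; the word-overlap `jaccard` measure below is one illustrative choice of similarity function, not one prescribed by the disclosure:

```python
def deduplicate(videos, similarity, threshold=0.9):
    """Greedy deduplication: keep a video only if it is not too similar
    to any already-kept video."""
    kept = []
    for video in videos:
        if all(similarity(video, other) < threshold for other in kept):
            kept.append(video)
    return kept


def jaccard(a, b):
    """Word-overlap similarity between two subtitle strings."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)


subs = ["how to cook pasta", "how to cook pasta", "football highlights"]
assert deduplicate(subs, jaccard) == ["how to cook pasta", "football highlights"]
```

Frame-image similarity, the other option mentioned above, would plug in the same way: only the `similarity` callable changes.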
To facilitate quickly determining a target video matching the video search information from among the candidate videos, in one embodiment, index information for searching for the target video may be set. Specifically, when the step of determining the target video matching the video search information is performed, the target video may be determined according to index information in the target video information corresponding to each candidate video, where the index information is obtained by extracting key information from the target video information.
Here, the index information may be a keyword in the target video information. Because the target video information corresponds to the target video, the target video information which corresponds to the index information and is matched with the video search information can be determined according to the index information; then, based on the target video information, a target video matching the video search information can be determined.
In the process of determining, according to the index information, the target video information matching the video search information, either of two approaches may be used: the target video information matching the video search information may be determined according to the similarity between the video search information and the target video information corresponding to each piece of index information; or target index information may first be determined according to the similarity between the index information and the video search information, and the target video information corresponding to that target index information then taken as the target video information matching the video search information.
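The keyword-index lookup described above can be sketched as a small inverted index; keyword extraction is reduced to whitespace splitting purely for illustration:

```python
from collections import defaultdict

def build_index(candidates, extract_keywords):
    """Map each keyword extracted from a video's target video
    information to the set of video ids containing it."""
    index = defaultdict(set)
    for video_id, info in candidates.items():
        for keyword in extract_keywords(info):
            index[keyword].add(video_id)
    return index

def search(index, query_keywords):
    """Rank video ids by how many query keywords they match."""
    hits = defaultdict(int)
    for keyword in query_keywords:
        for video_id in index.get(keyword, ()):
            hits[video_id] += 1
    return sorted(hits, key=hits.get, reverse=True)


catalog = {"v1": "football match commentary", "v2": "pasta recipe"}
index = build_index(catalog, lambda info: info.split())
assert search(index, ["football", "match"]) == ["v1"]
assert search(index, ["recipe"]) == ["v2"]
```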
S104: and displaying the target video matched with the video searching information and target video information of the target video under the target language type.
Here, when the target video information is subtitle information, the subtitle information may be displayed along with each video frame of the target video, either rendered within each video frame or overlaid on top of each video frame. In one embodiment, the subtitle information in the target language type may be displayed in synchronization with the subtitle information in the original language, so that the user can better understand the content of the target video.

When the target video information is dubbing information, the dubbing information may be played in synchronization with the target video. In one embodiment, the subtitle information and the dubbing information may also be presented simultaneously, so that the user can better understand the content of the target video.
Fig. 2 shows a schematic presentation of a target video. In Fig. 2, the user inputs the video search information "football rebroadcast" in Chinese. The target video determined to match this search information is a football match commentary video whose original language is English. While the football match commentary video is displayed, the Chinese subtitles translated from its English subtitles can be shown, and the original English subtitles can be shown at the same time. The display positions of the English and Chinese subtitles in Fig. 2 are merely illustrative; their positions in a specific display process are not particularly limited.
With the search result presentation method provided by this embodiment, a multilingual resource search of the multilingual resource library can be started when the content type of the video search information matches the target search type. Specifically, the target language type corresponding to the video search information is determined, and then the target video matching the video search information is determined, where the target video information of the target video, such as subtitle information and dubbing information, is obtained by translating the original language expression content (namely, subtitles and dubbing) in the target video into the target language type, so that more videos whose original language type differs from the target language type can be provided to the user. In addition, because the videos provided to the user contain video information translated into the target language type, even if the original language type of a found video is inconsistent with the language type of the search terms, the user can accurately understand the video content from the translated target video information, which reduces the user's comprehension cost to a certain extent and improves browsing efficiency.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same inventive concept, the embodiments of the present disclosure further provide a search result presentation apparatus corresponding to the search result presentation method. Since the principle by which the apparatus solves the problem is similar to that of the search result presentation method, the implementation of the apparatus may refer to the implementation of the method, and repeated description is omitted.
Referring to fig. 3, an architecture diagram of a search result display apparatus according to an embodiment of the disclosure is shown, where the apparatus includes:
a first determining module 301, configured to receive video search information of a user, and determine a content type corresponding to the video search information;
a second determining module 302, configured to determine a target language type corresponding to the video search information in response to the content type matching the target search type;
a third determining module 303, configured to determine a target video that matches the video search information; the target video information of the target video is obtained by translating the original language expression content in the target video into the target language type;
And the display module 304 is configured to display the target video matched with the video search information and target video information of the target video under the target language type.
In an alternative embodiment, the content type is used to indicate the category of search intent matched by the video search information, and the categories of search intent include a plurality of target search types that require comprehensive coverage of the search results, as well as other search types.
In an alternative embodiment, the third determining module 303 is specifically configured to:
and determining target videos matched with the video search information from the candidate videos according to target video information of the candidate videos in the multi-language video library under the target language type.
In an alternative embodiment, the apparatus further comprises:
the extraction module is used for extracting the original language expression content of each candidate video from the multi-language video library and obtaining the target video information by translating the original language expression content online; or

extracting the target video information in the target language type from the video information of each candidate video in multiple language types pre-stored in the multi-language video library.
In an alternative embodiment, the second determining module 302 is specifically configured to:
and determining the target language type according to the first language type matched with the video search information and the historical behavior data of the user which is obtained through authorization.
In an alternative embodiment, the second determining module 302 is specifically configured to:
determining at least one second language type used by the user according to the authorized historical behavior data of the user;
and in response to the first language type not being present in the at least one second language type, determining, from the at least one second language type, the second language type used most frequently by the user as the target language type.
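The language-type fallback implemented by this module can be sketched as follows, assuming the authorized history is simply a list of language codes (an illustrative representation, not one fixed by the disclosure):

```python
from collections import Counter

def pick_target_language(query_lang, history_langs):
    """Use the language of the search terms if it appears in the user's
    (authorized) history; otherwise fall back to the language the user
    uses most often. With no history, keep the query language."""
    if not history_langs:
        return query_lang
    counts = Counter(history_langs)
    if query_lang in counts:
        return query_lang
    return counts.most_common(1)[0][0]


assert pick_target_language("en", ["fr", "fr", "en"]) == "en"
assert pick_target_language("de", ["fr", "fr", "en"]) == "fr"
assert pick_target_language("de", []) == "de"
```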
In an alternative embodiment, the apparatus further comprises:
a fourth determining module, configured to determine a first candidate video from each video according to video attribute information of each video;
a fifth determining module, configured to determine a second candidate video from the first candidate videos based on interaction data corresponding to the first candidate videos;
and the de-duplication module is used for de-duplication processing each second candidate video based on the similarity between the second candidate videos to obtain a third candidate video, and storing the third candidate video into the multi-language video library.
In an alternative embodiment, the target video information of each candidate video includes subtitle information;
the caption information includes: information obtained by translating the subtitle information in the original language in the candidate video, and/or text information obtained by performing semantic recognition on each video frame in the candidate video.
In an alternative embodiment, the target video information of each candidate video includes dubbing information;
the dubbing information includes: information obtained by translating the dubbing content in the original language in the candidate video, and/or information obtained by dubbing the text information produced by semantic recognition of each video frame in the candidate video.
In an alternative embodiment, the third determining module 303 is specifically configured to:
determining target videos matched with the video search information according to index information in the target video information corresponding to each candidate video; the index information is obtained by extracting key information from the target video information.
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
Based on the same technical concept, an embodiment of the present disclosure further provides a computer device. Referring to Fig. 4, a schematic structural diagram of a computer device 400 according to an embodiment of the disclosure includes a processor 401, a memory 402, and a bus 403. The memory 402 is configured to store execution instructions and includes an internal memory 4021 and an external memory 4022; the internal memory 4021 temporarily stores operation data for the processor 401 and data exchanged with the external memory 4022 (such as a hard disk), and the processor 401 exchanges data with the external memory 4022 through the internal memory 4021. When the computer device 400 runs, the processor 401 and the memory 402 communicate through the bus 403, so that the processor 401 executes the following instructions:
receiving video search information of a user, and determining a content type corresponding to the video search information;
determining a target language type corresponding to the video search information in response to the content type being matched with the target search type;
determining a target video matched with the video search information; the target video information of the target video is obtained by translating the original language expression content in the target video into the target language type;
And displaying the target video matched with the video searching information and target video information of the target video under the target language type.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the search result presentation method described in the method embodiments above. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiments of the present disclosure further provide a computer program product, where the computer program product carries program code, where instructions included in the program code may be used to perform steps of the search result display method described in the foregoing method embodiments, and specifically reference may be made to the foregoing method embodiments, which are not described herein in detail.
Wherein the above-mentioned computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment and is not described here again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that the foregoing examples are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope disclosed herein, still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the disclosure, and shall all be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

1. A search result presentation method, comprising:
receiving video search information of a user, and determining a content type corresponding to the video search information;
determining a target language type corresponding to the video search information in response to the content type being matched with the target search type;
determining a target video matched with the video search information; wherein target video information of the target video is obtained by translating original language expression content in the target video into the target language type;
and displaying the target video matched with the video searching information and target video information of the target video under the target language type.
2. The method of claim 1, wherein the content type is used to indicate a category of search intents that the video search information matches, the category of search intents including a plurality of target search types having a comprehensive coverage requirement for search results, and other search types.
3. The method of claim 1, wherein the determining a target video that matches the video search information comprises:
and determining target videos matched with the video search information from the candidate videos according to target video information of the candidate videos in the multi-language video library under the target language type.
4. A method according to claim 3, wherein the target video information of each candidate video in the target language type is obtained according to the steps of:
extracting the original language expression content of each candidate video from the multi-language video library, and obtaining the target video information by translating the original language expression content online; or
extracting the target video information under the target language type from video information of each candidate video under multiple language types, which is pre-stored in the multi-language video library.
5. The method of claim 1, wherein determining the target language type to which the video search information corresponds comprises:
and determining the target language type according to the first language type matched with the video search information and the historical behavior data of the user which is obtained through authorization.
6. The method of claim 5, wherein the determining the target language type based on the first language type matched by the video search information and the authorized historical behavioral data of the user comprises:
Determining at least one second language type used by the user according to the authorized historical behavior data of the user;
and in response to the first language type not being present in the at least one second language type, determining a second language type with highest frequency of user use from the at least one second language type as the target language type.
7. The method of claim 3, wherein each candidate video in the multi-lingual video library is obtained according to the steps of:
determining a first candidate video from each video according to the video attribute information of each video;
determining a second candidate video from the first candidate videos based on interaction data corresponding to the first candidate videos;
and performing de-duplication processing on each second candidate video based on the similarity between the second candidate videos to obtain a third candidate video, and storing the third candidate video into the multi-language video library.
8. The method of claim 3, wherein the target video information of each candidate video includes subtitle information;
the caption information includes: information obtained by translating the subtitle information in the original language in the candidate video, and/or text information obtained by performing semantic recognition on each video frame in the candidate video.
9. The method of claim 3, wherein the target video information for each candidate video comprises dubbing information;
the dubbing information includes: information obtained by translating the dubbing content in the original language in the candidate video, and/or information obtained by dubbing the text information produced by semantic recognition of each video frame in the candidate video.
10. The method of claim 3, wherein said determining a target video from said candidate videos that matches said video search information comprises:
determining target videos matched with the video search information according to index information in the target video information corresponding to each candidate video; the index information is obtained by extracting key information from the target video information.
11. A search result presentation apparatus, comprising:
the first determining module is used for receiving video searching information of a user and determining a content type corresponding to the video searching information;
the second determining module is used for determining a target language type corresponding to the video search information in response to the fact that the content type is matched with the target search type;
A third determining module, configured to determine a target video that matches the video search information; the target video information of the target video is obtained by translating the original language expression content in the target video into the target language type;
and the display module is used for displaying the target video matched with the video search information and the target video information of the target video under the target language type.
12. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the search result presentation method of any one of claims 1 to 10.
13. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the search result presentation method of any of claims 1 to 10.
CN202310687075.0A 2023-06-09 2023-06-09 Search result display method and device, computer equipment and storage medium Pending CN116644246A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310687075.0A CN116644246A (en) 2023-06-09 2023-06-09 Search result display method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310687075.0A CN116644246A (en) 2023-06-09 2023-06-09 Search result display method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116644246A true CN116644246A (en) 2023-08-25

Family

ID=87624635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310687075.0A Pending CN116644246A (en) 2023-06-09 2023-06-09 Search result display method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116644246A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117808013A (en) * 2024-02-29 2024-04-02 济南幼儿师范高等专科学校 Interactive multi-language communication system

Similar Documents

Publication Publication Date Title
US9100701B2 (en) Enhanced video systems and methods
US10652592B2 (en) Named entity disambiguation for providing TV content enrichment
CN109558513A (en) A kind of content recommendation method, device, terminal and storage medium
US7747429B2 (en) Data summarization method and apparatus
CN108924658B (en) Bullet screen association input method and device and computer readable storage medium
CN103069414A (en) Information processing device, information processing method, and program
CN103052953A (en) Information processing device, method of processing information, and program
WO2015054627A1 (en) Methods and systems for aggregation and organization of multimedia data acquired from a plurality of sources
CN112733654B (en) Method and device for splitting video
US11302361B2 (en) Apparatus for video searching using multi-modal criteria and method thereof
US11687714B2 (en) Systems and methods for generating text descriptive of digital images
CN110287375B (en) Method and device for determining video tag and server
Bai et al. Discriminative latent semantic graph for video captioning
CN109101505B (en) Recommendation method, recommendation device and device for recommendation
Roy et al. Tvd: a reproducible and multiply aligned tv series dataset
CN116644246A (en) Search result display method and device, computer equipment and storage medium
US20240103697A1 (en) Video display method and apparatus, and computer device and storage medium
CN113987274A (en) Video semantic representation method and device, electronic equipment and storage medium
US20230326369A1 (en) Method and apparatus for generating sign language video, computer device, and storage medium
US20100281046A1 (en) Method and web server of processing a dynamic picture for searching purpose
WO2023116785A1 (en) Information display method and apparatus, computer device, and storage medium
CN114402384A (en) Data processing method, device, server and storage medium
CN116541114A (en) Information display method, device, computer equipment and storage medium
Tapu et al. TV news retrieval based on story segmentation and concept association
CN113821677A (en) Method, device and equipment for generating cover image and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination