CN112214695A - Information processing method and device and electronic equipment - Google Patents

Information processing method and device and electronic equipment

Info

Publication number: CN112214695A
Application number: CN201910630009.3A
Authority: CN (China)
Prior art keywords: video, search, candidate, determining, category
Legal status: Pending (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 张宁静
Current and original assignee: Beijing ByteDance Network Technology Co Ltd
Application filed by Beijing ByteDance Network Technology Co Ltd

Classifications

    • G06F16/954: Physics; Computing; Electric digital data processing; Information retrieval from the web; Navigation, e.g. using categorised browsing
    • G06F16/73: Information retrieval of video data; Querying
    • G06F16/7867: Retrieval of video data characterised by metadata generated manually, e.g. tags, keywords, comments, title and artist information

Abstract

The embodiment of the disclosure discloses an information processing method, an information processing apparatus, and an electronic device. One embodiment of the method comprises: receiving a search keyword input by a user; determining at least one piece of video data matching the search keyword and at least one search guide word; and sending the at least one piece of video data and the at least one search guide word to a terminal device so that the terminal device displays them in the same page. The search guide word is used for guiding the user to further obtain at least one piece of candidate video data corresponding to the search guide word. This reduces the operations the user must perform to obtain related information and saves the user's time.

Description

Information processing method and device and electronic equipment
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to an information processing method and apparatus, and an electronic device.
Background
With the development of the internet, users increasingly use terminal devices to browse various kinds of information. A user may enter a search keyword in a search window of an information stream to obtain a plurality of search results. A search result here may be, for example, text information, picture information, and/or video information.
Taking video information as an example: after a user browses a piece of video data, the user may wish to browse other video data associated with it. At present, to acquire such other video data, the user must either re-enter information corresponding to the other video data in the search window, or click a link corresponding to the other video data.
Disclosure of Invention
This summary is provided to introduce concepts in a simplified form that are further described below in the detailed description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter.
The embodiments of the disclosure provide an information processing method and apparatus, and an electronic device, which display a search guide word in the page presenting a user's first search results. The search guide word guides the user to browse other video data associated with the first search results, which reduces the operations the user performs to obtain that information and saves the user's time.
In a first aspect, an embodiment of the present disclosure provides an information processing method, applied to a server, the method including: receiving a search keyword input by a user; determining at least one piece of video data matching the search keyword and at least one search guide word; and sending the at least one piece of video data and the at least one search guide word to a terminal so that the terminal displays them in the same page; the search guide word is used for guiding the user to further search for candidate video data corresponding to the search guide word.
In a second aspect, an embodiment of the present disclosure provides an information processing method, applied to a terminal, including: responding to a received search keyword input by a user, and sending an information acquisition request to a server, wherein the information acquisition request comprises the search keyword; receiving at least one video data and at least one search guide word sent by a server; displaying the at least one video data and at least one search guide word in the same display page; wherein the at least one video data and the at least one search guide word are determined based on the information processing method of the first aspect.
In a third aspect, an embodiment of the present disclosure provides an information processing apparatus, applied to a server, including: a receiving unit for receiving a search keyword input by a user; a determining unit for determining at least one video data matching the search keyword and at least one search guide word; the sending unit is used for sending the at least one piece of video data and the at least one search guide word to a terminal so that the terminal can display the at least one piece of video data and the at least one search guide word in the same page; the search guide word is used for guiding the user to further search candidate video data corresponding to the search guide word.
In a fourth aspect, an embodiment of the present disclosure provides an information processing apparatus, applied to a terminal, including: a request unit for sending, in response to receiving a search keyword input by a user, an information acquisition request to a server, the information acquisition request including the search keyword; a data receiving unit for receiving at least one piece of video data and at least one search guide word sent by the server; and a display unit for displaying the at least one piece of video data and the at least one search guide word in the same display page; wherein the at least one piece of video data and the at least one search guide word are determined by the information processing apparatus of the third aspect.
In a fifth aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the information processing method according to the first aspect.
In a sixth aspect, the disclosed embodiments provide a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the steps of the information processing method according to the first aspect.
According to the information processing method and apparatus and the electronic device, a search keyword input by a user is first received; then at least one piece of video data matching the search keyword and at least one search guide word are determined; finally, the at least one piece of video data and the at least one search guide word are sent to the terminal device, so that the terminal device displays them in the same page. The search guide word is used for guiding the user to further obtain at least one piece of candidate video data corresponding to it. In this way, the search guide words are displayed in the page presenting the user's initial search results and guide the user to browse other video data associated with those results, reducing the operations needed to acquire the information and saving the user's time.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a flow diagram of one embodiment of an information processing method according to the present disclosure;
FIG. 2 illustrates the steps of determining search guide words in the embodiment shown in FIG. 1;
FIG. 3 is a flow diagram of another embodiment of an information processing method according to the present disclosure;
fig. 4 is a schematic diagram of an application scenario of an information processing method provided by the embodiment of the present disclosure;
FIG. 5 is a schematic block diagram of one embodiment of an information processing apparatus according to the present disclosure;
FIG. 6 is a schematic block diagram of another embodiment of an information processing apparatus according to the present disclosure;
FIG. 7 is an exemplary system architecture to which the information processing method of one embodiment of the present disclosure may be applied;
fig. 8 is a schematic diagram of a basic structure of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Referring to fig. 1, a flow of one embodiment of an information processing method according to the present disclosure is shown. The information processing method is applied to the server. The information processing method as shown in fig. 1 includes the steps of:
step 101, receiving a search keyword input by a user.
The user can use the terminal to interact with the server through the network. For example, a user may input a search keyword in various application clients installed in the terminal to initiate a search request to a server that is in communication connection with the terminal through the application clients. The application client can be a search application client, a web browser application client, a shopping application client, an instant messaging tool client, a mailbox client and the like. The network may be any of various existing and future developed wired or wireless communication networks.
The search keyword here may be a word or the like.
Step 102, determining at least one piece of video data matching the search keyword and at least one search guide word.
The video data matching the search keyword may be determined in various ways, for example by using a web crawler to crawl matching video data from the internet, or by looking the video data up in a preset database.
In addition, search guide words that match the search keywords may also be determined. The search guide word here may be a word that is set in advance and corresponds to the search keyword. The search guide word may correspond to a plurality of candidate video data.
Step 103, sending at least one piece of video data and at least one search guide word to the terminal device, so that the terminal device displays the at least one piece of video data and the at least one search guide word in the same page.
The at least one piece of video data and the at least one search guide word determined in step 102 may be transmitted to the terminal device. After receiving them, the terminal device may present the at least one piece of video data and the at least one search guide word in the same page. The at least one search guide word may be presented in the same line. If there are too many search guide words for the page size matched to the terminal device's screen to display at once, different search guide words can be revealed through a sliding operation performed on the line where they are located.
It should be noted that the search guidance words may be displayed in the same row or in the same column.
It should also be noted that the row in which the search guide words are located may be placed at any specified position in the page. Preferably, the search guide words may be placed below the search window of the page and above the search result presentation areas.
In the method provided by the above embodiment of the present disclosure, a search keyword input by a user is received; then at least one piece of video data matching the search keyword and at least one search guide word are determined; finally, the at least one piece of video data and the at least one search guide word are sent to the terminal device, so that the terminal device displays them in the same page. The search guide word is used for guiding the user to further obtain at least one piece of candidate video data corresponding to it. Thus, the search guide words are displayed in the page presenting the user's initial search results and guide the user to browse other video data related to those results, reducing the operations needed to obtain related information and saving the user's time.
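As a minimal illustrative sketch of this server-side flow (steps 101 to 103), the following Python fragment uses a tiny in-memory store; the store, the helper names, and the response format are assumptions for illustration only and are not part of the disclosure:

```python
# Illustrative sketch only; the data store and names are hypothetical.
PRESET_DATABASE = {
    "video 1: cat compilation": ["cat", "pet"],
    "video 2: cat food review": ["cat", "cat food"],
    "video 3: lovely pet moments": ["pet", "lovely pet"],
}

def find_matching_videos(keyword):
    # Part of step 102: videos whose stored keywords contain the query.
    return [title for title, kws in PRESET_DATABASE.items() if keyword in kws]

def determine_guide_words(keyword):
    # Part of step 102: other keywords of matching videos serve as guide
    # words (a stand-in for the association-degree computation of Fig. 2).
    words = {kw for kws in PRESET_DATABASE.values() if keyword in kws for kw in kws}
    words.discard(keyword)
    return sorted(words)

def handle_search_request(search_keyword):
    # Step 101: the search keyword received from the user's terminal.
    videos = find_matching_videos(search_keyword)
    guide_words = determine_guide_words(search_keyword)
    # Step 103: return both together so the terminal displays them in one page.
    return {"videos": videos, "guide_words": guide_words}

print(handle_search_request("cat"))
# {'videos': ['video 1: cat compilation', 'video 2: cat food review'],
#  'guide_words': ['cat food', 'pet']}
```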
In some alternative implementations, please refer to fig. 2, which illustrates the steps of determining a search guide word in the information processing method shown in fig. 1.
As shown in fig. 2, the search guide word is determined based on the following steps:
step 201, determining the association degree between the search keyword and the keyword corresponding to each of the plurality of candidate videos stored in the preset database.
Step 202, taking, as search guide words, the keywords corresponding to at least one candidate video whose association degree with the search keyword is greater than a first preset threshold.
The first preset threshold here may be, for example, 0.4, 0.5, etc. The first preset threshold may be set according to specific applications, and is not limited herein.
In some application scenarios, the step of determining the search guidance word further includes the following steps:
step 203, determining at least one keyword corresponding to the search result;
the search result here may be video data. The video data may include textual information corresponding to the video. At least one keyword corresponding to the search result can be extracted from the text information corresponding to the video.
The keywords corresponding to the search results can be extracted in real time, or extracted in advance and stored in a preset database in association with the search results.
It should be noted that extracting keywords from text information is a well-known technique that is widely researched and applied at present, and is not described herein again.
Step 204, determining, as search guide words, the keywords corresponding to at least one candidate video in the preset database whose association degree with the at least one keyword corresponding to the search result is greater than a second preset threshold.
It should be noted that the search guidance word may be determined by using step 201 and step 202 alone, or may be determined by using step 203 and step 204 alone. Search guide words may also be determined using steps 201 through 204.
The second preset threshold value here may be, for example, 0.6, 0.7, etc. The specific value of the second preset threshold may be set according to an actual application scenario, and is not limited herein.
The preset database can store a plurality of videos and keywords respectively corresponding to the videos in an associated manner. Where each video may correspond to at least one keyword. The keywords may be extracted from text information corresponding to the video. For example, the keyword corresponding to each video may be extracted from the subtitle corresponding to the video.
It should be noted that the preset database may be a local database of the server, or may reside on a remote electronic device communicatively connected to the server.
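The description does not fix how the association degree is computed; the following sketch uses cosine similarity over word counts purely as a stand-in measure, with an illustrative first preset threshold of 0.5:

```python
# Illustrative sketch of steps 201-202; the similarity measure is an assumption.
from collections import Counter
import math

def association_degree(a, b):
    # Cosine similarity over word counts: one possible association measure.
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

FIRST_THRESHOLD = 0.5  # e.g. 0.4 or 0.5, as suggested above

def select_guide_words(search_keyword, candidate_keywords):
    # Keep the candidate videos' keywords whose association degree with
    # the search keyword exceeds the first preset threshold.
    return [kw for kw in candidate_keywords
            if association_degree(search_keyword, kw) > FIRST_THRESHOLD]

print(select_guide_words("cat food", ["cat", "dog food", "cat toys", "wallpaper"]))
# ['cat']  ("dog food" and "cat toys" score exactly 0.5, not above the threshold)
```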
In some embodiments, the keywords corresponding to the candidate videos include categories corresponding to the candidate videos. In these embodiments, the category corresponding to the candidate video is determined based on the following steps:
firstly, performing video identification on the candidate video to obtain a video identification result.
The candidate video may be video-identified using various existing video identification methods. The video recognition result here may include an initial classification of the video.
In some application scenarios, the above-mentioned performing video identification on the candidate video to obtain a video identification result may specifically include the following steps:
firstly, frame extraction is carried out on the candidate video according to a preset frame extraction rule, and a plurality of frame extraction results are obtained.
Each of the frame extraction results may correspond to an image included in the candidate video.
Typically a candidate video may comprise a plurality of images. The preset frame extraction rule may include, for example, extracting an image from the candidate video at preset time intervals. The preset time interval here may be, for example, 10 seconds, 20 seconds, 30 seconds, or the like. The preset time interval may be determined according to the size of the candidate video, and is not limited herein.
The preset frame extraction rule may further include extracting candidate video key frames. Here, the key frame of the candidate video refers to one frame image of the candidate video whose picture content has changed greatly from the previous frame image.
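A minimal sketch of the interval-based frame extraction rule follows (the key-frame variant would instead compare successive frames); it assumes the opencv-python package, and the function name is illustrative:

```python
# Illustrative sketch: sample one frame every `interval_s` seconds.
import cv2

def extract_frames(video_path, interval_s=10.0):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unreported
    step = max(1, int(round(fps * interval_s)))
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:  # keep one frame per preset time interval
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```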
And secondly, extracting video characteristics corresponding to each frame extraction result.
The video features corresponding to the frame extraction results can be extracted using existing image feature extraction methods, such as principal component analysis, linear discriminant analysis, multidimensional scaling, kernel principal component analysis, and manifold-learning-based methods.
It should be noted that principal component analysis, linear discriminant analysis, multidimensional scaling, kernel principal component analysis, and manifold learning are well-known technologies that are widely researched and applied at present, and are not described herein again.
Thirdly, de-duplicating the video features corresponding to the plurality of frame extraction results.
This step eliminates duplicate video features.
Fourthly, matching the de-duplicated video features of the candidate video with the sample image features in a pre-established sample image feature library, and determining the initial category corresponding to the candidate video according to the matching result.
Here, the de-duplicated video features corresponding to the frame extraction results are taken as the video features of the candidate video.
The pre-established sample image feature library may store a plurality of sample images and image features corresponding to the sample images in an associated manner. Each sample image may comprise, for example, an image of an object of a known class.
Here, the initial category refers to a category corresponding to a candidate video obtained by video recognition.
It should be noted that, here, the initial category corresponding to the candidate video may be at least one category. As one schematic illustration, for example, the candidate video includes a car image, a person image, and a road image. The initial category to which the candidate video corresponds may be a "car" category, a "people" category, or a "roads" category, etc.
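The second through fourth steps might be sketched as follows; the cosine-based duplicate test, the 0.9 threshold, and the assumption that sample image features are L2-normalized are all illustrative choices, not the disclosed method:

```python
# Illustrative sketch of feature de-duplication and sample-library matching.
import numpy as np

def deduplicate(features, threshold=0.9):
    # Drop near-duplicate per-frame features; cosine similarity above the
    # threshold counts as a duplicate (0.9 is an illustrative choice).
    kept = []
    for f in features:
        f = f / (np.linalg.norm(f) + 1e-12)
        if all(float(f @ k) < threshold for k in kept):
            kept.append(f)
    return np.stack(kept)

def initial_categories(frame_features, sample_features, sample_labels):
    # Match each de-duplicated video feature against the sample image
    # feature library (rows assumed L2-normalized) and collect the matched
    # samples' known categories as the initial category set.
    deduped = deduplicate(frame_features)
    similarities = deduped @ sample_features.T  # rows: frames, cols: samples
    return {sample_labels[int(i)] for i in similarities.argmax(axis=1)}
```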
And secondly, extracting text information corresponding to the candidate video.
The text information corresponding to the candidate video may be, for example, subtitle information of the candidate video. It may also be the title of the candidate video, comment information corresponding to the candidate video, and the like.
And thirdly, determining the category corresponding to the candidate video according to the video identification result and the text information.
A second category corresponding to the candidate video can be determined from the text content of the extracted text information. For example, keywords may be extracted from the text content, and the second category determined from those keywords. The second category here may likewise comprise a plurality of categories.
The category corresponding to the candidate video can then be determined from the initial category and the second category. For example, it may be the categories on which the initial category and the second category coincide, or it may comprise both the initial category and the second category.
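A one-function sketch of this fusion step, covering both policies mentioned above (coincidence first, union otherwise); the fallback ordering is an illustrative assumption:

```python
# Illustrative sketch: fuse recognition-based and text-based categories.
def fuse_categories(initial, second):
    # Prefer the categories on which video recognition and text agree;
    # fall back to keeping both sets when there is no overlap.
    agreed = initial & second
    return agreed if agreed else initial | second

print(fuse_categories({"car", "people"}, {"people", "road"}))  # {'people'}
```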
In some optional implementations, the keywords corresponding to the candidate videos include categories corresponding to the candidate videos. The category corresponding to the candidate video can be further determined based on the following steps: and inputting the candidate video and the text information corresponding to the candidate video into a pre-trained video classification model to obtain the category corresponding to the candidate video. The video classification model is used for determining the category of the video according to the input video data and the text information corresponding to the video.
The video classification model can be various types of machine learning models, such as an artificial neural network model, a convolutional neural network model, and other deep learning models.
It can be understood that, before the video classification model is used, it is trained: video data labeled with category labels, together with the text information of that video data, are input into an initial classification model, and the labeled categories are used as the expected output, yielding the trained video classification model.
It should be noted that, the above method for training the video classification model may refer to the existing method for training the machine learning model, which is not described herein again.
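One possible shape of such a video classification model, sketched with PyTorch; the architecture, feature dimensions, and text tokenization are illustrative assumptions consistent with, but not prescribed by, the description:

```python
# Illustrative sketch: a classifier over video frames plus text information.
import torch
import torch.nn as nn

class VideoTextClassifier(nn.Module):
    """Fuses averaged frame features with mean-pooled text token embeddings."""

    def __init__(self, frame_dim=512, vocab_size=10000, text_dim=128, num_classes=20):
        super().__init__()
        self.text_embed = nn.EmbeddingBag(vocab_size, text_dim)  # mean-pooled by default
        self.head = nn.Sequential(
            nn.Linear(frame_dim + text_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, frame_features, token_ids, offsets):
        # frame_features: (batch, num_frames, frame_dim) -> mean over frames
        video_vec = frame_features.mean(dim=1)
        text_vec = self.text_embed(token_ids, offsets)
        return self.head(torch.cat([video_vec, text_vec], dim=-1))

# Training, as described above, pairs each labeled video and its text against
# its category label, e.g. with nn.CrossEntropyLoss on these logits.
```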
Referring to fig. 3, a flow of another embodiment of an information processing method according to the present disclosure is shown. The information processing method is applied to the terminal. As shown in fig. 3, the information processing method includes the steps of:
step 301, in response to receiving a search keyword input by a user, sending an information acquisition request to a server.
The terminal can receive a search keyword input by a user in an application client installed thereon in real time. And then sending an information acquisition request to a server matched with the application client through the application client. The information acquisition request here may include the search keyword described above.
Step 302, at least one video data and at least one search guide word sent by a server are received.
After receiving the information acquisition request, the server may determine at least one video data and at least one search guidance word according to the search keyword using the information processing method provided in the embodiment shown in fig. 1.
The specific steps of the server determining at least one video data and at least one search guide word may refer to the embodiment shown in fig. 1, which are not described herein again.
Step 303, displaying the at least one piece of video data and the at least one search guide word in the same display page.
The at least one search guide word may be presented in a preset area of the page, for example in the same row or the same column of the page. The row in which the search guide words are located can be slid to reveal other search guide words not shown in the current page.
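A minimal terminal-side sketch of steps 301 to 303 using only the Python standard library; the endpoint URL and JSON field names are hypothetical:

```python
# Illustrative sketch of the terminal side; URL and fields are assumptions.
import json
from urllib import request

def fetch_and_show(search_keyword, server="http://example.com/search"):
    # Step 301: the information acquisition request includes the keyword.
    body = json.dumps({"keyword": search_keyword}).encode("utf-8")
    req = request.Request(server, data=body,
                          headers={"Content-Type": "application/json"})
    # Step 302: receive at least one video data and at least one guide word.
    with request.urlopen(req) as resp:
        payload = json.load(resp)
    # Step 303: render both in the same page (placeholder output here).
    print("videos:", payload["videos"])
    print("guide words (one slidable row):", payload["guide_words"])
```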
According to the method provided by the embodiment of the disclosure, the search guide word is displayed in the initial search result display page of the user, and the user can be guided to browse other video data associated with the initial search result by the search guide word, so that the operation of obtaining relevant information by the user is reduced, and the time of the user can be saved.
Referring to fig. 4, a schematic diagram of an application scenario of the information processing method according to the embodiment of the present disclosure is shown.
As shown in fig. 4, the user inputs the search keyword "cat" in a search area 402 of a display page of the terminal 401. The terminal 401 sends the search keyword "cat" to the server. The server determines at least one piece of video data and at least one search guide word according to the method of the embodiment shown in fig. 1, and sends the at least one piece of video data, the at least one search guide word, and a display rule to the terminal device. After receiving them, the terminal 401 may display the at least one piece of video data and the at least one search guide word in the same page according to the display rule. As shown in fig. 4, the at least one piece of video data may include, for example, video 1, video 2, video 3, and video 4, and the search guide words 403 may be "pet", "lovely pet", "cat food", "wallpaper", etc. By sliding the row of the search guide words 403 in a touch manner, the user can browse other search guide words not shown in the current page. The search guide words can thus guide the user to acquire, on the basis of the initial search results, more video data related to the search guide words.
The above information processing method can be applied to text information processing and the like.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an information processing apparatus, which corresponds to the embodiment of the method shown in fig. 1, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the information processing apparatus of the present embodiment includes: receiving section 501, determining section 502, and transmitting section 503. The receiving unit 501 is configured to receive a search keyword input by a user; a determining unit 502 for determining at least one video data matching the search keyword and at least one search guide word; a sending unit 503, configured to send the at least one piece of video data and the at least one search guidance word to a terminal, so that the terminal displays the at least one piece of video data and the at least one search guidance word in the same page; the search guide word is used for guiding the user to further search candidate video data corresponding to the search guide word.
In this embodiment, specific processing of the receiving unit 501, the determining unit 502, and the sending unit 503 of the information processing apparatus and technical effects thereof can refer to related descriptions of step 101, step 102, and step 103 in the corresponding embodiment of fig. 1, which are not described herein again.
In some optional implementations, the information processing apparatus further includes a search guide word determination unit (not shown in the figures) configured to determine the search guide word according to the following steps: determining the association degree between the search keyword and the keyword corresponding to each of a plurality of candidate videos stored in a preset database, and taking, as search guide words, the keywords corresponding to at least one candidate video whose association degree with the search keyword is greater than a first preset association degree threshold; or determining, as search guide words, the keywords corresponding to at least one candidate video in the preset database whose association degree with the keywords corresponding to the search results is greater than a second preset association degree threshold.
In some optional implementations, the keywords corresponding to the candidate videos include categories corresponding to the candidate videos; the information processing apparatus further includes a first video category determination unit (not shown in the figure) for determining a category to which the candidate video corresponds based on: performing video identification on the candidate video to obtain a video identification result; extracting text information corresponding to the candidate video; and determining the category corresponding to the candidate video according to the video identification result and the text information.
In some optional implementations, the first video category determining unit is further configured to: performing frame extraction on the candidate video according to a preset frame extraction rule to obtain a plurality of frame extraction results; extracting video features corresponding to each frame extraction result; eliminating duplication of video features respectively corresponding to a plurality of frame extraction results; matching the video features of the candidate video after the duplication elimination with sample image features in a pre-established sample image feature library, and determining the initial category corresponding to the candidate video according to the matching result; the method comprises the steps that a plurality of sample image features and categories corresponding to the sample image features are stored in a sample feature library in advance; determining a second category corresponding to the candidate video according to the text information; determining a category corresponding to the candidate video based on the initial category and the second category.
In some optional implementations, the keywords corresponding to the candidate videos include categories corresponding to the candidate videos; the information processing apparatus further includes a second video category determination unit operable to: inputting the candidate videos and the text information corresponding to the candidate videos into a pre-trained video classification model to obtain categories corresponding to the candidate videos; the video classification model is used for determining the category of the video according to the video input into the video classification model and the text information corresponding to the video.
With further reference to fig. 6, as an implementation of the methods shown in the above figures, the present disclosure provides another embodiment of an information processing apparatus, which corresponds to the method embodiment shown in fig. 3, and which is particularly applicable in various electronic devices.
As shown in fig. 6, the information processing apparatus of the present embodiment includes: a requesting unit 601, a data receiving unit 602 and a presenting unit 603. The information acquisition method comprises a request unit 601, a search unit and a processing unit, wherein the request unit 601 is used for responding to a received search keyword input by a user and sending an information acquisition request to a server, and the information acquisition request comprises the search keyword; a data receiving unit 602, configured to receive at least one video data and at least one search guide word sent by a server; a presentation unit 603, configured to present the at least one video data and the at least one search guidance word in a same presentation page; wherein the at least one video data and the at least one search guide word are determined based on the information processing apparatus of the embodiment shown in fig. 5.
In this embodiment, for the specific processing of the request unit 601, the data receiving unit 602, and the presentation unit 603 of the information processing apparatus and its technical effects, reference may be made to the related descriptions of step 301, step 302, and step 303 in the corresponding embodiment of fig. 3, which are not described herein again.
Referring to fig. 7, fig. 7 illustrates an exemplary system architecture to which the information processing method of one embodiment of the present disclosure may be applied.
As shown in fig. 7, the system architecture may include terminal devices 701, 702, 703, a network 704, and a server 705. The network 704 serves to provide a medium for communication links between the terminal devices 701, 702, 703 and the server 705. Network 704 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal devices 701, 702, 703 may interact with a server 705 over a network 704 to receive or send messages or the like. The terminal devices 701, 702, 703 may have various client applications installed thereon, such as a web browser application, a search-type application, and a news-information-type application. The client applications in the terminal devices 701, 702, and 703 may receive the instruction of the user, and complete corresponding functions according to the instruction of the user, for example, send a search request to the server according to a search keyword input by the user.
The terminal devices 701, 702, and 703 may be hardware or software. When they are hardware, they may be various electronic devices having a display screen and supporting web browsing, including but not limited to mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), and fixed terminals such as digital TVs and desktop computers. When the terminal devices 701, 702, and 703 are software, they can be installed in the electronic devices listed above, implemented either as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 705 may be a server that provides various services, for example, receives a search request sent by the terminal devices 701, 702, and 703, and acquires information matching the search keyword in various ways according to the search request. And sends the search results to the terminal devices 701, 702, 703.
It should be noted that the information processing method provided by the embodiment of the present disclosure may be executed by a server, and accordingly, the information processing apparatus may be provided in the server 705.
It should be understood that the number of terminal devices, networks, and servers in fig. 7 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 8, shown is a schematic diagram of an electronic device (e.g., the server of FIG. 7) suitable for use in implementing embodiments of the present disclosure. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 8, the electronic device may include a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 801 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage apparatus 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the electronic device 800. The processing apparatus 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following apparatuses may be connected to the I/O interface 805: input apparatuses 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output apparatuses 807 including, for example, a liquid crystal display (LCD), speakers, vibrators, etc.; storage apparatuses 808 including, for example, magnetic tape, hard disk, etc.; and a communication apparatus 809. The communication apparatus 809 may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. While fig. 8 illustrates an electronic device having various apparatuses, it is to be understood that not all illustrated apparatuses are required to be implemented or provided; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 809, installed from the storage apparatus 808, or installed from the ROM 802. When executed by the processing apparatus 801, the computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receive a search keyword input by a user; determine at least one piece of video data matching the search keyword and at least one search guide word; and send the at least one piece of video data and the at least one search guide word to a terminal device so that the terminal device displays them in the same page; the search guide word is used for guiding the user to further obtain at least one piece of candidate video data corresponding to the search guide word.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation on the unit itself, for example, a receiving unit may also be described as a "unit that receives a search keyword input by a user".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an information processing method including: receiving a search keyword input by a user; determining at least one piece of video data matching the search keyword and at least one search guide word; and sending the at least one piece of video data and the at least one search guide word to a terminal device so that the terminal device displays them in the same page; the search guide word is used for guiding the user to further obtain at least one piece of candidate video data corresponding to the search guide word.
According to one or more embodiments of the present disclosure, the search guide word is determined based on the following steps: determining the association degree between the search keyword and the keyword corresponding to each of a plurality of candidate videos stored in a preset database, and taking, as search guide words, the keywords corresponding to at least one candidate video whose association degree with the search keyword is greater than a first preset association degree threshold; or determining, as search guide words, the keywords corresponding to at least one candidate video in the preset database whose association degree with the keywords corresponding to the search results is greater than a second preset association degree threshold.
According to one or more embodiments of the present disclosure, the keywords corresponding to the candidate videos include categories corresponding to the candidate videos; and the category corresponding to the candidate video is determined based on the following steps: performing video identification on the candidate video to obtain a video identification result; extracting text information corresponding to the candidate video; and determining the category corresponding to the candidate video according to the video identification result and the text information.
According to one or more embodiments of the present disclosure, the performing video identification on the candidate video to obtain a video identification result includes: performing frame extraction on the candidate video according to a preset frame extraction rule to obtain a plurality of frame extraction results; extracting video features corresponding to each frame extraction result; eliminating duplication of video features respectively corresponding to a plurality of frame extraction results; matching the video features of the candidate video after the duplication elimination with sample image features in a pre-established sample image feature library, and determining the initial category corresponding to the candidate video according to the matching result; the method comprises the steps that a plurality of sample image features and categories corresponding to the sample image features are stored in a sample feature library in advance; and the determining the category corresponding to the candidate video according to the video recognition result and the text information comprises: determining a second category corresponding to the candidate video according to text information; determining a category corresponding to the candidate video based on the initial category and the second category.
According to one or more embodiments of the present disclosure, the keywords corresponding to the candidate videos include categories corresponding to the candidate videos; and the category corresponding to the candidate video is determined based on the following steps: inputting the candidate videos and the text information corresponding to the candidate videos into a pre-trained video classification model to obtain categories corresponding to the candidate videos; the video classification model is used for determining the category of the video according to the video input into the video classification model and the text information corresponding to the video.
The foregoing description is merely of preferred embodiments of the disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with features having similar functions disclosed in (but not limited to) this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (14)

1. An information processing method is applied to a server side, and is characterized by comprising the following steps:
receiving a search keyword input by a user;
determining at least one video data matching the search keyword and at least one search guide word;
sending the at least one piece of video data and the at least one search guide word to a terminal device so that the terminal device displays the at least one piece of video data and the at least one search guide word in the same page; the search guide word is used for guiding the user to further obtain at least one piece of candidate video data corresponding to the search guide word.
2. The method of claim 1, wherein the search guidance word is determined based on the steps of:
determining the association degree between the search keyword and a keyword corresponding to each of a plurality of candidate videos stored in a preset database;
taking keywords corresponding to at least one candidate video with the association degree between the keywords and the search keywords being greater than a first preset association degree threshold value as search guide words; and/or
Determining at least one keyword corresponding to the search result;
and determining, as search guide words, the keywords corresponding to at least one candidate video in the preset database whose association degree with the keywords corresponding to the search result is greater than a second preset association degree threshold.
3. The method of claim 2, wherein the keywords corresponding to the candidate videos comprise categories corresponding to the candidate videos; and
the category corresponding to the candidate video is determined based on the following steps:
performing video recognition on the candidate video to obtain a video recognition result;
extracting text information corresponding to the candidate video;
and determining the category corresponding to the candidate video according to the video recognition result and the text information.
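Claim 3 leaves open how the video recognition result and the text information are combined into one category. A minimal sketch of one such combination rule follows; the confidence fields and the tie-break policy are assumptions for illustration only.

```python
# Minimal sketch: merge a visually derived and a text-derived category,
# under an assumed (category, confidence) representation for each signal.
def combine_categories(visual, textual):
    if visual[0] == textual[0]:
        return visual[0]                 # both signals agree on the category
    # Otherwise fall back to whichever signal is more confident.
    return max(visual, textual, key=lambda pair: pair[1])[0]

print(combine_categories(("sports", 0.8), ("sports", 0.7)))  # sports
print(combine_categories(("sports", 0.4), ("news", 0.9)))    # news
```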
4. The method of claim 3, wherein the performing video recognition on the candidate video to obtain a video recognition result comprises:
performing frame extraction on the candidate video according to a preset frame extraction rule to obtain a plurality of frame extraction results;
extracting video features corresponding to each frame extraction result;
de-duplicating the video features respectively corresponding to the plurality of frame extraction results;
matching the de-duplicated video features of the candidate video against a pre-established sample image feature library, and determining an initial category corresponding to the candidate video according to a matching result, wherein a plurality of sample image features and the categories corresponding to the sample image features are stored in the sample image feature library in advance; and
the determining the category corresponding to the candidate video according to the video recognition result and the text information includes:
determining a second category corresponding to the candidate video according to the text information;
determining a category corresponding to the candidate video based on the initial category and the second category.
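By way of example, the pipeline of claim 4 can be sketched as follows: sample frames under a preset rule, extract one feature per sampled frame, de-duplicate near-identical features, and vote an initial category by nearest-neighbour matching against the sample image feature library. The stride, distance threshold, and library layout are assumptions, and the feature extractor itself is out of scope here.

```python
# Minimal sketch of the frame-extraction / de-duplication / matching steps.
import numpy as np

def sample_frames(frames, stride=30):
    """Preset frame-extraction rule: keep every `stride`-th frame."""
    return frames[::stride]

def dedup(features, min_dist=0.1):
    """Drop features that nearly repeat an already-kept feature."""
    kept = []
    for f in features:
        if all(np.linalg.norm(f - k) >= min_dist for k in kept):
            kept.append(f)
    return kept

def initial_category(features, library):
    """library: list of (sample_feature, category) pairs stored in advance."""
    votes = {}
    for f in features:
        _, category = min(library, key=lambda item: np.linalg.norm(f - item[0]))
        votes[category] = votes.get(category, 0) + 1
    return max(votes, key=votes.get)   # majority vote over de-duplicated frames
```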
5. The method of claim 2, wherein the keywords corresponding to the candidate videos comprise categories corresponding to the candidate videos; and
the category corresponding to the candidate video is determined based on the following steps:
inputting the candidate videos and the text information corresponding to the candidate videos into a pre-trained video classification model to obtain the categories corresponding to the candidate videos, wherein the video classification model is used for determining the category of a video according to the video data input into the video classification model and the text information corresponding to the video.
6. An information processing method applied to a terminal, characterized by comprising the following steps:
in response to receiving a search keyword input by a user, sending an information acquisition request to a server, wherein the information acquisition request comprises the search keyword;
receiving at least one piece of video data and at least one search guide word sent by the server;
and displaying the at least one piece of video data and the at least one search guide word in the same display page, wherein the at least one piece of video data and the at least one search guide word are determined based on the information processing method according to any one of claims 1 to 5.
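For illustration, the terminal-side steps of claim 6 map onto a simple HTTP client, sketched below with Python's standard library. The endpoint path, JSON field names, and the render function are hypothetical; the claim fixes only the request, receive, and same-page display steps.

```python
# Minimal sketch of the terminal-side flow, under an assumed JSON endpoint.
import json
import urllib.parse
import urllib.request

def search_on_terminal(keyword: str, server_url: str):
    # Step 1: send an information acquisition request carrying the keyword.
    query = urllib.parse.urlencode({"keyword": keyword})
    with urllib.request.urlopen(f"{server_url}/search?{query}") as resp:
        payload = json.load(resp)
    # Step 2: receive the video data and search guide words from the server.
    videos = payload["videos"]
    guide_words = payload["guide_words"]
    # Step 3: display both in the same page.
    render_same_page(videos, guide_words)

def render_same_page(videos, guide_words):
    # Stand-in for the terminal's display logic.
    print("videos:", [v.get("title") for v in videos])
    print("guide words:", guide_words)
```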
7. An information processing apparatus applied to a server, comprising:
a receiving unit for receiving a search keyword input by a user;
a determining unit for determining at least one piece of video data matching the search keyword and at least one search guide word; and
a sending unit for sending the at least one piece of video data and the at least one search guide word to a terminal, so that the terminal displays the at least one piece of video data and the at least one search guide word in the same page, wherein the search guide word is used for guiding the user to further obtain candidate video data corresponding to the search guide word.
8. The apparatus of claim 7, further comprising a search guide word determination unit configured to determine a search guide word based on the following steps:
determining a degree of association between the search keyword and a keyword corresponding to each of a plurality of candidate videos stored in a preset database;
taking, as search guide words, the keywords corresponding to at least one candidate video whose degree of association with the search keyword is greater than a first preset association degree threshold; and/or
determining at least one keyword corresponding to a search result;
and determining, as a search guide word, at least one keyword in the preset database corresponding to a candidate video whose degree of association with the keyword corresponding to the search result is greater than a second preset association degree threshold.
9. The apparatus according to claim 8, wherein the keywords corresponding to the candidate videos include the categories corresponding to the candidate videos; and the apparatus further comprises a first video category determining unit configured to determine the category corresponding to the candidate video based on the following steps:
performing video recognition on the candidate video to obtain a video recognition result;
extracting text information corresponding to the candidate video;
and determining the category corresponding to the candidate video according to the video recognition result and the text information.
10. The apparatus of claim 9, wherein the first video category determining unit is further configured to:
performing frame extraction on the candidate video according to a preset frame extraction rule to obtain a plurality of frame extraction results;
extracting video features corresponding to each frame extraction result;
de-duplicating the video features respectively corresponding to the plurality of frame extraction results;
matching the de-duplicated video features of the candidate video against sample image features in a pre-established sample image feature library, and determining an initial category corresponding to the candidate video according to a matching result, wherein a plurality of sample image features and the categories corresponding to the sample image features are stored in the sample image feature library in advance; and
determining a second category corresponding to the candidate video according to the text information;
determining a category corresponding to the candidate video based on the initial category and the second category.
11. The apparatus according to claim 8, wherein the keywords corresponding to the candidate videos include the categories corresponding to the candidate videos; and the apparatus further comprises a second video category determination unit configured for:
inputting the candidate videos and the text information corresponding to the candidate videos into a pre-trained video classification model to obtain the categories corresponding to the candidate videos, wherein the video classification model is used for determining the category of a video according to the video input into the video classification model and the text information corresponding to the video.
12. An information processing apparatus applied to a terminal, comprising:
a request unit for sending an information acquisition request to a server in response to receiving a search keyword input by a user, wherein the information acquisition request comprises the search keyword;
a data receiving unit for receiving at least one piece of video data and at least one search guide word sent by the server;
and a display unit for displaying the at least one piece of video data and the at least one search guide word in the same display page, wherein the at least one piece of video data and the at least one search guide word are determined based on the information processing apparatus according to any one of claims 7 to 11.
13. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
14. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN201910630009.3A 2019-07-12 2019-07-12 Information processing method and device and electronic equipment Pending CN112214695A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910630009.3A CN112214695A (en) 2019-07-12 2019-07-12 Information processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910630009.3A CN112214695A (en) 2019-07-12 2019-07-12 Information processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112214695A 2021-01-12

Family

ID=74048562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910630009.3A Pending CN112214695A (en) 2019-07-12 2019-07-12 Information processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112214695A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120271843A1 (en) * 2011-04-19 2012-10-25 International Business Machines Corporation Computer Processing Method and System for Searching
CN103678668A (en) * 2013-12-24 2014-03-26 乐视网信息技术(北京)股份有限公司 Prompting method of relevant search result, server and system
CN104462375A (en) * 2014-12-09 2015-03-25 北京百度网讯科技有限公司 Barrage media based search processing method and barrage media based search processing system
CN106095858A (en) * 2016-06-02 2016-11-09 海信集团有限公司 A kind of audio video searching method, device and terminal
CN107784029A (en) * 2016-08-31 2018-03-09 阿里巴巴集团控股有限公司 Generation prompting keyword, the method for establishing index relative, server and client side
CN107590255A (en) * 2017-09-19 2018-01-16 百度在线网络技术(北京)有限公司 Information-pushing method and device
CN107729573A (en) * 2017-11-24 2018-02-23 百度在线网络技术(北京)有限公司 Information-pushing method and device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113190695A (en) * 2021-05-06 2021-07-30 北京百度网讯科技有限公司 Multimedia data searching method and device, computer equipment and medium

Similar Documents

Publication Publication Date Title
CN107679211B (en) Method and device for pushing information
US11758088B2 (en) Method and apparatus for aligning paragraph and video
CN110969012A (en) Text error correction method and device, storage medium and electronic equipment
CN112287206A (en) Information processing method and device and electronic equipment
CN111897950A (en) Method and apparatus for generating information
CN113313064A (en) Character recognition method and device, readable medium and electronic equipment
CN111598006A (en) Method and device for labeling objects
CN112766284A (en) Image recognition method and device, storage medium and electronic equipment
CN114443897A (en) Video recommendation method and device, electronic equipment and storage medium
CN115294501A (en) Video identification method, video identification model training method, medium and electronic device
CN110851032A (en) Display style adjustment method and device for target device
CN109947526B (en) Method and apparatus for outputting information
CN112819512A (en) Text processing method, device, equipment and medium
CN112148962B (en) Method and device for pushing information
CN111767259A (en) Content sharing method and device, readable medium and electronic equipment
CN109472028B (en) Method and device for generating information
CN112084441A (en) Information retrieval method and device and electronic equipment
US20130230248A1 (en) Ensuring validity of the bookmark reference in a collaborative bookmarking system
CN112214695A (en) Information processing method and device and electronic equipment
CN110598049A (en) Method, apparatus, electronic device and computer readable medium for retrieving video
CN113220922B (en) Image searching method and device and electronic equipment
CN110334763B (en) Model data file generation method, model data file generation device, model data file identification device, model data file generation apparatus, model data file identification apparatus, and model data file identification medium
CN114004229A (en) Text recognition method and device, readable medium and electronic equipment
CN113191257A (en) Order of strokes detection method and device and electronic equipment
CN113222050A (en) Image classification method and device, readable medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination