CN109063200B - Resource searching method and device, electronic equipment and computer readable medium - Google Patents


Info

Publication number: CN109063200B
Application number: CN201811056722.3A
Authority: CN (China)
Prior art keywords: search request, audio, visual, scene, resource
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN109063200A
Inventors: 孙连生, 尹康平, 王祥志
Assignee: Uc Mobile China Co ltd
Events: application filed by Uc Mobile China Co ltd; priority to CN201811056722.3A; publication of CN109063200A; application granted; publication of CN109063200B


Abstract

The application discloses a resource search method and apparatus, an electronic device, and a computer-readable medium. The method determines the demand scene of the current search request according to an analysis result of that request; if the demand scene is an image-text and audio-visual demand scene, candidate resources matching that scene are acquired from a resource library. Embodiments of the application thus determine the image-text and audio-visual demand scene directly from the search request and display resources in the corresponding forms: the user can find the desired image-text or audio-visual resources after a single search, without performing a second search and without modifying the search request to obtain them, which improves the user experience.

Description

Resource searching method and device, electronic equipment and computer readable medium
Technical Field
The present application relates to the field of data processing, in particular to the field of internet technologies, and more particularly to a resource search method and apparatus, an electronic device, and a computer-readable medium.
Background
In today's rapidly developing information age, information is presented in a variety of media forms, such as combinations of text, pictures, sound, and video, to meet people's demand for information.
People's requirements on media form differ across scenes; in some scenes, users need not only resources in image-text form but also resources in audio-visual form. In current search engines, however, the image-text resources and the audio-visual resources in the search results are displayed scattered, and in some cases only one form of resource is displayed at all. The user must add extra keywords when searching to obtain resources in the desired form, which results in a poor user experience.
In view of the above, a technical problem to be solved in the prior art is how to effectively display resources in the form required by the user.
Disclosure of Invention
The application aims to provide a resource search method and apparatus, an electronic device, and a computer-readable medium, to solve the prior-art problem that resources cannot be effectively displayed in the form required by the user.
In a first aspect, the present application provides a resource search method, which includes:
determining a demand scene of the current search request according to an analysis result of the current search request;
and if the required scene of the current search request is the image-text and audio-visual required scene, acquiring candidate resources matched with the image-text and audio-visual required scene from a resource library.
In a second aspect, an embodiment of the present application provides a resource search apparatus, which includes:
the scene analysis module is configured to determine a demand scene of the current search request according to an analysis result of the current search request;
and the resource acquisition module is configured to acquire candidate resources matched with the image-text and audio-visual demand scenes from a resource library if the demand scenes of the current search request are the image-text and audio-visual demand scenes.
In a third aspect, an embodiment of the present application provides an electronic device, including:
one or more processors;
a computer-readable medium configured to store one or more programs which,
when executed by the one or more processors, cause the one or more processors to implement the method described above.
In a fourth aspect, the present application provides a computer readable medium, on which a computer program is stored, which when executed by a processor implements the method as described above.
According to the resource search method and apparatus, electronic device, and computer-readable medium of the application, the demand scene of the current search request is determined according to an analysis result of that request, and if it is an image-text and audio-visual demand scene, candidate resources matching that scene are acquired from a resource library. The image-text and audio-visual demand scene is thus determined directly from the search request, and resources are displayed in the corresponding forms: the user can find the desired image-text or audio-visual resources after a single search, without a second search and without modifying the search request to obtain them, thereby improving the user experience.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
Fig. 1 is a flowchart of a resource searching and displaying method in an embodiment of the present application;
Fig. 2 is a schematic flowchart of a resource searching method in the second embodiment of the present application;
Fig. 3 is a schematic flowchart of a resource searching method in the third embodiment of the present application;
Fig. 4 is a schematic structural diagram of a resource searching apparatus in the fourth embodiment of the present application;
Fig. 5 is a schematic structural diagram of a resource searching apparatus in the fifth embodiment of the present application;
Fig. 6 is a schematic view of a scenario applying the resource search scheme in the sixth embodiment of the present application;
Fig. 7 is a schematic diagram of the display results in the sixth embodiment of the present application;
Fig. 8 is a schematic structural diagram of an electronic device in the eighth embodiment of the present application;
Fig. 9 is a schematic diagram of the hardware structure of an electronic device in the ninth embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a flowchart of a resource searching and displaying method in an embodiment of the present application. The flowchart describes the overall process of resource searching and displaying in which the resource search scheme provided in the following embodiments is applied. Of course, the following scheme may also be applied to other processes, which is not limited in this embodiment.
As shown in fig. 1, the method for searching and displaying resources provided in this embodiment includes:
and S11, inquiring by a user.
In this embodiment, when the user queries, the user may input a keyword to perform a query, may input a picture, a voice, and the like to perform a query, which is not limited in this embodiment.
Upon user input, i.e. a search request is generated, subsequent steps S12-S14 may be a response by the server to the search request.
S12: identifying the image-text and audio-visual scene.
In this embodiment, image-text and audio-visual scene recognition is performed on the user query: after the user enters the query, the system identifies whether the applicable demand scene is an image-text and audio-visual demand scene. If so, the resources matching that scene are determined and displayed through the subsequent steps S13 and S14. The specific identification methods are described in detail in the following embodiments and are not repeated here.
S13: ranking the image-text and audio-visual resources.
In this embodiment, after the scene is identified as an image-text and audio-visual demand scene, the image-text resources and the audio-visual resources among the candidate resources matching the user input are ranked separately, and a ranking result is determined.
S14: screening and displaying results.
In this embodiment, the image-text resources and the audio-visual resources are screened separately according to the ranking result of step S13, and the screened resources can be fed back to the user as the response result for display.
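The overall S11 to S14 flow described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the trigger-word scene check, the single `score` field, and all names are hypothetical; the patent's actual identification methods are detailed in the later embodiments.

```python
# Hypothetical sketch of steps S11-S14: identify the image-text and
# audio-visual scene (S12), rank candidates (S13), screen and display (S14).
# The trigger-word check and the "score" field are illustrative assumptions.

AV_TRIGGERS = {"video", "picture", "photo", "clip"}  # assumed trigger words

def is_av_scene(query: str) -> bool:
    """S12: crude scene identification via trigger-word matching."""
    return any(t in query.lower().split() for t in AV_TRIGGERS)

def rank(candidates: list) -> list:
    """S13: rank candidate resources by a precomputed relevance score."""
    return sorted(candidates, key=lambda r: r["score"], reverse=True)

def respond(query: str, candidates: list, top_n: int = 5) -> list:
    """S14: screen the (possibly ranked) list and return what to display."""
    ranked = rank(candidates) if is_av_scene(query) else candidates
    return ranked[:top_n]
```

In this sketch, only queries recognized as image-text and audio-visual trigger the dedicated ranking step; other queries fall through unchanged.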
The following describes a specific implementation of some or all of the steps in fig. 1 by using several embodiments.
Fig. 2 is a schematic flowchart of a resource search method in a second embodiment of the present application, and as shown in fig. 2, the resource search method provided in this embodiment includes the following steps:
and S21, determining a demand scene of the current search request according to an analysis result of the current search request.
The current search request is generated based on search information currently input by the user.
In this embodiment, different search requests may correspond to different demand scenarios, for example, an analysis result of a book may correspond to a text demand scenario, and a search result of a tv series may correspond to a picture or video demand scenario. After the search request is analyzed, the required scene of the search request can be determined according to the analysis result of the search request.
Specifically, step S21 may include: determining an analysis dimension for analyzing the current search request, and analyzing the current search request along that dimension to determine its demand scene. The current search request can thus be analyzed on multiple analysis dimensions, so that the determined demand scene better matches the user's demand.
In this embodiment, the analysis dimension may include: the content of the current search request itself, historical search requests before the current search request, candidate resources of the current search request, keywords of the candidate resources of the current search request, and the like, and the analysis dimension may include a combination of at least one or more of the foregoing, and may further include other analysis dimensions, for example, user preferences corresponding to the current search request, and the like, which is not limited in this embodiment.
In addition, since the following step S22 is directed to the image-text and audio-visual demand scene, the demand scenes may be divided directly into two types in this step: the image-text and audio-visual demand scene, and all other demand scenes. Of course, in other implementations of the embodiments of the present application, other classification methods may be used, as long as the classification includes the image-text and audio-visual demand scene.
S22: if the demand scene of the current search request is an image-text and audio-visual demand scene, acquiring candidate resources matching that scene from a resource library.
Resources in image-text form can be multimedia resources composed of pictures and text; resources in audio-visual form can be multimedia resources composed of video or sound, such as audio and video.
An image-text and audio-visual demand scene can be a scene in which the resources the user wants displayed are both resources in image-text form and resources in audio-visual form.
In this embodiment, the candidate resources matching the image-text and audio-visual demand scene may include not only resources in image-text form and resources in audio-visual form but also resources in other forms, such as plain-text resources; this embodiment is not limited in this respect.
According to the resource search method provided by this embodiment, the demand scene of the current search request is determined from the analysis result of that request, and if it is an image-text and audio-visual demand scene, candidate resources matching that scene are acquired from a resource library. The image-text and audio-visual demand scene is thus determined directly from the search request, and resources are displayed in the corresponding forms: the user can find the desired image-text or audio-visual resources after a single search, without a second search and without modifying the search request to obtain them, which improves the user experience.
Optionally, in this embodiment, the method may further include:
and S23, sequencing the candidate resources to generate a candidate resource queue.
Specifically, in this embodiment, a predefined sorting dimension may be determined, and then the candidate resources may be sorted according to the sorting dimension to generate a candidate resource queue.
Wherein the ranking dimension comprises: at least one of a resource relevance dimension, a resource authority dimension, a resource timeliness dimension, and a resource quality dimension. Of course, the sorting dimension may also include other dimensions, which is not limited in this embodiment.
When ranking by the resource relevance dimension, candidate resources more relevant to the current search request are ranked higher; when ranking by the resource authority dimension, candidate resources from authoritative websites are ranked higher; when ranking by the resource timeliness dimension, candidate resources whose time is closer to the current time are ranked higher; and when ranking by the resource quality dimension, candidate resources whose content is comprehensive and rich and free of advertising are ranked higher.
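As a concrete illustration of ranking over the four dimensions above, the sketch below combines relevance, authority, timeliness, and quality into one score. The equal weighting, the field names, and the day-based recency decay are assumptions for the sketch; the patent does not specify how the dimensions are combined.

```python
import time

def rank_candidates(candidates, now=None):
    """Rank candidates over the four dimensions named in the text.

    Assumed fields per candidate: relevance, authority, quality in [0, 1],
    and a Unix timestamp. Equal weighting is an illustrative choice.
    """
    now = now if now is not None else time.time()

    def score(r):
        # Timeliness: decay with age in days, so newer resources score higher.
        recency = 1.0 / (1.0 + max(0.0, now - r["timestamp"]) / 86400)
        return r["relevance"] + r["authority"] + recency + r["quality"]

    return sorted(candidates, key=score, reverse=True)
```

With real data one would tune the weights per dimension; here the point is only that each dimension contributes monotonically to the final order.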
Optionally, in this embodiment, the method may further include:
and S24, displaying the candidate resources.
When displaying, only part of the resources in the form of pictures and texts and part of the resources in the form of audio and visual information in all the candidate resources may be displayed, or all of the resources may be displayed, or other forms of resources may be further displayed on the premise of displaying the pictures, texts and audio and visual resources, which is not limited in this embodiment.
When displaying, the sequence in the candidate resource queue may be displayed first, or only the first 5 graphics and text and audio-visual resources in the candidate resource queue may be displayed, which is not limited in this embodiment.
Fig. 3 is a schematic flowchart of a resource search method in the third embodiment of the present application. This embodiment takes as an example the case where the demand scenes comprise the image-text and audio-visual demand scene and all other demand scenes. As shown in fig. 3, the resource search method provided in this embodiment includes the following steps:
S31: determining a historical search request related to the current search request and the demand scene of that historical search request, and determining the demand scene of the current search request through step S35 according to the demand scene of the historical search request.
In a specific implementation, the corresponding user can be determined from the current search request, so that the user's historical search requests can be obtained and their demand scenes determined.
In this embodiment, when the demand scene of the current search request is determined through step S35 according to the demand scenes of the historical search requests, the historical search requests whose demand scene is the image-text and audio-visual demand scene may be determined in advance, and a search request set may be built from them. After the current search request is determined, its similarity to the historical search requests in this pre-built set can be evaluated, and the demand scene of the current search request can be determined to be the image-text and audio-visual demand scene according to that similarity.
Further, during historical searches a user may have modified the input in order to obtain results containing images, text, and audio-visual content, for example by adding keywords such as "picture" or "video". In that case the user's input can be divided into two parts: content entered to obtain information, and content entered to obtain image-text and audio-visual results. Therefore, when the search request set is built, the historical search requests may be processed to remove the data corresponding to the latter part, so that the resulting search request set is more accurate, which in turn makes the determination of the image-text and audio-visual demand scene for the current search request more accurate.
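A minimal sketch of the historical-request check in S31, under stated assumptions: Jaccard similarity over whitespace tokens stands in for the unspecified similarity measure, and the 0.5 threshold is illustrative. The `av_history` set plays the role of the pre-built set of historical image-text and audio-visual search requests.

```python
# Hypothetical S31 check: is the current query similar enough to any
# historical image-text/audio-visual request? Measure and threshold assumed.

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two query strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def matches_av_history(query: str, av_history: set, threshold: float = 0.5) -> bool:
    """True if the query resembles any request in the pre-built set."""
    return any(jaccard(query, h) >= threshold for h in av_history)
```

Stripping the "picture"/"video"-style tokens from historical requests before building `av_history`, as the text suggests, would make this containment test less sensitive to those added keywords.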
S32: determining the keywords corresponding to the current search request, and determining the demand scene of the current search request through step S35 according to those keywords.
When a user searches, the input may be a character string, and the generated search request contains the keywords in that string. Of course, if the user's input is in another format, such as a picture, the picture can be processed to determine the keywords; this embodiment is not limited in this respect.
In this embodiment, the keywords may be determined by performing word segmentation on the character string entered by the user and extracting the keywords from it.
When the demand scene of the current search request is determined through step S35 according to these keywords, a keyword set corresponding to the image-text and audio-visual demand scene may be preset in the database; the number of the current request's keywords that fall into this preset keyword set is then counted, and the demand scene of the current search request is determined to be the image-text and audio-visual demand scene according to that number.
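The keyword-count check in S32 can be sketched as follows. The contents of the preset keyword set and the threshold of one hit are illustrative assumptions; real word segmentation (especially for Chinese queries) would replace the simple token list used here.

```python
# Hypothetical S32 check: count query keywords that fall into a preset
# image-text/audio-visual keyword set. Set contents and threshold assumed.

AV_KEYWORD_SET = {"video", "picture", "photo", "mv", "trailer"}

def count_av_keywords(query_tokens) -> int:
    """Number of segmented query tokens found in the preset keyword set."""
    return sum(1 for t in query_tokens if t.lower() in AV_KEYWORD_SET)

def is_av_scene_by_keywords(query_tokens, min_hits: int = 1) -> bool:
    """True if enough tokens hit the preset set to declare the scene."""
    return count_av_keywords(query_tokens) >= min_hits
```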
S33: determining the number of image-text and audio-visual resources among the candidate resources in the resource library that match the current search request, and determining through step S35, according to that number, whether the demand scene of the current search request is the image-text and audio-visual demand scene.
In this embodiment, after determining the current search request, a search may be performed in the resource library to determine a candidate resource corresponding to the current search request. For a specific search process, reference may be made to the prior art, and details of this embodiment are not repeated herein.
When the demand scene of the current search request is determined through step S35 according to the number of image-text and audio-visual resources among the candidate resources, a first quantity threshold may be preset. After the number of image-text and audio-visual resources among the candidate resources is determined, if that number is greater than the first quantity threshold, the demand scene of the current search request is determined to be the image-text and audio-visual demand scene; otherwise, it is determined to be a demand scene other than the image-text and audio-visual demand scene.
Of course, in another implementation of this embodiment, the numbers of image-text resources and audio-visual resources among the candidate resources may be determined separately, and the demand scene of the current search request may be determined to be the image-text and audio-visual demand scene according to the values of those numbers.
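Both decision rules in S33, the absolute count against the first quantity threshold and the alternative based on the values of the counts (read here as a proportion among all candidates), can be sketched as below. The `form` field, the thresholds, and the proportion reading are assumptions.

```python
# Hypothetical S33 check over candidate resources. Field names and both
# thresholds are illustrative; the proportion variant is one reading of
# "according to the values of those numbers".

def av_count(candidates) -> int:
    """Number of image-text and audio-visual resources among candidates."""
    return sum(1 for r in candidates if r["form"] in ("image-text", "audio-visual"))

def is_av_scene_by_candidates(candidates, min_count: int = 10, min_ratio: float = 0.3) -> bool:
    """True if the count exceeds the first quantity threshold, or if the
    proportion among all candidates reaches min_ratio."""
    n = av_count(candidates)
    if n > min_count:
        return True
    return bool(candidates) and n / len(candidates) >= min_ratio
```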
S34: determining, for the candidate resources in the resource library that match the current search request, the number of image-text and audio-visual keywords among the keywords of those candidate resources, and determining through step S35, according to that number, whether the demand scene of the current search request is the image-text and audio-visual demand scene.
In this embodiment, after determining the current search request, a search may be performed in the resource library to determine a candidate resource corresponding to the current search request. For the specific search process, reference may be made to the prior art, and details are not repeated here.
After the candidate resources are determined, their corresponding keywords can be determined; specifically, the keywords of the candidate resources may be obtained directly from the resource library. The determined keywords may form the set of keywords of all candidate resources, hereinafter called the second keyword set.
After the second keyword set is determined, the number of image-text and audio-visual keywords in it can be counted. In this embodiment, the image-text and audio-visual keywords may likewise be preset, which is not described again here. Of course, the keywords in the second keyword set may instead be analyzed to determine whether each is an image-text or audio-visual keyword; this embodiment is not limited in this respect.
When the demand scene of the current search request is determined through step S35 according to the number of image-text and audio-visual keywords among the keywords of the candidate resources, a second quantity threshold may also be preset: if the number of image-text and audio-visual keywords in the second keyword set exceeds the second quantity threshold, the demand scene of the current search request is determined to be the image-text and audio-visual demand scene; otherwise, it is determined to be a demand scene other than the image-text and audio-visual demand scene.
Similarly to step S33, in this step the number of image-text and audio-visual keywords in the second keyword set may also be counted to determine their ratio within the set, and the demand scene of the current search request may be determined to be the image-text and audio-visual demand scene according to the value of that ratio.
S35: determining the demand scene of the current search request.
The methods by which steps S31, S32, S33, and S34 determine the demand scene of the current search request through step S35 are described in detail above and are not repeated here.
In a specific implementation, only one of steps S31, S32, S33, and S34 may be executed, or several of them may be executed together; this embodiment is not limited in this respect.
When several of steps S31, S32, S33, and S34 are executed, step S35 further includes: combining the demand scene of the historical search requests, the keywords corresponding to the current search request, the number of image-text and audio-visual resources among the matching candidate resources, and the number of image-text and audio-visual keywords among the keywords of the candidate resources, to determine the demand scene of the current search request.
Specifically, in step S35, the similarity between the current search request and the corresponding historical search request determined in S31 may be denoted sim, with weight ω1; the number of image-text and audio-visual keywords among the keywords of the current search request determined in S32 may be denoted C(q), with weight ω2; the number of image-text and audio-visual resources among the candidate resources matching the current search request determined in S33 may be denoted sum, with weight ω3; and the number of image-text and audio-visual keywords among the keywords of the candidate resources determined in S34 may be denoted C(r), with weight ω4. The combined weight W is then calculated according to the following formula:
W = ω1*sim + ω2*C(q) + ω3*sum + ω4*C(r)
Of course, the weight values ω1, ω2, ω3, and ω4 may be adjusted; this embodiment is not limited in this respect.
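The combined weight W above translates directly into code. The example weights, the decision threshold, and the implicit assumption that sim, C(q), sum, and C(r) have been normalized to comparable ranges are all illustrative; the patent only states that the weights are adjustable.

```python
# W = ω1*sim + ω2*C(q) + ω3*sum + ω4*C(r), as a direct function.
# Weights, threshold, and input normalization are assumed, not specified.

def combined_score(sim, cq, av_sum, cr, weights=(0.4, 0.2, 0.2, 0.2)):
    """Combined weight W over the four S31-S34 signals."""
    w1, w2, w3, w4 = weights
    return w1 * sim + w2 * cq + w3 * av_sum + w4 * cr

def is_av_scene_combined(sim, cq, av_sum, cr, threshold=0.5):
    """Declare the image-text and audio-visual demand scene when W is high."""
    return combined_score(sim, cq, av_sum, cr) >= threshold
```

In practice the raw counts C(q), sum, and C(r) would be scaled (for example to [0, 1]) before weighting, so that no single signal dominates W.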
S36: if the demand scene of the current search request is the image-text and audio-visual demand scene, acquiring candidate resources matching that scene from a resource library.
S37: ranking the candidate resources to generate a candidate resource queue.
S38: displaying the candidate resources.
In this embodiment, steps S36 to S38 are the same as steps S22 to S24, respectively, and are not repeated here.
Fig. 4 is a schematic structural diagram of a resource search apparatus in a fourth embodiment of the present application, and as shown in fig. 4, the resource search apparatus includes:
the scene analysis module is configured to determine a demand scene of the current search request according to an analysis result of the current search request;
and the resource acquisition module is configured to acquire candidate resources matched with the image-text and audio-visual demand scenes from a resource library if the demand scenes of the current search request are the image-text and audio-visual demand scenes.
Optionally, in this embodiment, the scene analysis module is specifically configured to:
determining an analysis dimension for analyzing the current search request, and analyzing the current search request according to the analysis dimension to determine a demand scene of the current search request.
Optionally, in this embodiment, the apparatus further includes: a resource ranking module configured to rank the candidate resources to generate a candidate resource queue.
Optionally, in this embodiment, the apparatus further includes: a resource display module configured to display the candidate resources.
Fig. 5 is a schematic structural diagram of a resource search apparatus in the fifth embodiment of the present application. Optionally, as shown in fig. 5, the scene analysis module includes:
a history module configured to determine a history search request associated with the current search request and a demand scenario of the history search request;
and the scene determining module is configured to determine the requirement scene of the current search request according to the requirement scene of the historical search request.
Optionally, in this embodiment, as shown in fig. 5, the scene analysis module includes:
the keyword determining module is configured to determine a keyword corresponding to the current search request;
and the scene determining module is configured to determine a demand scene of the current search request according to the keyword corresponding to the current search request.
Optionally, in this embodiment, as shown in fig. 5, the scene analysis module includes:
the candidate resource quantity determining module is configured to determine the number of image-text and audio-visual resources among the candidate resources in the resource library that match the current search request;
and the scene determining module is configured to determine, according to the number of image-text and audio-visual resources, that the demand scene of the current search request is the image-text and audio-visual demand scene.
Optionally, in this embodiment, as shown in fig. 5, the scene analysis module includes:
a candidate resource keyword determining module configured to determine the candidate resources in the resource library matched with the current search request and to determine the number of image-text and audio-visual keywords among the keywords of the candidate resources;
and a scene determining module configured to determine, according to the number of image-text and audio-visual keywords among the keywords of the candidate resources, that the demand scene of the current search request is the image-text and audio-visual demand scene.
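Claim 1 combines the four signals behind these module variants into a single comprehensive weight W = ω1*sim + ω2*C(q) + ω3*sum + ω4*C(r). A minimal sketch, assuming the counts are pre-normalized to [0, 1]; the weight values and the decision threshold are illustrative assumptions the patent does not fix:

```python
def comprehensive_weight(sim, c_q, res_sum, c_r,
                         w1=0.3, w2=0.2, w3=0.3, w4=0.2):
    """W = ω1*sim + ω2*C(q) + ω3*sum + ω4*C(r), per claim 1.
    Weight values here are illustrative."""
    return w1 * sim + w2 * c_q + w3 * res_sum + w4 * c_r

def is_image_text_av_scene(sim, c_q, res_sum, c_r, threshold=0.5):
    """Hypothetical decision rule: the current request is in the
    image-text and audio-visual demand scene when W reaches the
    threshold."""
    return comprehensive_weight(sim, c_q, res_sum, c_r) >= threshold
```

With all four signals high the request is classed as an image-text and audio-visual demand scene; with all four at zero it is not.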
Fig. 6 is a schematic view of a scenario applying the resource search scheme in the sixth embodiment of the present application. As shown in Fig. 6, the flow includes:
S61, the user searches for "Li Mou marries Fan Mou".
S62, according to the analysis result of "Li Mou marries Fan Mou", the demand scene is determined to be the image-text and audio-visual demand scene.
S63, the candidate resources corresponding to "Li Mou marries Fan Mou" are sorted.
S64, the top-ranked candidate resources are displayed to the user so that the user can interact with the displayed result; the displayed result is shown in Fig. 7.
"Video 1", "Video 2" and "Video 3" in Fig. 7 are the three top-ranked audio-visual candidate resources, and the image-text candidate resources such as "Li Mou marries Fan Mou" are displayed below the videos in ranked order.
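The presentation in step S64 can be sketched as follows: top-ranked videos first, then image-text results in their ranked order. The field names and the cut-off of three videos are assumptions for illustration:

```python
def layout_results(ranked, max_videos=3):
    """Arrange a ranked candidate resource queue for display:
    up to max_videos audio-visual results on top, image-text
    results below, both kept in ranked order."""
    videos = [r for r in ranked if r["type"] == "audio-visual"][:max_videos]
    texts = [r for r in ranked if r["type"] == "image-text"]
    return videos + texts
```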
Fig. 8 is a schematic structural diagram of an electronic device in the eighth embodiment of the present application. The electronic device may specifically be a device, a terminal or a server, and may include:
one or more processors 801;
a computer-readable medium 802 configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the resource search method of any of the embodiments described above.
FIG. 9 is a diagram illustrating a hardware configuration of an electronic device according to a ninth embodiment of the present application; as shown in fig. 9, the hardware structure of the electronic device may include: a processor 901, a communication interface 902, a computer-readable medium 903, and a communication bus 904;
wherein the processor 901, the communication interface 902, and the computer readable medium 903 are in communication with each other via a communication bus 904;
optionally, the communication interface 902 may be an interface of a communication module, such as an interface of a GSM module;
the processor 901 may be specifically configured to: determining a demand scene of the current search request according to an analysis result of the current search request; and if the required scene of the current search request is the image-text and audio-visual required scene, acquiring candidate resources matched with the image-text and audio-visual required scene from a resource library.
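The two steps that processor 901 is configured to perform can be sketched as follows. The scene label, the repository representation, and the matching rule are illustrative assumptions:

```python
IMAGE_TEXT_AV = "image-text and audio-visual demand scene"

def resource_search(request, repository, determine_scene):
    # Step 1: determine the demand scene from the analysis result
    # of the current search request.
    scene = determine_scene(request)
    # Step 2: only when the demand scene is the image-text and
    # audio-visual demand scene, acquire matching candidate
    # resources from the resource library.
    if scene != IMAGE_TEXT_AV:
        return []
    return [res for res in repository if res.get("scene") == IMAGE_TEXT_AV]
```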
Processor 901 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. Such a processor may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code configured to perform the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium. When executed by a central processing unit (CPU), the computer program performs the functions defined in the method of the present application.

It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this application, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code configured to carry out operations for the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions configured to implement the specified logical function(s). In the above embodiments, there are specific precedence relationships, but these precedence relationships are only exemplary, and in particular implementation, the steps may be fewer, more, or the execution order may be adjusted. That is, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present application may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes a scene analysis module, a resource acquisition module. The names of these modules do not constitute a limitation to the modules themselves in some cases, for example, the scenario analysis module may also be described as "determining a requirement scenario of a current search request according to an analysis result of the current search request".
As another aspect, the present application also provides a computer readable medium, on which a computer program is stored, which program, when executed by a processor, implements the method as described in any of the embodiments above.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: determining a demand scene of the current search request according to an analysis result of the current search request; and if the required scene of the current search request is the image-text and audio-visual required scene, acquiring candidate resources matched with the image-text and audio-visual required scene from a resource library.
The expressions "first", "second", "said first" or "said second" used in various embodiments of the present disclosure may modify various components regardless of order and/or importance, but these expressions do not limit the respective components. They are used only to distinguish one element from another. For example, a first user equipment and a second user equipment represent different user equipments, although both are user equipments. A first element could be termed a second element, and similarly a second element could be termed a first element, without departing from the scope of the present disclosure.
When an element (e.g., a first element) is referred to as being "operably or communicatively coupled" or "connected" to another element (e.g., a second element), the element may be connected to the other element directly, or indirectly via yet another element (e.g., a third element). In contrast, when an element (e.g., a first element) is referred to as being "directly connected" or "directly coupled" to another element (e.g., a second element), no further element (e.g., a third element) is interposed between them.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements in which any combination of the features described above or their equivalents does not depart from the spirit of the invention disclosed above. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (9)

1. A method for resource search, comprising:
determining a demand scene of a current search request according to an analysis result of the current search request, including: acquiring the similarity sim between the current search request and a corresponding historical search request together with its weight ω1, the number C(q) of image-text and audio-visual keywords among the keywords of the current search request together with its weight ω2, the number sum of image-text and audio-visual resources among the candidate resources matched with the current search request together with its weight ω3, and the number C(r) of image-text and audio-visual keywords among the keywords of the candidate resources together with its weight ω4; calculating a comprehensive weight W according to the following formula, and determining, according to the comprehensive weight W, whether the demand scene of the current search request is the image-text and audio-visual demand scene:
W=ω1*sim+ω2*C(q)+ω3*sum+ω4*C(r)
and if the scene is the image-text and audio-visual demand scene, acquiring candidate resources matched with the image-text and audio-visual demand scene from a resource library.
2. The method of claim 1, further comprising: and carrying out sorting processing on the candidate resources to generate a candidate resource queue.
3. The method of claim 2, wherein the ordering the candidate resources to generate a candidate resource queue comprises:
and determining a predefined sorting dimension, and sorting the candidate resources according to the sorting dimension to generate a candidate resource queue.
4. The method of claim 3, wherein the ordering dimension comprises: at least one of a resource relevance dimension, a resource authority dimension, a resource timeliness dimension, and a resource quality dimension.
5. The method of claim 1, further comprising: and displaying the candidate resources.
6. A resource search apparatus, comprising:
a scene analysis module configured to determine a demand scene of the current search request according to an analysis result of the current search request, by: acquiring the similarity sim between the current search request and a corresponding historical search request together with its weight ω1, the number C(q) of image-text and audio-visual keywords among the keywords of the current search request together with its weight ω2, the number sum of image-text and audio-visual resources among the candidate resources matched with the current search request together with its weight ω3, and the number C(r) of image-text and audio-visual keywords among the keywords of the candidate resources together with its weight ω4; calculating a comprehensive weight W according to the following formula, and determining, according to the comprehensive weight W, whether the demand scene of the current search request is the image-text and audio-visual demand scene:
W=ω1*sim+ω2*C(q)+ω3*sum+ω4*C(r);
and a resource acquisition module configured to acquire, if the demand scene is the image-text and audio-visual demand scene, candidate resources matched with the image-text and audio-visual demand scene from a resource library.
7. The apparatus of claim 6, further comprising: and the resource sorting module is configured to sort the candidate resources to generate a candidate resource queue.
8. An electronic device, comprising:
one or more processors;
a computer readable medium configured to store one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
9. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN201811056722.3A 2018-09-11 2018-09-11 Resource searching method and device, electronic equipment and computer readable medium Active CN109063200B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811056722.3A CN109063200B (en) 2018-09-11 2018-09-11 Resource searching method and device, electronic equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811056722.3A CN109063200B (en) 2018-09-11 2018-09-11 Resource searching method and device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN109063200A CN109063200A (en) 2018-12-21
CN109063200B true CN109063200B (en) 2022-10-14

Family

ID=64761223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811056722.3A Active CN109063200B (en) 2018-09-11 2018-09-11 Resource searching method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN109063200B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110134760A (en) * 2019-05-17 2019-08-16 北京思维造物信息科技股份有限公司 A kind of searching method, device, equipment and medium
CN111310008A (en) * 2020-03-20 2020-06-19 北京三快在线科技有限公司 Search intention recognition method and device, electronic equipment and storage medium
CN111506817A (en) * 2020-04-21 2020-08-07 北京四维智联科技有限公司 Method and system for determining search service

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102419776A (en) * 2011-12-31 2012-04-18 北京百度网讯科技有限公司 Method and equipment for meeting multi-dimensional search requirement of user
CN103942337A (en) * 2014-05-08 2014-07-23 北京航空航天大学 Video search system based on image recognition and matching
CN105159930A (en) * 2015-08-05 2015-12-16 百度在线网络技术(北京)有限公司 Search keyword pushing method and apparatus
CN106874467A (en) * 2017-02-15 2017-06-20 百度在线网络技术(北京)有限公司 Method and apparatus for providing Search Results
CN107590214A (en) * 2017-08-30 2018-01-16 腾讯科技(深圳)有限公司 The recommendation method, apparatus and electronic equipment of search key
CN108416649A (en) * 2018-02-05 2018-08-17 北京三快在线科技有限公司 Search result ordering method, device, electronic equipment and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
RU2640639C2 (en) * 2015-11-17 2018-01-10 Общество С Ограниченной Ответственностью "Яндекс" Method and system of search query processing


Also Published As

Publication number Publication date
CN109063200A (en) 2018-12-21

Similar Documents

Publication Publication Date Title
CN107844586B (en) News recommendation method and device
CN109101658B (en) Information searching method and device, and equipment/terminal/server
WO2016107126A1 (en) Image search method and device
WO2018000575A1 (en) Artificial intelligence-based search result aggregation method and apparatus and search engine
WO2020155750A1 (en) Artificial intelligence-based corpus collecting method, apparatus, device, and storage medium
CN105653572A (en) Resource processing method and apparatus
CN109063200B (en) Resource searching method and device, electronic equipment and computer readable medium
CN109255037B (en) Method and apparatus for outputting information
CN109271556B (en) Method and apparatus for outputting information
CN111680189B (en) Movie and television play content retrieval method and device
CN111314732A (en) Method for determining video label, server and storage medium
CN107977678B (en) Method and apparatus for outputting information
CN109862100B (en) Method and device for pushing information
CN112597396A (en) Search recall ranking method, system and computer readable storage medium
CN108470057B (en) Generating and pushing method, device, terminal, server and medium of integrated information
CN114095749A (en) Recommendation and live interface display method, computer storage medium and program product
CN108334626B (en) News column generation method and device and computer equipment
US20190082236A1 (en) Determining Representative Content to be Used in Representing a Video
CN111897950A (en) Method and apparatus for generating information
CN115080816A (en) Method, device, equipment and medium for generating summary information and displaying search result
US10929447B2 (en) Systems and methods for customized data parsing and paraphrasing
CN112949430A (en) Video processing method and device, storage medium and electronic equipment
TWI709905B (en) Data analysis method and data analysis system thereof
CN111259225A (en) New media information display method and device, electronic equipment and computer readable medium
US9886415B1 (en) Prioritized data transmission over networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200604

Address after: 310051 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: 510627 Guangdong city of Guangzhou province Whampoa Tianhe District Road No. 163 Xiping Yun Lu Yun Ping square B radio tower 12 layer self unit 01

Applicant before: GUANGZHOU SHENMA MOBILE INFORMATION TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information

Address after: Room 554, 5 / F, building 3, 969 Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: Room 508, 5 / F, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: Alibaba (China) Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20220920

Address after: 510665 Room 302, Room 301, No. 38, Gaopu Road, Tianhe District, Guangzhou, Guangdong

Applicant after: UC MOBILE (CHINA) Co.,Ltd.

Address before: Room 554, 5 / F, building 3, 969 Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant before: Alibaba (China) Co.,Ltd.

GR01 Patent grant