CN111753180A - Search method, search device, electronic equipment and computer storage medium - Google Patents
- Publication number
- CN111753180A (application number CN201910235863.XA)
- Authority
- CN
- China
- Prior art keywords
- information
- scene
- node
- search
- target object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0631—Item recommendations
Abstract
Embodiments of the present invention provide a search method, a search apparatus, an electronic device, and a computer storage medium. The search method includes: generating a search request according to a search keyword; acquiring, according to feedback on the search request, information of a target object matched with the search keyword and scene information of a scene matched with the search keyword; and displaying the acquired information of the target object and the scene information of the scene. According to the embodiments of the present invention, user demands can be addressed in a one-stop manner: the user no longer needs to process and filter the large amount of information otherwise required for initial demand-scene cognition, which effectively improves user experience and user stickiness.
Description
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a searching method, a searching device, electronic equipment and a computer storage medium.
Background
The rapid development of networks provides users with a tremendous amount of information, but at the same time makes it difficult for them to find the information that is truly useful. To help users obtain the information they want, most applications provide a search function.
At present, when using a search function, a user mostly enters an explicit search target, that is, searches for a target object by a keyword such as its name. For example, when purchasing a product through a shopping app, the user usually enters the exact category or name of the product to be purchased: after the user enters the term "milk bottle", for instance, a series of milk-bottle products is returned, allowing the shopping app to provide better subsequent shopping guidance and ultimately generate a transaction.
However, this approach is limited in that only target objects whose names or titles match the input search keyword can be provided to the user. On the one hand, the user's knowledge of information related to the target object remains limited, which hinders the user from broadening that knowledge; on the other hand, it prevents the platform providing the search function from offering rich information to the user, which may result in poor user experience and user churn.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a search scheme to at least partially solve the above problems.
According to a first aspect of the embodiments of the present invention, there is provided a search method, including: generating a search request according to the search keyword; according to the feedback of the search request, acquiring information of a target object matched with the search keyword and scene information of a scene matched with the search keyword; and displaying the acquired information of the target object and the scene information of the scene.
According to a second aspect of the embodiments of the present invention, there is provided another search method, including: receiving a commodity search keyword input by a user through a search input box of a shopping platform, and generating a search request according to the commodity search keyword; according to the feedback of the search request, acquiring information of a plurality of commodities matched with the commodity search keyword, and acquiring scene information of at least one shopping scene matched with the commodity search keyword; and displaying the obtained information of the commodities and the scene information of the shopping scene.
According to a third aspect of the embodiments of the present invention, there is provided still another search method, including: acquiring a search keyword from a received search request; determining the information of the matched target object and the scene information of the scene according to the search keyword; and feeding back the information of the target object and the scene information to a sender of the search request.
According to a fourth aspect of the embodiments of the present invention, there is provided a search apparatus including: the generating module is used for generating a search request according to the search keyword; the first acquisition module is used for acquiring information of a target object matched with the search keyword and scene information of a scene matched with the search keyword according to the feedback of the search request; and the display module is used for displaying the acquired information of the target object and the scene information of the scene.
According to a fifth aspect of the embodiments of the present invention, there is provided another search apparatus, including: the request generation module is used for receiving a commodity search keyword input by a user through a search input box of a shopping platform and generating a search request according to the commodity search keyword; the information acquisition module is used for acquiring information of a plurality of commodities matched with the commodity search keyword according to the feedback of the search request and acquiring scene information of at least one shopping scene matched with the commodity search keyword; and the information display module is used for displaying the acquired information of the plurality of commodities and the scene information of the shopping scene.
According to a sixth aspect of the embodiments of the present invention, there is provided still another search apparatus, including: the second acquisition module is used for acquiring a search keyword from the received search request; the first determining module is used for determining the information of the matched target object and the scene information of the scene according to the search keyword; and the sending module is used for feeding back the information of the target object and the scene information to a sender of the search request.
According to a seventh aspect of the embodiments of the present invention, there is provided an electronic apparatus including: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus; the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation corresponding to the search method according to the first aspect, the second aspect or the third aspect.
According to an eighth aspect of embodiments of the present invention, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements a search method as described in the first aspect or the second aspect or the third aspect.
Unlike the conventional approach of obtaining search results by matching the search keyword against the name or title of a target object, the scheme provided by the embodiments of the present invention returns not only information of specific target objects but also scene information of scenes matched with the search keyword. On the one hand, because the scene information is obtained from the search keyword, it can effectively reflect the user's demand scene and hit the user's demand with a higher probability. On the other hand, the scene information is not limited to a specific target object and can cover the various demand-cognition systems that users would otherwise acquire from vertical professional channels or from other people, so the user can obtain richer information about target objects through the scene information, enriching the user's cognition. Compared with the traditional two-node process, in which the user first forms an initial cognition of the demand scene, splits the shopping appeal, collects demand data through various vertical professional channels (such as search engines and Q&A communities) or from experienced users, and only then refines the demand, the scheme of the embodiments of the present invention addresses user demands in a one-stop manner, without requiring the user to process and filter the large amount of information needed for initial demand-scene cognition, effectively improving user experience and user stickiness.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below cover only some embodiments of the present invention; a person skilled in the art can derive other drawings from them.
Fig. 1 is a flowchart illustrating steps of a searching method according to a first embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of a searching method according to a second embodiment of the present invention;
FIG. 3 is a schematic view of an information presentation interface in the embodiment of FIG. 2;
FIG. 4 is a schematic view of another information presentation interface in the embodiment shown in FIG. 2;
FIG. 5 is a schematic diagram of yet another information presentation interface in the embodiment shown in FIG. 2;
FIG. 6 is a flowchart illustrating steps of a searching method according to a third embodiment of the present invention;
FIG. 7 is a flowchart illustrating steps of a searching method according to a fourth embodiment of the present invention;
FIG. 8 is a diagram illustrating a scene information structure in the embodiment shown in FIG. 7;
FIG. 9 is a flowchart illustrating steps of a searching method according to a fifth embodiment of the present invention;
fig. 10 is a block diagram of a search apparatus according to a sixth embodiment of the present invention;
fig. 11 is a block diagram of a search apparatus according to a seventh embodiment of the present invention;
fig. 12 is a block diagram of a search apparatus according to an eighth embodiment of the present invention;
fig. 13 is a block diagram of a searching apparatus according to a ninth embodiment of the present invention;
fig. 14 is a block diagram of a search apparatus according to a tenth embodiment of the present invention;
fig. 15 is a schematic structural diagram of a terminal device according to an eleventh embodiment of the present invention;
fig. 16 is a schematic structural diagram of a server according to a twelfth embodiment of the present invention.
Detailed Description
To enable those skilled in the art to better understand the technical solutions in the embodiments of the present invention, these technical solutions will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.
The following further describes specific implementation of the embodiments of the present invention with reference to the drawings.
Example one
Referring to fig. 1, a flowchart illustrating steps of a searching method according to a first embodiment of the present invention is shown.
The embodiment describes a search method provided by the embodiment of the present invention from the perspective of a client. The searching method of the embodiment comprises the following steps:
step S102: and generating a search request according to the search keyword input by the user.
The search keyword may be a conventional search keyword, including but not limited to a keyword of the target object to be searched, such as a name (e.g., IPHONE X), or a keyword of a category to which the target object to be searched belongs (e.g., a mobile phone).
Step S104: and according to the feedback of the search request, acquiring the information of the target object matched with the search keyword and the scene information of the scene matched with the search keyword.
The feedback on the search request may take the form of a feedback message: the information of the target object and the scene information of the scene may be fed back together in a single feedback message, or separately in a plurality of messages, such as two messages. The feedback message may be generated by the receiver of the search request, such as a server, which first determines the corresponding target objects and scenes according to the search keyword, assembles the message from the information of the target objects and the scene information of the scenes, and then sends it to the sender of the search request, such as a client. A scene can effectively represent a user demand scene related to the search keyword, and information of non-specific target objects related to the search keyword can be further obtained through the scene information. There may be one scene or a plurality of scenes matched with the search keyword.
For example, if the user enters the search keyword "baby bottle" (a category of goods), information on goods whose name or title contains "baby bottle" can be obtained, and in addition scene information of matched scenes can be obtained, such as "baby nursing", "parent-child interaction", "baby complementary food", and "baby travel". After the scene information is displayed to the user, the user can view the content of the corresponding scene as needed; that content may belong to a next-level sub-scene or a next-level category, but in either case the required target object can finally be reached along the path scene → category → target object. Because scene types differ, the user can obtain different types of target objects by viewing different scenes. Depending on the practical application, the target object may be a specific thing or service, and can be set by a person skilled in the art according to actual requirements, such as a commodity, an electronic book, a course, or a service. Classifying target objects yields the category corresponding to each target object, that is, the category or type to which it belongs; for example, the category corresponding to "128G IPHONE X" is "mobile phone".
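The scene → category → target object path described above can be sketched as a simple tree walk. This is a minimal illustration only: the node names, the `Node` type, and the sample hierarchy are assumptions for the "baby bottle" example, not part of the patent's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                      # "scene", "category", or "object"
    children: list = field(default_factory=list)

# Illustrative hierarchy: a scene may contain sub-scenes and categories,
# and categories contain the concrete target objects.
tree = Node("baby nursing", "scene", [
    Node("feeding", "category", [
        Node("baby bottle A", "object"),
        Node("baby bottle B", "object"),
    ]),
    Node("baby travel", "scene", [
        Node("strollers", "category", [Node("stroller X", "object")]),
    ]),
])

def objects_under(node):
    """Collect all target objects reachable along scene -> category -> object."""
    if node.kind == "object":
        return [node.name]
    out = []
    for child in node.children:
        out.extend(objects_under(child))
    return out

print(objects_under(tree))  # → ['baby bottle A', 'baby bottle B', 'stroller X']
```

Whichever scene the user selects, the same descent eventually bottoms out at target-object nodes, which is why different scenes expose different sets of target objects.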
In this specification, terms such as "multiple levels" and "a plurality of" mean two or more unless otherwise specified.
Step S106: and displaying the acquired information of the target object and the scene information of the scene.
The information of the target object may be specific information meeting the user's final demand, such as details of a commodity, through which the target object can be operated on, for example shared, purchased, followed, or added to favorites. The scene information of a scene, in turn, corresponds to category nodes and target-object nodes, so the information of target objects within the corresponding scene can be acquired through the user's step-by-step operations on the displayed scene information and category information.
Through this embodiment, unlike the conventional mode of obtaining search results by matching the search keyword against the name or title of a target object, the scheme of the embodiments of the present invention returns not only information of specific target objects but also scene information of scenes matched with the search keyword. On the one hand, because the scene information is obtained from the search keyword, it can effectively reflect the user's demand scene and hit the user's demand with a higher probability; on the other hand, the scene information is not limited to a specific target object and can cover the various demand-cognition systems that users would otherwise acquire from vertical professional channels or from other people, so the user can obtain richer information about target objects through the scene information, enriching the user's cognition. Compared with the traditional two-node process, in which the user first forms an initial cognition of the demand scene, splits the shopping appeal, collects demand data through various vertical professional channels (such as search engines and Q&A communities) or from experienced users, and only then refines the demand, the scheme of the embodiments of the present invention addresses user demands in a one-stop manner, without requiring the user to process and filter the large amount of information needed for initial demand-scene cognition, effectively improving user experience and user stickiness.
The search method of the present embodiment may be performed by any suitable terminal device having data processing capabilities, including but not limited to: mobile terminals (such as tablet computers, mobile phones and the like), PCs and the like.
Example two
Referring to fig. 2, a flowchart illustrating steps of a searching method according to a second embodiment of the present invention is shown.
The embodiment describes the search method provided by the embodiment of the present invention from the perspective of the client. The searching method of the embodiment comprises the following steps:
step S202: and generating a search request according to the search keyword.
The search keywords are used for indicating target objects to be searched or categories to which the target objects belong; the search keyword may be a conventional search keyword, including but not limited to a keyword, such as a name, of the target object to be searched, or a keyword of a category to which the target object to be searched belongs.
The search keyword may be obtained in an appropriate manner, such as according to an input of a user in a search box, or according to a click of a search option by the user, and the like, which is not limited in this embodiment of the present invention.
Step S204: and according to the feedback of the search request, acquiring the information of the target object matched with the search keyword and the scene information of the scene matched with the search keyword.
The information of the target object and the scene information of the scene may be determined by a receiver of the search request, such as a server, and returned to a sender of the search request, such as a client.
The server side may be preconfigured with correspondence relationships of the form "scene -> category -> target object". After obtaining the search keyword from the search request, the server can determine, in any appropriate manner, the correspondence relationship matched with the search keyword, and then determine the scene in that correspondence relationship as the scene matched with the search keyword.
The corresponding relation of "scene- > category- > target object" is expressed in different forms in different applications, for example, in a form of "shopping scene- > commodity category- > commodity" in a shopping application, in a form of "emotion scene- > audio-video type- > audio-video" in an audio-video playing application, in a form of "interaction intention scene- > topic- > interaction data" in an interaction community application, in a form of "reading intention scene- > electronic book type- > electronic book" in an electronic book reading application, and the like, and the embodiment of the present invention does not limit the specific forms.
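As a toy illustration of the per-application forms listed above, the same generic three-level relation can be instantiated with each domain's own terms. The dictionary keys and field names below are assumptions chosen for the sketch, not identifiers from the patent.

```python
# Each application instantiates the generic "scene -> category -> object"
# relation with its own domain vocabulary (entries taken from the examples
# in the text; the key names are illustrative).
RELATION_FORMS = {
    "shopping":    ("shopping scene", "commodity category", "commodity"),
    "audio_video": ("emotion scene", "audio-video type", "audio-video"),
    "community":   ("interaction-intention scene", "topic", "interaction data"),
    "ebook":       ("reading-intention scene", "electronic-book type", "electronic book"),
}

def relation_form(app: str) -> str:
    """Render the three-level correspondence for a given application type."""
    scene, category, obj = RELATION_FORMS[app]
    return f"{scene} -> {category} -> {obj}"

print(relation_form("shopping"))  # → shopping scene -> commodity category -> commodity
```

The point of the table is that the matching and traversal logic stays the same across applications; only the node vocabulary changes.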
The scene information is not limited to being determined in the above manner. In another possible manner, the scene information of the acquired scene may be determined according to the search keyword together with scene auxiliary information, where the scene auxiliary information includes at least one of: search time information, search position information, and personalized information of the user.
The search time information indicates the time of the search operation; scenes such as holiday scenes and night scenes can be determined through it. The search position information indicates the geographic position of the search operation; scenes such as tourist-area scenes and long-distance business-trip scenes can be determined through it. The personalized information of the user indicates the user's personalized characteristics and can assist in determining the user's demand scenes, such as scenes the user prefers or scenes suitable for the user's relatives. Through the scene auxiliary information, scenes matched with the search keyword can be determined in a more targeted manner, better meeting the user's demand.
When determining a scene matched with the search keyword with the aid of the scene auxiliary information, at least one of the scene, the category, and the target object in the "scene -> category -> target object" correspondence relationship carries attribute information corresponding to the scene auxiliary information. Accordingly, a scene matched with both the scene auxiliary information and the search keyword can be determined from this attribute information, achieving more accurate scene matching. Alternatively, scene matching may be performed by an appropriate algorithm or machine-learning model.
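A minimal sketch of attribute-based matching, assuming each candidate scene carries a keyword set and a time-window attribute standing in for the scene auxiliary information. The scene names, the `hours` attribute, and the data shape are all invented for illustration.

```python
from datetime import datetime

# Candidate scenes with attribute constraints (all values are illustrative).
SCENES = [
    {"name": "night-care scene", "keywords": {"baby bottle"}, "hours": range(20, 24)},
    {"name": "travel scene",     "keywords": {"baby bottle"}, "hours": range(6, 20)},
]

def match_scenes(keyword: str, search_time: datetime) -> list:
    """Keep scenes whose keyword attributes contain the query and whose
    time-window attribute covers the search time (the auxiliary information)."""
    return [
        s["name"]
        for s in SCENES
        if keyword in s["keywords"] and search_time.hour in s["hours"]
    ]

# A 21:30 search for "baby bottle" matches only the night-time scene.
print(match_scenes("baby bottle", datetime(2019, 3, 27, 21, 30)))
```

In practice such hand-written filters would be replaced by the algorithmic or machine-learning matching the text mentions; the filter merely shows where the auxiliary information plugs in.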
In addition, when the correspondence relationship "scene -> category -> target object" includes multiple levels of scene nodes, the root scene node may be determined as the matched scene node, so as to expose more scene information on the way to the category information and the target objects; alternatively, the lowest-level scene node may be determined as the matched scene node, so as to reduce the user's viewing burden while still meeting the user's demand. Of course, any scene node between the uppermost root scene node and the lowest-level scene node may also be determined as the matched scene node.
The information of the target object can be determined conventionally, for example by matching names or titles against the search keyword, or through the above correspondence relationship: after the correspondence relationship matched with the search keyword is determined, the category above the search keyword is identified, and the information of all target objects under that category is taken as the information of target objects matched with the search keyword.
Step S206: and acquiring interactive information in a preset format.
The interactive information is used for displaying preset information and, when the displayed preset information is triggered, realizing interaction with the user; it includes, but is not limited to, interactive information of preset promotional activities. The preset format includes at least one of: a text format, an audio/video format, and a picture format.
It should be noted that this step is an optional step; moreover, the predetermined information can be set as any appropriate information by those skilled in the art according to actual needs, such as promotion activities, games, comments, red envelope distribution, and the like; in practical applications, this step may be executed before step S204, or may be executed in parallel with step S204.
Through the interactive information, information recommendation or activity propaganda can be carried out on the user, information display is enriched, and the interactive effect is improved.
Step S208: and displaying the acquired information.
That is, the acquired information of the target object matched with the search keyword and the scene information of the scene are displayed. If step S206 has been executed and interactive information has been acquired, the interactive information is also displayed in this step.
Step S210: and carrying out operation processing on the displayed information and displaying an operation processing result.
For the obtained information of the target object matched with the search keyword, the information can be directly operated and processed, such as viewing, clicking, sharing, collecting and the like, and then a corresponding operation processing result can be displayed.
Optionally, for the acquired scene information of the scene matched with the search keyword, a selection operation on the displayed scene information may be received; information of a subordinate node of the scene selected by the selection operation is then obtained, where the subordinate node includes a lower-level sub-scene node or a lower-level category node; and the information of the subordinate node is displayed, so that the information of target objects in the scene can be acquired through it. As described above, the node below a scene node may be a sub-scene node or a category node: if the currently displayed scene is selected and its node has sub-scene nodes, the information of those sub-scene nodes is displayed; if its subordinate node is a category node, the information of the category node is displayed directly. In practical applications, multi-level scene nodes can be triggered level by level down to the category-node level, and the category node can then be triggered to acquire and display the information of the target-object nodes in the scene.
Optionally, presenting the information of the subordinate node to obtain the information of target objects in the scene may include: displaying the information of the subordinate node; receiving a selection operation on the displayed information of the subordinate node; if the subordinate node selected is a lower-level category node, displaying the information of the target objects corresponding to that category node; and if the subordinate node selected is a lower-level sub-scene node, displaying the information of that sub-scene node's own subordinate nodes (which are again sub-scene nodes or category nodes) and returning to the step of receiving a selection operation on the displayed information. This level-by-level display makes it convenient for the user to learn the scenes, categories, and target objects related to the search keyword, and to switch between scenes or between categories.
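The selection-handling rule above — a category node reveals its target objects, a sub-scene node reveals its own subordinate nodes which are then selected in the same way — can be sketched as follows. The dictionary layout and node names are illustrative assumptions; a real client would wait for the user's click at each level rather than descend automatically.

```python
def on_select(node: dict) -> list:
    """Apply the selection rule: a category node yields its target objects;
    a sub-scene node yields its subordinate nodes, handled the same way
    (the repeated 'receive selection' step is modeled here as recursion)."""
    if node["kind"] == "category":
        return [child["name"] for child in node["children"]]  # target objects
    shown = []
    for child in node["children"]:   # sub-scene node: descend further
        shown.extend(on_select(child))
    return shown

# Illustrative scene: one lower-level sub-scene and one category side by side.
scene = {"name": "baby care", "kind": "scene", "children": [
    {"name": "parent-child interaction", "kind": "scene", "children": [
        {"name": "picture books", "kind": "category", "children": [
            {"name": "book A", "kind": "object", "children": []},
        ]},
    ]},
    {"name": "jumpsuit", "kind": "category", "children": [
        {"name": "jumpsuit B", "kind": "object", "children": []},
    ]},
]}

print(on_select(scene))  # → ['book A', 'jumpsuit B']
```

The recursion terminates because every downward path ends at a category node, mirroring the guarantee that step-by-step triggering always reaches the category-node level.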
In one example, an information presentation interface is shown in FIG. 3. The left interface of FIG. 3 shows the information displayed after the user inputs the search keyword "baby rompers": in addition to information on three specific baby-romper products, the current interface shows a "baby outfits" scene (indicated by the dotted-line frame in the figure); when more target objects and/or scenes are available, the user can view them by scrolling the interface downwards. When the user clicks the picture corresponding to the "baby outfits" scene, its lower-level scenes are triggered and displayed, as shown in the middle interface of FIG. 3, including scenes such as "baby wear", "travel essentials", "baby care" and "baby bathing". When the user then clicks the picture corresponding to the "baby wear" scene, the category information under that scene is triggered and displayed, as shown in the right interface of FIG. 3, including categories such as "jumpsuit", "pajamas", "summer clothing" and "baby carrier". In this case, if the user clicks any one of the categories, the display of information on specific target objects under that category is triggered, such as information on various baby jumpsuits.
If the currently displayed scene information itself corresponds to sub-scenes, clicking any currently displayed scene picture displays the scene information of the lower-level sub-scenes, and the user can continue operating level by level until the information of the target object nodes is reached through the category nodes.
For example, another information presentation interface is shown in FIG. 4. The left interface in FIG. 4 shows the information displayed after the user inputs the search keyword "baby rompers": in addition to information on three specific baby-romper products, a "baby care" scene is shown (indicated by the dotted-line frame in the figure). When more information is available, the user can view it by scrolling down the interface. When the user clicks the picture corresponding to the "baby care" scene, the scene information of its lower-level sub-scenes is triggered and displayed, as shown in the right interface of FIG. 4, including scenes such as "parent-child interaction", "baby nursing", "baby complementary food" and "baby travel"; when more scenes exist, the user can view them by scrolling down. When the user clicks the picture corresponding to one of these sub-scenes, if that scene still has lower-level sub-scenes, their information continues to be displayed; if the node below it is a category node, the information of the category node is triggered and displayed.
In addition, if step S206 is executed, then in this step the obtained interaction information is displayed together with the information of the target object and the scene information of the scene matching the search keyword. An interface presenting all of this information simultaneously is shown in FIG. 5. The left interface in FIG. 5 shows the information presented after the user inputs the search keyword "baby rompers": in addition to information on three specific baby-romper products, the "baby outfits" scene is shown; meanwhile, the interface also shows the obtained interaction information in the form of multiple pictures, which are rotated at a certain interval. The interaction picture currently displayed in the left interface of FIG. 5 is a "sign-in offer" picture. After the user clicks it, the sign-in interface shown on the right of FIG. 5 is displayed; the user can perform sign-in operations there, and after the number of sign-ins reaches a certain threshold, a corresponding coupon can be issued to the user.
Generally, scene information of a corresponding scene may be obtained according to the search keyword alone, or according to a combination of the search keyword and scene auxiliary information. However, to further meet user requirements and improve the user experience, in one feasible manner the returned information may also be combined with the personalized features of the user. In this case, since the search request carries the user's identifier, the server may further obtain the user's personalized information (such as preference information, family information, and personal information such as age, gender and occupation) based on that identifier, and then filter the scene information to be displayed accordingly, so that different users obtain different scene information even when inputting the same search keyword.
For example, if a scene node currently selected by the user has subordinate nodes that include lower-level scene nodes, then when the information of the subordinate nodes is displayed, personalized information options generated according to the user's personalized information may also be displayed; according to the user's selection among these options, the lower-level scene nodes matching the selected option are determined from the acquired lower-level scene nodes, and the scene information of the matching nodes is displayed. The user's personalized information characterizes the user, such as age, family situation and occupational preferences, or preferred price-range or brand information; the personalized information options correspond to this personalized information. It should be noted that in practical application the personalized information may be any suitable information that characterizes the user, and the personalized information options may be any suitable options accordingly, which this embodiment does not limit.
For example, after the user clicks the picture corresponding to certain scene information, personalized information options are displayed; for instance, after the user clicks the scene picture of "creative gifts", the options "for elders", "for wife" and "for child" are displayed. After the user selects an option, the sub-scenes matching that selection are screened from the obtained sub-scenes according to the user's choice, and their scene information is then displayed.
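The screening step in the example above can be sketched as a simple filter. This is a hypothetical illustration — the `audience` tag and option strings are assumptions, not structures defined by the patent: each sub-scene carries tags for the recipients it targets, and only sub-scenes tagged with the option the user picked are kept.

```python
def filter_subscenes(subscenes, selected_option):
    """Keep only the sub-scenes tagged with the personalized option
    the user selected (e.g. "for elders", "for wife", "for child")."""
    return [s for s in subscenes if selected_option in s.get("audience", [])]
```

The remaining sub-scenes' scene information is what the client then displays.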
For another example, when the information of the subordinate nodes is displayed, the obtained scene information of the lower-level sub-scene nodes and the personalized information options generated according to the user's personalized information may be displayed together; according to the user's selection among these options, the displayed lower-level sub-scene nodes are updated and the updated scene information is displayed.
For example, if the user clicks the scene picture of "Christmas", then besides displaying sub-scenes such as "Christmas gifts", "Christmas food" and "Christmas trips", the options "for wife" and "for daughter" may be displayed in the interface showing the sub-scene information, or through a pop-up window, for the user to select. Based on the user's choice, more targeted information can subsequently be provided.
In another feasible manner of this embodiment, the feedback of the search request may also carry scene information of an associated scene related to the scene matching the search keyword; alternatively, the scene information of the associated scene may be displayed, together with that of the lower-level sub-scene nodes, after the scene information of the scene matching the search keyword is selected. Associated scenes enrich the user's scene cognition and provide richer information.
For example, if the user inputs the search keyword "baby rice cereal", scene information of a "baby food, clothing and supplies" scene can be obtained in addition to various specific baby-rice-cereal products. After the picture corresponding to that scene information is clicked, scene information of associated scenes such as "baby complementary food", "baby nursing" and "baby travel" can be displayed. When displayed, these can be placed in different display areas, so that the user can conveniently obtain more related information and clearly distinguish the scene matching the search keyword from the associated scenes.
Through this embodiment, unlike the conventional approach of obtaining search results by matching the search keyword against the name or title of the target object, the scheme of the embodiment of the present invention returns not only the specific information of the target object but also the scene information of the scene matching the search keyword. On one hand, because the scene information is obtained from the search keyword, it effectively reflects the user's demand scenario and hits the user's demand with a higher probability; on the other hand, the scene information is not limited to a specific target object and can cover the various demand-cognition systems the user would otherwise obtain from vertical professional channels or from other people, so that the user obtains richer information about the target object and the user's cognition is enriched. Compared with the traditional process, in which the user first forms an initial cognition of the demand scenario, collects demand data through vertical professional channels such as Baidu or Zhihu (or from acquaintances), and only then refines the demand, the scheme of this embodiment solves the user's demand in a one-stop manner, spares the user from processing and screening the large amount of information required for that initial cognition, and effectively improves user experience and stickiness.
The search method of the present embodiment may be performed by any suitable terminal device having data processing capabilities, including but not limited to: mobile terminals (such as tablet computers, mobile phones and the like), PCs and the like.
EXAMPLE III
Referring to fig. 6, a flowchart illustrating steps of a searching method according to a third embodiment of the present invention is shown.
In this embodiment, an electronic shopping platform is taken as an example, and the searching method provided by the embodiment of the present invention is still described from the perspective of the client. It should be understood by those skilled in the art that other manners in which a search function is provided and corresponding scenes and scene information can be determined according to an input search keyword can be implemented with reference to the present embodiment.
The searching method of the embodiment comprises the following steps:
step S302: and receiving a commodity search keyword input by a user through a search input box of the shopping platform, and generating a search request according to the commodity search keyword.
For example, the user may perform conventional search input through a conventional search input box, such as entering "baby rompers", "music box", and the like.
Step S304: according to the feedback of the search request, information of a plurality of commodities matched with the commodity search keyword is obtained, and scene information of at least one shopping scene matched with the commodity search keyword is obtained.
For example, the server side may determine a shopping scene corresponding to the product search keyword by using the correspondence "scene → category → target object", or determine a shopping scene corresponding to the product search keyword by using the correspondence and the scene assistance information, and obtain scene information of the shopping scene.
In one example, the user inputs "baby rompers". If the server determines from the scene auxiliary information that it is now time for the spring wardrobe change, scene information for a "baby spring outfits" scene is returned for display at the client. After the user clicks into the corresponding scene at the client, scene information of different levels related to baby rompers can be displayed in turn, according to the user's operations, at a corresponding position such as the head of the search-result display page, and the search results are then recalled based on the scene. In addition, optionally, scene information of associated scenes at the same level as the current scene can be displayed within it; for example, in a baby-clothing scene, associated scenes such as baby early education and baby travel can be seen, letting the user's flow circulate freely.
In another example, the user inputs "music box". The server determines from the scene auxiliary information that Valentine's Day is approaching, and learns through the user's personalized information that the user is male and has a wife and a daughter. The server then presumes that the user may be buying a Valentine's Day gift for his wife, and carries the corresponding information in the feedback of the search request, so that a "Valentine's Day gifts for her" scene is displayed on the search-result display page. If the user enters the scene, he will see scene cards and gift categories for different types of gifts for her, and can choose among the categories. Meanwhile, Valentine's Day need not leave out the other sweetheart in the family (the daughter): gift recipients such as "daughter" and "wife" can be displayed in the interface, and after the user selects "daughter", the gift recipient is switched in the gift flow.
In addition, in addition to the information of the target object matching the commodity search keyword and the scene information of at least one shopping scene, optionally, commodity category information corresponding to each shopping scene may also be acquired. By the method, the information of the target object, the scene information and the commodity category information can be carried in one feedback of the search request, and the information transmission and processing efficiency is improved.
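The single-feedback idea above — carrying item information, scene information and per-scene category information in one response — can be sketched as a payload builder. The field names here are hypothetical, chosen only to illustrate bundling the three kinds of information into one message.

```python
def build_search_feedback(items, scenes):
    """Bundle commodity info, shopping-scene info and the category info
    of each scene into a single feedback payload, so one round trip
    carries everything the client needs to render the result page."""
    return {
        "items": items,  # commodities matching the search keyword
        "scenes": [
            {"scene": s["name"], "categories": s.get("categories", [])}
            for s in scenes
        ],
    }
```

The client can then render the item list and one scene card per entry of `"scenes"`, each card showing its categories.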
Step S306: and displaying the acquired commodity information and the scene information of the shopping scene.
If the commodity category information is also obtained from the feedback of the search request, in this step, the obtained information of the commodity, the scene information and the commodity category information can be displayed at the same time.
In one possible way, the obtained information of the commodity can be displayed and simultaneously the obtained scene information of at least one shopping scene can be displayed by using at least one scene card, wherein each scene card displays the scene information of one shopping scene; and displaying commodity category information corresponding to the current shopping scene in each scene card. By the method, the information display efficiency is improved, and the user can conveniently check and operate the information display device.
Optionally, the interaction information in a preset format may also be acquired, where the preset format includes at least one of: a text format, an audio and video format, and a picture format; and displaying the interactive information. The interactive information includes, but is not limited to, interactive information of a predetermined promotion activity, such as festival discount information, game information, audio and video information, sign-in prompt information, red packet distribution information, and the like.
Step S308: receiving a trigger operation on the displayed commodity category information; and displaying all commodity category information corresponding to the current shopping scene according to the triggering operation, and/or displaying information of a plurality of commodities corresponding to the commodity category information triggered by the triggering operation.
For example, still taking the aforementioned correspondence "scene → category → target object" as an example, when the commodity category information is displayed, a trigger operation on it can be received; according to the trigger operation, all commodity category information corresponding to the current shopping scene is displayed so that information of the corresponding commodities can be obtained through it, and/or information of the commodities corresponding to the triggered category information is displayed. Quick acquisition and operation of information is thus realized.
In addition, optionally, option information of the target intended recipient, determined according to the user's personalized information, can be displayed in one or more of the display interfaces for the scene information, the commodity category information and the commodity information; according to the user's selection among the options, at least one of the following is updated: the displayed scene information of the shopping scene, the commodity category information, and the commodity information. After the information of people related to the user, such as family members, is obtained through the user's personalized information, the user's target intended recipient can be determined — such as sending a Christmas gift to a daughter, wife or elder — so that information can be provided more specifically and the user's needs better met.
Therefore, in this way, the search function provided by the electronic shopping platform can mine the user's potential appeal from the current appeal, based on cognition of the user portrait and understanding of commodities and their relationships, recombine the commodity search-display strategy, and give the user a one-stop, immersive shopping experience in the corresponding scene. For the platform, cross-category purchasing behavior is promoted and purchase volume increases; for the user, cognitive cost is reduced and shopping efficiency improves; the user gradually forms cognition through the platform and, when subsequent purchase appeals arise, can always purchase quickly through the electronic shopping platform.
According to this embodiment, starting from conventional search keywords, the knowledge system that the user would originally acquire through vertical channels such as Baidu or Zhihu can be associated with commodities, and the association is displayed to the user through scene information — telling the user, for instance, what can be bought for a baby's early education, or which kinds of commodities are needed when decorating a living room — so that the retrieval strategy is no longer limited to the original search based on commodity title attributes. Combined with the user portrait, each user has an extremely personalized experience; for example: user A sees early-education content for a one-year-old baby, user B sees early-education content for a six-month-old baby, user C sees a Mediterranean-style living-room decoration recommendation card, and user D sees an American-style living-room decoration recommendation. The user can thus obtain the corresponding commodities through the scene information, user demands are met in a one-stop manner without processing and screening the large amount of information required for initial demand-scene cognition, and user experience and stickiness are effectively improved.
The search method of the present embodiment may be performed by any suitable terminal device having data processing capabilities, including but not limited to: mobile terminals (such as tablet computers, mobile phones and the like), PCs and the like.
Example four
Referring to fig. 7, a flowchart illustrating steps of a searching method according to a fourth embodiment of the present invention is shown.
The embodiment describes a search method provided by the embodiment of the present invention from the perspective of a server. The searching method of the embodiment comprises the following steps:
step S402: and acquiring a search keyword from the received search request.
In this embodiment, as a server, the server receives a search request sent from a client, and obtains a search keyword from the search request.
Step S404: and determining the matched information of the target object and the scene information of the scene according to the search keyword.
The server is preset with multiple correspondences of the form "scene → category → target object", where the scenes may have multiple levels. Through any suitable matching algorithm, such as similarity matching or distance matching, the server can first map the search keyword onto one such correspondence, thereby determining the corresponding scene, and finally reach the corresponding target object through the correspondence. The scene information includes, but is not limited to, text information and/or picture information describing a scene.
A structure of scene information is shown in FIG. 8. As can be seen, the correspondence is illustrated as a five-level tree: three levels of scene nodes (the dotted-line frame portion in FIG. 8), category nodes below the scene nodes, and target object nodes below the category nodes. Of course, FIG. 8 is only an exemplary illustration; in practical applications the number of scene-node levels may be greater, and those skilled in the art may also store more information in the tree structure according to actual requirements.
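The five-level layout of FIG. 8 can be mirrored in a nested mapping. This is a toy sketch with invented node names: dictionaries for the three scene levels and the category level, and a list of target objects at the leaves, plus a small helper that confirms the depth.

```python
# Illustrative five-level tree matching the FIG. 8 layout:
# three scene levels -> category node -> target object leaves.
SCENE_TREE = {
    "baby life": {                       # level-one scene node
        "baby care": {                   # level-two scene node
            "baby outfits": {            # level-three scene node
                "jumpsuit": ["romper A", "romper B"],  # category -> objects
            }
        }
    }
}

def depth(node, d=1):
    """Number of levels from this node down to the target-object leaves."""
    if isinstance(node, list):           # leaf list of target objects
        return d
    return max(depth(child, d + 1) for child in node.values())
```

With three scene levels, one category level and the object leaves, `depth(SCENE_TREE)` is 5, matching the figure.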
Step S406: and feeding back the information of the target object and the scene information to a sender of the search request.
After determining the scene information of the scene matched with the search keyword, the feedback information may be fed back to the sender of the search request, such as the client, in a suitable form, such as a form of generating a corresponding feedback message, and the feedback message may be sent to the sender of the search request, such as the client.
Through this embodiment, the server can provide scene information of the corresponding scene based on the search keyword in the search request. The scene can cover many aspects of the user's demand scenario, including the various demand-cognition systems the user would otherwise obtain from vertical professional channels or from other people, so that the user obtains the corresponding target object through the scene information. User demands are thus solved in a one-stop manner, without the user having to process and screen the large amount of information required for initial demand-scene cognition, effectively improving user experience and stickiness.
EXAMPLE five
Referring to fig. 9, a flowchart illustrating steps of a searching method according to a fifth embodiment of the present invention is shown.
The embodiment describes the searching method provided by the embodiment of the present invention from the perspective of the server. The searching method of the embodiment comprises the following steps:
step S502: and acquiring a search keyword from the received search request.
Generally, the search request also carries the user identifier of the user who sent it; therefore, optionally, this step may be implemented as obtaining both the search keyword and the user identifier from the received search request. In subsequent operations, the server can acquire the user's personalized information through the user identifier, so as to personalize the target of the user's search request.
Step S504: and determining the matched information of the target object and the scene information of the scene according to the search keyword.
In this embodiment, the server is preset with a scene tree storing the correspondence "scene → category → target object", where the scene tree includes: scene nodes, category nodes and target object nodes. The scene nodes comprise at least two levels; the root of the scene tree is a scene node, the target object nodes are leaf nodes, and the category nodes are intermediate nodes between the scene nodes and the target object nodes, as shown in FIG. 8.
Based on the above scene tree, this step can be implemented as follows: determining a corresponding scene tree according to the search keyword; and determining a node corresponding to the search keyword in the scene tree, and determining the scene information of the scene node corresponding to the node as the scene information of the scene matched with the search keyword. Through the form of the scene tree, on one hand, data storage and search are more effective, on the other hand, the hierarchy among the data is simplified and clear, and the nodes are conveniently managed by taking the level as a unit.
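The lookup step — finding the node corresponding to the keyword and taking the scene nodes above it as the matching scene — can be sketched as a depth-first walk. This is a hypothetical illustration: a toy substring match stands in for the similarity/distance matcher the patent mentions, and the tree shape follows the nested-dictionary form (scene levels as dictionaries, target objects as leaf lists).

```python
def find_scene_path(tree, keyword, path=()):
    """Depth-first search of the scene tree; returns the chain of node
    names leading to (and including) the first node whose name contains
    the keyword, or None if nothing matches."""
    for name, child in tree.items():
        if keyword in name:
            return path + (name,)
        if isinstance(child, dict):
            hit = find_scene_path(child, keyword, path + (name,))
            if hit:
                return hit
    return None

SAMPLE_TREE = {
    "baby life": {
        "baby care": {
            "baby outfits": {"jumpsuit": ["romper A", "romper B"]},
        }
    }
}
```

Every non-leaf node on the returned path is a candidate scene node for the keyword; which level is surfaced to the user is a policy choice, as the following passage discusses.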
For example, if the search keyword corresponds to a category node, such as "category node 1" in FIG. 8, the "level-three node 1" corresponding to "category node 1" is determined as the scene node matching the search keyword; meanwhile, optionally, the other scene nodes at the same level as "level-three node 1", such as "level-three node 2" through "level-three node 7", may be determined as associated scene nodes. Alternatively, the "level-two node 1" corresponding to "category node 1" may be determined as the matching scene node, with the other nodes at its level, such as "level-two node 2", "level-two node 3" and "level-two node 4", optionally determined as associated scene nodes; or the "level-one node 1" corresponding to "category node 1" may be determined as the matching scene node.
However, no matter which level of scene node is determined as the one matching the search keyword, as can be seen from the scene tree, the final target object node can always be reached via the category nodes through level-by-level display and operation. Moreover, as described above, when determining associated scene nodes, the other scene nodes at the level of the matched scene node can be used, which is simple and convenient to implement and low in cost.
Further, in the scene tree, target object nodes belonging to the same category node have the same object label. For example, target object nodes 1 to N under "category node 1" all share the same object label, such as "1111". Compared with the traditional approach, in which a target object can be recalled only if its name or title contains the search keyword, this method can effectively recall target objects even when their names or titles contain no identical or similar keyword, thereby avoiding information omission or misjudgment.
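Label-based recall can be sketched in a couple of lines. The catalog shape and the label value "1111" are illustrative, echoing the example above: an item whose title contains no search keyword at all is still recalled, because it shares its category's object label.

```python
def recall_by_label(catalog, query_label):
    """Recall every target object carrying the given object label,
    regardless of whether its title contains the search keyword."""
    return [item for item in catalog if item["label"] == query_label]
```

Contrast this with title matching: the "soft cotton onesie" below would never be recalled by a substring search for "romper", but label recall finds it.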
Therefore, the scene matched with the search keyword can be effectively determined through the scene tree structure. However, in order to further approach the user demand scenario and meet the user demand to realize accurate scenario push, in a feasible manner, scenario assistance information is also obtained, where the scenario assistance information is used to indicate a search scenario. The determining the information of the matched target object and the scene information of the scene according to the search keyword may include: and determining the matched information of the target object and the scene information of the scene according to the search keyword and the scene auxiliary information.
Optionally, the scene assistance information includes at least one of: searching time information, searching position information and personalized information of the user. The specific meanings are as described above, and are not described herein again.
When the scene auxiliary information includes search time information, determining information of the matched target object and scene information of the scene according to the search keyword and the scene auxiliary information may include: determining information of a corresponding festival scene according to the search time information; information of the matched target object is determined according to the search keyword, and scene information of a scene matched with the search keyword is determined according to information of the holiday scene.
When the scene assistance information includes search location information, determining information of the matched target object and scene information of the scene according to the search keyword and the scene assistance information may include: determining information of a corresponding geographic range according to the search position information; and determining the information of the matched target object according to the search keyword, and determining the scene information of the scene matched with the search keyword according to the information of the geographic range.
When the scene auxiliary information includes personalized information of the user, determining information of the matched target object and scene information of the scene according to the search keyword and the scene auxiliary information may include: determining preference information and/or relationship information of a user according to the personalized information of the user; information of the matched target object is determined according to the search keyword, and scene information of a scene matched with the search keyword is determined according to preference information and/or relationship information of the user.
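The three kinds of scene auxiliary information above (search time, search location, personalized information) can be combined in one scoring pass over the candidate scenes. This is a minimal sketch under stated assumptions — the scene attributes (`festival`, `region`, `audience`) and the signal names are invented for illustration; a real implementation would use the preset correspondences or node attributes the next paragraph describes.

```python
def pick_scene(candidates, festival=None, region=None, relations=()):
    """Score each candidate scene against whichever auxiliary signals
    are available and return the best match (ties go to list order)."""
    def score(scene):
        s = 0
        if festival is not None and scene.get("festival") == festival:
            s += 1                      # search-time signal (e.g. a holiday)
        if region is not None and scene.get("region") == region:
            s += 1                      # search-location signal
        if scene.get("audience") in relations:
            s += 1                      # personalized/relationship signal
        return s
    return max(candidates, key=score)
```

For the "music box" example earlier, a Valentine's Day signal plus a "wife" relation would pull the gift scene ahead of a generic one.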
The corresponding relation between the scene auxiliary information and the scene may be preset, or the attribute information corresponding to the scene auxiliary information may be set in a scene node, a category node, or a target object node of the scene tree, thereby realizing determination of a scene in which the search keyword and the scene auxiliary information are simultaneously matched.
For example, if user A and user B both click the "baby complementary food" scene card, and it is determined from user A's personalized information that A's baby is 3 to 6 months old while user B's baby is 1 to 2 years old, then user A will be provided with a "baby complementary food" scene for 3-to-6-month-old babies and user B with one for 1-to-2-year-old babies; that is, the scenes provided to the two users differ.
Step S506: and feeding back the determined information to a sender of the search request.
Specifically, this step includes: feeding back the determined information of the target object and the scene information to the sender of the search request, such as a client.
In addition, if scene information of an associated scene associated with the scene is determined, in this step the information of the target object, the scene information of the scene, and the scene information of the associated scene are carried in a feedback message and sent to the sender of the search request. Feeding back the scene information of the associated scene effectively broadens the user's channels for scene cognition and information acquisition, improves user experience, and increases the user's stickiness to the search platform.
Step S508: receiving a scene trigger message; acquiring information of a lower node of a node corresponding to the scene information according to a node relation of the triggered scene information in a scene tree; and sending the information of the lower node to the sender.
The scene trigger message indicates that certain scene information has been triggered, for example, scene information sent to the sender of the search request is clicked at the sender. Based on the relationship structure of the scene tree, the subordinate node of the scene node corresponding to the triggered scene information, which is displayed next, may itself be a scene node or a category node. If the subordinate node is a scene node, the user can trigger the information corresponding to scene nodes level by level at the client until information of a category node is displayed, and then trigger the information of the corresponding category node until information of the target object is displayed. If the subordinate node is a category node, the information of the category node is displayed at the client, and after the user triggers the information of one category node, the information of the target objects under that category node is displayed. By operating on the displayed information of a target object, the user can view, share, or purchase it, among other operations.
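The level-by-level resolution just described can be sketched as a lookup of a triggered node's children in the scene tree; the tree contents, node names, and node kinds below are illustrative assumptions.

```python
# Hypothetical scene tree: a scene node with category children, whose
# children are target-object leaf nodes.
SCENE_TREE = {
    "outdoor picnic": {"kind": "scene", "children": ["picnic food", "picnic gear"]},
    "picnic food":    {"kind": "category", "children": ["bread", "fruit"]},
    "picnic gear":    {"kind": "category", "children": ["picnic mat"]},
    "bread":          {"kind": "object", "children": []},
    "fruit":          {"kind": "object", "children": []},
    "picnic mat":     {"kind": "object", "children": []},
}

def on_scene_trigger(node_name):
    """On receiving a scene trigger message for node_name, return the
    subordinate nodes (name, kind) to be sent back to the sender."""
    return [(child, SCENE_TREE[child]["kind"])
            for child in SCENE_TREE[node_name]["children"]]
```

Triggering the scene node returns category nodes; triggering a category node returns the target-object nodes under it, mirroring the step-by-step interaction at the client.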
Through this embodiment, the server side can provide scene information of a corresponding scene based on the search keywords in the search request. The scene can cover multiple aspects of the user's demand scene, including the demand-cognition systems that the user would otherwise have to obtain from vertical professional channels or from other people. The user can therefore obtain the corresponding target object through the scene information and have the demand addressed in one stop, without having to process and screen the large amount of information required for initial demand-scene cognition, which effectively improves user experience and stickiness.
Example six
Referring to fig. 10, a block diagram of a search apparatus according to a sixth embodiment of the present invention is shown.
The search apparatus of this embodiment may be provided at a client, and the search apparatus includes: a generating module 602, configured to generate a search request according to the search keyword; a first obtaining module 604, configured to obtain information of a target object matching the search keyword and scene information of a scene matching the search keyword according to feedback of the search request; a displaying module 606, configured to display the obtained information of the target object and the scene information of the scene.
The searching apparatus of this embodiment is used to implement the corresponding client-side searching method in the foregoing multiple method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the search apparatus of this embodiment is used to implement the search method, and therefore the description is relatively simple, and relevant portions may refer to the description of corresponding portions in the foregoing method embodiments, and are not described herein again.
Example seven
Referring to fig. 11, a block diagram of a search apparatus according to a seventh embodiment of the present invention is shown.
The search apparatus of this embodiment may be provided at a client, and the search apparatus includes: a generating module 702, configured to generate a search request according to the search keyword; a first obtaining module 704, configured to obtain, according to feedback of the search request, information of a target object matching the search keyword and scene information of a scene matching the search keyword from the search result; and a display module 706, configured to display the obtained information of the target object and the scene information of the scene.
Optionally, the search apparatus of this embodiment further includes: a lower node module 708, configured to receive a selection operation of the scene information of the displayed scene; obtaining information of a subordinate node of the scene selected by the selecting operation, wherein the subordinate node includes: a lower level sub-scene node, or a lower level category node; and displaying the information of the subordinate node so as to acquire the information of the target object under the scene through the information of the subordinate node.
Optionally, when the lower node module 708 acquires the information of the target object in the scene through the information of the lower node: receiving a selection operation of the displayed information of the subordinate node; if the subordinate node selected by the selection operation is the subordinate category node, displaying information of a target object corresponding to the subordinate category node; and if the lower node selected by the selection operation is the lower sub-scene node, displaying the information of the lower node of the lower sub-scene node, and returning to the process of receiving the selection operation of the displayed information of the lower node to continue execution, wherein the lower node of the lower sub-scene node is the sub-scene node or the category node.
Optionally, when the subordinate node includes the subordinate sub-scene node, the subordinate node module 708 presents the information of the subordinate node by: displaying personalized information options generated according to the personalized information of the user; according to the user's selection operation on the personalized information options, determining, from the acquired subordinate sub-scene nodes, a subordinate sub-scene node matching the selected personalized information option, and displaying the scene information of the matched subordinate sub-scene node; or, displaying the acquired scene information of the subordinate sub-scene nodes together with personalized information options generated according to the personalized information of the user, and, according to the user's selection operation on the personalized information options, updating the displayed subordinate sub-scene nodes and displaying the updated scene information of the subordinate sub-scene nodes.
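The first display mode above (show only the sub-scene nodes matching the selected option) can be sketched as a simple filter; the option keys and node fields are assumptions for illustration.

```python
def matching_sub_scenes(sub_scenes, selected_option):
    """Mode 1: given the acquired sub-scene nodes and the personalized
    information option the user selected, return only the matching nodes'
    scene names. Field names here are hypothetical."""
    return [node["name"] for node in sub_scenes
            if node["option"] == selected_option]

# Illustrative acquired sub-scene nodes, each tagged with the
# personalized-information option it corresponds to.
SUB_SCENES = [
    {"name": "baby eating assistant (3-6 months)", "option": "3-6m"},
    {"name": "baby eating assistant (1-2 years)", "option": "1-2y"},
]
```

The second mode would instead display all nodes plus the options, then re-run the same filter on each selection to update the display.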
Optionally, the lower node module 708 is further configured to display scene information of an associated scene associated with the lower node.
Optionally, the search apparatus of this embodiment further includes: the interaction module 710 is configured to obtain interaction information in a preset format, where the preset format includes at least one of: a text format, an audio and video format, and a picture format; the display module 706 is further configured to display the interaction information.
Optionally, the interaction information is interaction information of a predetermined promotional activity.
Optionally, the search keyword is used to indicate a target object to be searched or a category to which the target object belongs; the scene information of the scene is determined according to the search keyword and the scene auxiliary information, wherein the scene auxiliary information comprises at least one of the following: searching time information, searching position information and personalized information of the user.
Optionally, the generating module 702 is configured to receive a product search keyword input by a user through a search input box of a shopping platform, and generate a search request according to the product search keyword; the first obtaining module 704 is configured to obtain information of a plurality of commodities matching the commodity search keyword according to the feedback of the search request, and obtain scene information of at least one shopping scene matching the commodity search keyword.
Optionally, the first obtaining module 704 is configured to obtain, according to the feedback of the search request, information of a plurality of commodities that match the commodity search keyword, and obtain scene information of at least one shopping scene that matches the commodity search keyword, and commodity category information corresponding to each shopping scene.
Optionally, the displaying module 706 is configured to display the obtained information of the commodity, and display scene information of the at least one shopping scene by using at least one scene card, where each scene card displays scene information of one shopping scene; and displaying commodity category information corresponding to the current shopping scene in each scene card.
Optionally, the search apparatus of this embodiment further includes: a first triggering module 712, configured to receive a triggering operation on the displayed commodity category information; and displaying all commodity category information corresponding to the current shopping scene according to the triggering operation, and/or displaying information of a plurality of commodities corresponding to the commodity category information triggered by the triggering operation.
Optionally, the search apparatus of this embodiment further includes: an option module 714, configured to display option information of the target intention person determined according to the personalized information of the user; according to the selection operation of the user on the option information, updating at least one of the following information: scene information of the displayed shopping scene, commodity category information and commodity information.
The searching apparatus of this embodiment is used to implement the corresponding client-side searching method in the foregoing multiple method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the search apparatus of this embodiment is used to implement the search method, and therefore the description is relatively simple, and relevant portions may refer to the description of corresponding portions in the foregoing method embodiments, and are not described herein again.
Example eight
Referring to fig. 12, a block diagram of a search apparatus according to an eighth embodiment of the present invention is shown.
The search apparatus of this embodiment may be disposed at a server side, and the search apparatus includes: a second obtaining module 802, configured to obtain a search keyword from a received search request; a first determining module 804, configured to determine, according to the search keyword, information of a target object and scene information of a scene that are matched with each other; a sending module 806, configured to feed back the information of the target object and the scene information to a sender of the search request.
The searching apparatus of this embodiment is used to implement the corresponding server-side searching method in the foregoing multiple method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the search apparatus of this embodiment is used to implement the search method, and therefore the description is relatively simple, and relevant portions may refer to the description of corresponding portions in the foregoing method embodiments, and are not described herein again.
Example nine
Referring to fig. 13, a block diagram of a search apparatus according to a ninth embodiment of the present invention is shown.
The search apparatus of this embodiment may be disposed at a server side, and the search apparatus includes: a second obtaining module 902, configured to obtain a search keyword from a received search request; a first determining module 904, configured to determine, according to the search keyword, information of a target object and scene information of a scene that are matched with each other; a sending module 906, configured to feed back the information of the target object and the scene information to a sender of the search request.
Optionally, the second obtaining module 902 is further configured to obtain scene assistance information, where the scene assistance information is used to indicate a search scene; the first determining module 904 is configured to determine, according to the search keyword and the scene auxiliary information, information of a target object and scene information of a scene that are matched with each other.
Optionally, the scene assistance information includes at least one of: searching time information, searching position information and personalized information of the user.
Optionally, when the scene assistance information includes the search time information, the first determining module 904 includes: the time information module 9042 is configured to determine information of a corresponding holiday scene according to the search time information; and determining the information of the matched target object according to the search keyword, and determining the scene information of the scene matched with the search keyword according to the information of the holiday scene.
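A minimal sketch of mapping the search time to a holiday scene follows; the dates and scene names are illustrative assumptions, not prescribed by the embodiment.

```python
import datetime

# Hypothetical preset mapping from (month, day) of the search time to a
# holiday scene.
HOLIDAY_SCENES = {
    (2, 14): "Valentine's Day gifts",
    (12, 25): "Christmas gifts",
}

def holiday_scene_for(search_time: datetime.date):
    """Return the holiday scene matching the search time, or None if the
    search time falls on no preset holiday."""
    return HOLIDAY_SCENES.get((search_time.month, search_time.day))
```

The scene information returned to the sender would then be determined from the matched holiday scene together with the target objects matching the search keyword.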
Optionally, when the scene assistance information includes the search location information, the first determining module 904 includes: a location information module 9044, configured to determine, according to the search location information, information of a corresponding geographic range; and determining the information of the matched target object according to the search keyword, and determining the scene information of the scene matched with the search keyword according to the information of the geographic range.
Optionally, when the scene assistance information includes personalized information of the user, the first determining module 904 includes: the personalized information module 9046, configured to determine preference information and/or relationship information of the user according to the personalized information of the user; and determine information of the matched target object according to the search keyword, and determine scene information of a scene matched with the search keyword according to the preference information and/or relationship information of the user.
Optionally, the first determining module 904 is further configured to determine scene information of an associated scene associated with the scene; the sending module 906 is configured to feed back the information of the target object, the scene information of the scene, and the scene information of the associated scene to a sender of the search request.
Optionally, the first determining module 904 is configured to determine a corresponding scene tree according to the search keyword, where the scene tree includes scene nodes, category nodes and target object nodes, the scene nodes include at least two levels of nodes, a root node of the scene tree is a scene node, the target object nodes of the scene tree are leaf nodes, and the category nodes are intermediate nodes between the scene node and the target object nodes; determining a node corresponding to the search keyword in the scene tree, and determining scene information of a scene node corresponding to the node as scene information of a scene matched with the search keyword.
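The scene tree structure described above (scene root node, category intermediate nodes, target-object leaf nodes) and the keyword-to-scene lookup can be sketched as follows; the node names are illustrative assumptions.

```python
class Node:
    """A scene-tree node; kind is one of 'scene', 'category', 'object'."""
    def __init__(self, name, kind, children=()):
        self.name = name
        self.kind = kind
        self.children = list(children)

def find_scene_for_keyword(root, keyword):
    """Walk the scene tree; if the keyword matches a node, return the name
    of the nearest enclosing scene node, i.e., the matched scene."""
    def walk(node, last_scene):
        scene = node.name if node.kind == "scene" else last_scene
        if keyword in node.name:
            return scene
        for child in node.children:
            hit = walk(child, scene)
            if hit is not None:
                return hit
        return None
    return walk(root, None)
```

For example, if the keyword hits a category node, the scene information fed back is that of the scene node above it in the tree.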
Optionally, the search apparatus of this embodiment further includes: a second determining module 908, configured to determine other scene nodes in the scene level where the scene node is located as associated scene nodes.
Optionally, the search apparatus of this embodiment further includes: a second triggering module 910, configured to receive a scenario trigger message, where the scenario trigger message is used to indicate that the scenario information is triggered; acquiring information of lower nodes of nodes corresponding to the scene information according to the triggered node relationship of the scene information in the scene tree; and sending the information of the lower node to the sender.
Optionally, target object nodes belonging to the same category node have the same object label.
The searching apparatus of this embodiment is used to implement the corresponding server-side searching method in the foregoing multiple method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the search apparatus of this embodiment is used to implement the search method, and therefore the description is relatively simple, and relevant portions may refer to the description of corresponding portions in the foregoing method embodiments, and are not described herein again.
Example ten
Referring to fig. 14, a block diagram of a search apparatus according to a tenth embodiment of the present invention is shown.
The search apparatus of this embodiment may be provided at a client, and the search apparatus includes: a request generating module 901, configured to receive a commodity search keyword input by a user through a search input box of a shopping platform, and generate a search request according to the commodity search keyword; an information obtaining module 903, configured to obtain information of multiple commodities matched with the commodity search keyword according to the feedback of the search request, and obtain scene information of at least one shopping scene matched with the commodity search keyword; and an information displaying module 905, configured to display the obtained information of the target object and the scene information of the scene.
Optionally, the information obtaining module 903 is configured to obtain, according to the feedback of the search request, information of a plurality of commodities matched with the commodity search keyword, and obtain scene information of at least one shopping scene matched with the commodity search keyword, and commodity category information corresponding to each shopping scene.
Optionally, the information displaying module 905 is configured to display the obtained information of the commodity, and display scene information of the at least one shopping scene by using at least one scene card, where each scene card displays scene information of one shopping scene; and displaying commodity category information corresponding to the current shopping scene in each scene card.
Optionally, the search apparatus of this embodiment further includes: a second trigger module 907 for receiving a trigger operation on the displayed commodity category information; and displaying all commodity category information corresponding to the current shopping scene according to the triggering operation, and/or displaying information of a plurality of commodities corresponding to the commodity category information triggered by the triggering operation.
Optionally, the search apparatus of this embodiment further includes: an intention module 909 for presenting option information of a target intention person determined according to the personalized information of the user; according to the selection operation of the user on the option information, updating at least one of the following information: scene information of the displayed shopping scene, commodity category information and commodity information.
The searching apparatus of this embodiment is used to implement the corresponding client-side searching method in the foregoing multiple method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the search apparatus of this embodiment is used to implement the search method, and therefore the description is relatively simple, and relevant portions may refer to the description of corresponding portions in the foregoing method embodiments, and are not described herein again.
Example eleven
Referring to fig. 15, a schematic structural diagram of an electronic device according to an eleventh embodiment of the present invention is shown. In this embodiment, the electronic device is a terminal device, and the specific embodiments of the present invention do not limit the specific implementation of the terminal device.
As shown in fig. 15, the terminal device may include: a processor (processor)1002, a Communications Interface 1004, a memory 1006, and a Communications bus 1008.
Wherein:
the processor 1002, communication interface 1004, and memory 1006 communicate with each other via a communication bus 1008.
A communication interface 1004 for communicating with other electronic devices, such as other terminal devices or servers.
The processor 1002 is configured to execute the program 1010, and may specifically perform relevant steps in the above embodiment of the client search method.
In particular, the program 1010 may include program code that includes computer operating instructions.
The processor 1002 may be a central processing unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The electronic device comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 1006 is used for storing the program 1010. The memory 1006 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 1010 may be specifically configured to cause the processor 1002 to perform the following operations: generating a search request according to a search keyword input by a user; according to the feedback of the search request, acquiring information of a target object matched with the search keyword and scene information of a scene matched with the search keyword; and displaying the acquired information of the target object and the scene information of the scene.
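The three client-side operations above (generate a search request, obtain the matched target-object and scene information from the feedback, display both) can be sketched as below; the server call is stubbed and all names are assumptions.

```python
def fake_server(request):
    """Stand-in for the server side; returns hypothetical feedback."""
    return {"objects": ["thermal gloves A", "thermal gloves B"],
            "scenes": ["winter outdoor ride"]}

def client_search(keyword, send=fake_server):
    """Client flow: generate the search request, obtain feedback, and
    assemble both the target-object info and the scene info for display."""
    request = {"keyword": keyword}        # generate search request
    feedback = send(request)              # obtain feedback of the request
    return {"objects": feedback["objects"],  # information to display
            "scenes": feedback["scenes"]}
```

The key point the sketch shows is that the client displays scene information alongside, not instead of, the matched target objects.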
In an alternative embodiment, the program 1010 is further configured to enable the processor 1002 to receive a selection operation of scene information of the scene to be shown; obtaining information of a subordinate node of the scene selected by the selecting operation, wherein the subordinate node includes: a lower level sub-scene node, or a lower level category node; and displaying the information of the subordinate node so as to acquire the information of the target object under the scene through the information of the subordinate node.
In an alternative embodiment, the program 1010 is further configured to enable the processor 1002 to receive a selection operation of the information of the exposed lower node when the information of the target object in the scene is acquired through the information of the lower node; if the subordinate node selected by the selection operation is the subordinate category node, displaying information of a target object corresponding to the subordinate category node; and if the lower node selected by the selection operation is the lower sub-scene node, displaying the information of the lower node of the lower sub-scene node, and returning to the process of receiving the selection operation of the displayed information of the lower node to continue execution, wherein the lower node of the lower sub-scene node is the sub-scene node or the category node.
In an alternative embodiment, when the subordinate node includes the subordinate sub-scene node, the program 1010 is further configured to enable the processor 1002, when displaying the information of the subordinate node, to display personalized information options generated according to the personalized information of the user; according to the user's selection operation on the personalized information options, determine, from the acquired subordinate sub-scene nodes, a subordinate sub-scene node matching the selected personalized information option, and display the scene information of the matched subordinate sub-scene node; or, display the acquired scene information of the subordinate sub-scene nodes together with personalized information options generated according to the personalized information of the user, and, according to the user's selection operation on the personalized information options, update the displayed subordinate sub-scene nodes and display the updated scene information of the subordinate sub-scene nodes.
In an alternative embodiment, the program 1010 is further configured to enable the processor 1002 to display scene information of an associated scene associated with the lower level scene node.
In an alternative embodiment, the program 1010 is further configured to enable the processor 1002 to obtain the interaction information in a preset format, where the preset format includes at least one of: a text format, an audio and video format, and a picture format; and displaying the interactive information.
In an optional implementation manner, the interaction information is interaction information of a predetermined promotional activity.
In an alternative embodiment, the search keyword is used to indicate a target object to be searched or a category to which the target object belongs; the scene information of the scene is determined according to the search keyword and the scene auxiliary information, wherein the scene auxiliary information comprises at least one of the following: searching time information, searching position information and personalized information of the user.
In an alternative embodiment, the program 1010 is further configured to enable the processor 1002, when generating a search request according to a search keyword, to receive a commodity search keyword input by a user through a search input box of a shopping platform, and generate a search request according to the commodity search keyword; when acquiring information of a target object matching the search keyword and scene information of a scene matching the search keyword, acquiring information of a plurality of commodities matching the commodity search keyword and scene information of at least one shopping scene matching the commodity search keyword.
In an alternative embodiment, the program 1010 is further configured to, when acquiring the scene information of at least one shopping scene matching the item search keyword, acquire the scene information of at least one shopping scene matching the item search keyword and the item category information corresponding to each shopping scene.
In an alternative embodiment, the program 1010 is further configured to cause the processor 1002 to display the acquired information of the commodity when displaying the acquired information of the target object and the scene information of the scene, and display the scene information of the at least one shopping scene by using at least one scene card, wherein each scene card displays the scene information of one shopping scene; and displaying commodity category information corresponding to the current shopping scene in each scene card.
In an alternative embodiment, the program 1010 is further configured to enable the processor 1002 to receive a trigger operation for the displayed merchandise category information; and displaying all commodity category information corresponding to the current shopping scene according to the triggering operation, and/or displaying information of a plurality of commodities corresponding to the commodity category information triggered by the triggering operation.
In an alternative embodiment, the program 1010 is further configured to cause the processor 1002 to present option information of the targeted intent person determined according to the personalized information of the user; according to the selection operation of the user on the option information, updating at least one of the following information: scene information of the displayed shopping scene, commodity category information and commodity information.
For specific implementation of each step in the program 1010, reference may be made to corresponding steps and corresponding descriptions in units in the foregoing embodiment of the client search method, which are not described herein again. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
With the terminal device of this embodiment, unlike the conventional mode of obtaining search results by matching the search keyword against the name or title of the target object, the scheme of the embodiment of the present invention returns not only specific information of the target object but also scene information of the scene matched with the search keyword. On one hand, since the scene information is obtained from the search keyword, it can effectively reflect the user's demand scene and hit the user's demand with a higher probability. On the other hand, the scene information is not limited to a specific target object and can cover the demand-cognition systems that the user would otherwise obtain from vertical professional channels or from other people, so the user can obtain richer target-object information through the scene information, enriching the user's cognition. Compared with the traditional approach, in which the user splits the shopping appeal into two stages, first forming an initial cognition of the demand scene and then collecting demand data through various vertical professional channels or from acquaintances before further determining the demand, the scheme of the embodiment of the present invention can address the user's demand in one stop, without the user having to process and screen the large amount of information required for initial demand-scene cognition, effectively improving user experience and stickiness.
Example twelve
Referring to fig. 16, a schematic structural diagram of an electronic device according to a twelfth embodiment of the present invention is shown. In this embodiment, the electronic device is a server, and the specific embodiments of the present invention do not limit the specific implementation of the server.
As shown in fig. 16, the server may include: a processor (processor)1102, a communication Interface 1104, a memory 1106, and a communication bus 1108.
Wherein:
the processor 1102, communication interface 1104, and memory 1106 communicate with one another via a communication bus 1108.
A communication interface 1104 for communicating with other electronic devices, such as terminal devices or other servers.
The processor 1102 is configured to execute the program 1110, and may specifically perform relevant steps in the above search method embodiment.
In particular, the program 1110 can include program code that includes computer operating instructions.
The processor 1102 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the present invention. The electronic device comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
A memory 1106 for storing a program 1110. The memory 1106 may comprise high-speed RAM and may also include non-volatile memory, such as at least one disk storage device.
The program 1110 may be specifically configured to cause the processor 1102 to perform the following operations: acquiring a search keyword from a received search request; determining the information of the matched target object and the scene information of the scene according to the search keyword; and feeding back the information of the target object and the scene information to a sender of the search request.
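The three operations above can be sketched as a single request handler. This is an illustrative sketch only, not code from the patent; the index structures and all names (`handle_search_request`, `object_index`, `scene_index`) are hypothetical assumptions.

```python
# Hypothetical sketch of the program-1110 flow: acquire the search keyword
# from the request, determine matching target-object info and scene info,
# and feed both back to the sender.

def handle_search_request(request, object_index, scene_index):
    """Return matched target-object info plus scene info for one request."""
    keyword = request["keyword"]              # acquire the search keyword
    objects = object_index.get(keyword, [])   # information of matched target objects
    scene = scene_index.get(keyword)          # scene info matched to the keyword
    return {"objects": objects, "scene": scene}  # fed back to the sender

# Toy indexes standing in for the server's real matching logic.
object_index = {"rice cooker": [{"title": "3L rice cooker"}]}
scene_index = {"rice cooker": {"name": "new-home kitchen setup"}}

result = handle_search_request({"keyword": "rice cooker"},
                               object_index, scene_index)
```

The point of the sketch is the shape of the response: the sender always receives the target-object information together with the matched scene information, rather than object titles alone.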
In an alternative embodiment, the program 1110 is further configured to cause the processor 1102 to obtain scene auxiliary information, where the scene auxiliary information is used for indicating a search scene; the program 1110 is further configured to enable the processor 1102, when determining the information of the matched target object and the scene information of the scene according to the search keyword, to determine them according to the search keyword and the scene auxiliary information.
In an optional embodiment, the scene auxiliary information includes at least one of: search time information, search position information, and personalized information of the user.
In an alternative embodiment, when the scene auxiliary information includes the search time information, the program 1110 is further configured to enable the processor 1102, when determining the information of the matched target object and the scene information of the scene according to the search keyword and the scene auxiliary information, to determine information of a corresponding holiday scene according to the search time information; and to determine the information of the matched target object according to the search keyword, and determine the scene information of the scene matched with the search keyword according to the information of the holiday scene.
In an alternative embodiment, when the scene auxiliary information includes the search position information, the program 1110 is further configured to enable the processor 1102, when determining the information of the matched target object and the scene information of the scene according to the search keyword and the scene auxiliary information, to determine information of a corresponding geographic range according to the search position information; and to determine the information of the matched target object according to the search keyword, and determine the scene information of the scene matched with the search keyword according to the information of the geographic range.
In an alternative embodiment, when the scene auxiliary information includes the personalized information of the user, the program 1110 is further configured to enable the processor 1102, when determining the information of the matched target object and the scene information of the scene according to the search keyword and the scene auxiliary information, to determine preference information and/or relationship-person information of the user according to the personalized information of the user; and to determine the information of the matched target object according to the search keyword, and determine the scene information of the scene matched with the search keyword according to the preference information and/or the relationship-person information of the user.
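The three kinds of scene auxiliary information above can be sketched together. This is a minimal illustration, not the patent's implementation; the holiday table, radius default, and all helper names are hypothetical assumptions.

```python
import datetime

# Hypothetical sketch of deriving scene-narrowing signals from the three
# kinds of scene auxiliary information: search time -> holiday scene,
# search position -> geographic range, personalized info -> preferences
# and relationship persons.

HOLIDAYS = {(1, 1): "New Year", (6, 1): "Children's Day"}  # assumed toy table

def holiday_from_time(ts: datetime.date):
    """Map the search time to a holiday scene, if any."""
    return HOLIDAYS.get((ts.month, ts.day))

def geo_range_from_position(lat, lon, radius_km=10):
    """Turn the search position into a geographic range."""
    return {"lat": lat, "lon": lon, "radius_km": radius_km}

def preferences_from_profile(profile):
    """Extract preference and relationship-person info from personalized info."""
    return {"preferences": profile.get("likes", []),
            "relations": profile.get("family", [])}

aux = {
    "holiday": holiday_from_time(datetime.date(2019, 6, 1)),
    "geo": geo_range_from_position(31.23, 121.47),
    "personal": preferences_from_profile({"likes": ["outdoor"],
                                          "family": ["child"]}),
}
```

Each derived signal would then be combined with the keyword match to pick the scene, e.g. a "Children's Day gift" scene when the search time falls on that holiday.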
In an alternative embodiment, the program 1110 is further configured to cause the processor 1102 to determine scene information of an associated scene associated with the scene; the program 1110 is further configured to enable the processor 1102, when feeding back the information of the target object and the scene information to the sender of the search request, to feed back the information of the target object, the scene information of the scene, and the scene information of the associated scene to the sender of the search request.
In an optional implementation, the program 1110 is further configured to enable the processor 1102, when determining the information of the matched target object and the scene information of the scene according to the search keyword, to determine a corresponding scene tree according to the search keyword, where the scene tree includes scene nodes, category nodes, and target object nodes; the scene nodes include at least two levels of nodes; the root node of the scene tree is a scene node; the target object nodes of the scene tree are leaf nodes; and the category nodes are intermediate nodes between the scene nodes and the target object nodes; and to determine the node corresponding to the search keyword in the scene tree, and determine the scene information of the scene node corresponding to that node as the scene information of the scene matched with the search keyword.
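The scene tree described above can be sketched as a small node structure. This is an illustrative sketch under assumed names, not the patent's data model; the example tree content is invented.

```python
# Minimal sketch of the scene tree: the root is a scene node, scene nodes
# span at least two levels, category nodes sit between scene nodes and
# target-object nodes, and target objects are the leaves.

class Node:
    def __init__(self, name, kind, children=None):
        self.name = name
        self.kind = kind              # "scene" | "category" | "object"
        self.children = children or []

    def find(self, keyword):
        """Depth-first lookup of the node whose name matches the keyword."""
        if self.name == keyword:
            return self
        for child in self.children:
            hit = child.find(keyword)
            if hit:
                return hit
        return None

    def scene_of(self, keyword, current_scene=None):
        """Return the nearest enclosing scene node for a matched keyword."""
        if self.kind == "scene":
            current_scene = self
        if self.name == keyword:
            return current_scene
        for child in self.children:
            hit = child.scene_of(keyword, current_scene)
            if hit:
                return hit
        return None

tree = Node("maternal & infant", "scene", [       # root scene node
    Node("newborn care", "scene", [               # second-level scene node
        Node("feeding", "category", [             # intermediate category node
            Node("feeding bottle", "object"),     # leaf target-object node
        ]),
    ]),
])

scene = tree.scene_of("feeding bottle")
```

A keyword that matches any node in the tree thus resolves to the scene node above it, and that scene node's information is what gets returned as the matched scene.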
In an alternative embodiment, the program 1110 is further configured to enable the processor 1102 to determine other scene nodes in the scene level where the scene node is located as associated scene nodes.
In an alternative embodiment, the program 1110 is further configured to cause the processor 1102 to receive a scene trigger message, where the scene trigger message is used to indicate that the scene information is triggered; acquire information of the lower nodes of the node corresponding to the scene information according to the node relationship of the triggered scene information in the scene tree; and send the information of the lower nodes to the sender.
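The trigger-message handling above amounts to a child lookup in the scene tree followed by a reply to the sender. The sketch below is a hypothetical illustration; the flattened parent-to-children map and all names are assumptions.

```python
# Hypothetical sketch of scene-trigger handling: when the sender triggers
# displayed scene information, look up that node's lower nodes in the scene
# tree (flattened here to a dict) and send their info back.

CHILDREN = {
    "newborn care": ["feeding", "bathing"],     # scene node -> lower nodes
    "feeding": ["feeding bottle", "formula"],   # category node -> target objects
}

def on_scene_trigger(message):
    """Handle one scene trigger message and build the reply to its sender."""
    node = message["triggered_node"]
    lower = CHILDREN.get(node, [])              # lower nodes per tree relationship
    return {"to": message["sender"], "lower_nodes": lower}

reply = on_scene_trigger({"sender": "client-1",
                          "triggered_node": "newborn care"})
```

Repeating the same lookup on a returned lower node walks the user down the tree, scene to category to target object, which matches the drill-down interaction the embodiments describe.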
In an alternative embodiment, target object nodes belonging to the same category node have the same object label.
For the specific implementation of each step in the program 1110, reference may be made to the corresponding steps and descriptions in the above embodiment of the server-side search method, which are not repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the devices and modules described above may refer to the corresponding process descriptions in the foregoing method embodiments, which are likewise not repeated here.
Through the server of this embodiment, the server side can provide the scene information of a corresponding scene based on the search keyword in the search request. The scene can cover multiple aspects of the user's demand scene, including the demand cognition systems that users would otherwise build from vertical professional channels or from other people. The user can therefore obtain the corresponding target object through the scene information, and user demands are satisfied in a one-stop manner, without the user having to process and screen the large amount of information required for initial demand-scene cognition, effectively improving user experience and stickiness.
It should be noted that, according to the implementation requirement, each component/step described in the embodiment of the present invention may be divided into more components/steps, and two or more components/steps or partial operations of the components/steps may also be combined into a new component/step to achieve the purpose of the embodiment of the present invention.
The above-described method according to an embodiment of the present invention may be implemented in hardware or firmware, or as software or computer code that can be stored in a recording medium such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk, or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium, downloaded over a network, and stored in a local recording medium, so that the method described herein can be executed from such software stored on a recording medium using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware such as an ASIC or FPGA. It will be appreciated that the computer, processor, microcontroller, or programmable hardware includes memory components (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the search methods described herein. Further, when a general-purpose computer accesses code for implementing the search methods shown herein, execution of the code transforms the general-purpose computer into a special-purpose computer for performing those methods.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The above embodiments are intended only to illustrate, not to limit, the embodiments of the present invention. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the embodiments of the present invention, so all equivalent technical solutions also fall within the scope of the embodiments of the present invention, and the scope of patent protection of the embodiments of the present invention should be defined by the claims.
Claims (50)
1. A search method, comprising:
generating a search request according to the search keyword;
according to the feedback of the search request, acquiring information of a target object matched with the search keyword and scene information of a scene matched with the search keyword;
and displaying the acquired information of the target object and the scene information of the scene.
2. The method of claim 1, wherein the method further comprises:
receiving a selection operation of the scene information of the displayed scene;
obtaining information of a subordinate node of the scene selected by the selecting operation, wherein the subordinate node includes: a lower level sub-scene node, or a lower level category node;
and displaying the information of the subordinate node so as to acquire the information of the target object under the scene through the information of the subordinate node.
3. The method according to claim 2, wherein the obtaining, by the information of the lower node, information of a target object in the scene includes:
receiving a selection operation of the displayed information of the subordinate node;
if the subordinate node selected by the selection operation is the subordinate category node, displaying information of a target object corresponding to the subordinate category node;
and if the lower node selected by the selection operation is the lower sub-scene node, displaying the information of the lower node of the lower sub-scene node, and returning to the process of receiving the selection operation of the displayed information of the lower node to continue execution, wherein the lower node of the lower sub-scene node is the sub-scene node or the category node.
4. The method of claim 2, wherein when the subordinate node comprises the subordinate sub-scene node, the presenting information of the subordinate node comprises:
displaying personalized information options generated according to the personalized information of the user; and, according to a selection operation of the user on a personalized information option, determining, from the acquired subordinate sub-scene nodes, a subordinate sub-scene node matched with the personalized information option selected by the selection operation, and displaying the scene information of the matched subordinate sub-scene node;
or,
5. The method of claim 2, wherein,
the method further comprises: displaying scene information of an associated scene associated with the lower-level sub-scene node.
6. The method of any of claims 1-5, wherein the method further comprises:
acquiring interactive information in a preset format, wherein the preset format comprises at least one of the following: a text format, an audio and video format, and a picture format;
and displaying the interactive information.
7. The method of claim 6, wherein the interaction information is interaction information of a predetermined promotional activity.
8. The method of any one of claims 1-5,
the search keyword is used for indicating a target object to be searched or a category to which the target object belongs;
scene information of the scene is determined based on the search keyword and the scene auxiliary information, wherein,
the scene auxiliary information includes at least one of: search time information, search position information, and personalized information of the user.
9. A search method, comprising:
receiving a commodity search keyword input by a user through a search input box of a shopping platform, and generating a search request according to the commodity search keyword;
according to the feedback of the search request, acquiring information of a plurality of commodities matched with the commodity search keyword, and acquiring scene information of at least one shopping scene matched with the commodity search keyword;
and displaying the obtained information of the commodities and the scene information of the shopping scene.
10. The method of claim 9, wherein the obtaining scene information of at least one shopping scene matching the item search keyword comprises:
and acquiring scene information of at least one shopping scene matched with the commodity search keyword and commodity category information corresponding to each shopping scene.
11. The method of claim 10, wherein the displaying the obtained information of the merchandise and the scene information of the shopping scene comprises:
displaying the obtained information of the commodity, and displaying scene information of the at least one shopping scene by using at least one scene card, wherein each scene card displays the scene information of one shopping scene; and displaying commodity category information corresponding to the current shopping scene in each scene card.
12. The method of claim 11, wherein the method further comprises:
receiving a trigger operation on the displayed commodity category information;
and displaying all commodity category information corresponding to the current shopping scene according to the triggering operation, and/or displaying information of a plurality of commodities corresponding to the commodity category information triggered by the triggering operation.
13. The method of claim 12, wherein the method further comprises:
showing the option information of the target intention person determined according to the personalized information of the user;
according to the selection operation of the user on the option information, updating at least one of the following information: scene information of the displayed shopping scene, commodity category information and commodity information.
14. A search method, comprising:
acquiring a search keyword from a received search request;
determining the information of the matched target object and the scene information of the scene according to the search keyword;
and feeding back the information of the target object and the scene information to a sender of the search request.
15. The method of claim 14, wherein,
the method further comprises the following steps: acquiring scene auxiliary information, wherein the scene auxiliary information is used for indicating a search scene;
the determining the information of the matched target object and the scene information of the scene according to the search keyword comprises the following steps: and determining the matched information of the target object and the scene information of the scene according to the search keyword and the scene auxiliary information.
16. The method of claim 15, wherein the scene auxiliary information comprises at least one of: search time information, search position information, and personalized information of the user.
17. The method of claim 16, wherein, when the scene auxiliary information includes the search time information, the determining information of the matched target object and scene information of the scene according to the search keyword and the scene auxiliary information comprises:
determining information of a corresponding festival scene according to the search time information;
and determining the information of the matched target object according to the search keyword, and determining the scene information of the scene matched with the search keyword according to the information of the holiday scene.
18. The method of claim 16, wherein, when the scene auxiliary information includes the search position information, the determining information of the matched target object and scene information of the scene according to the search keyword and the scene auxiliary information comprises:
determining information of a corresponding geographic range according to the search position information;
and determining the information of the matched target object according to the search keyword, and determining the scene information of the scene matched with the search keyword according to the information of the geographic range.
19. The method of claim 16, wherein, when the scene auxiliary information includes the personalized information of the user, the determining information of the matched target object and scene information of the scene according to the search keyword and the scene auxiliary information comprises:
determining preference information and/or relationship information of the user according to the personalized information of the user;
and determining information of the matched target object according to the search keyword, and determining scene information of a scene matched with the search keyword according to the preference information and/or the information of the relationship person of the user.
20. The method of any one of claims 14-19,
the method further comprises the following steps: determining scene information of an associated scene associated with the scene;
the feeding back the information of the target object and the scene information to the sender of the search request includes: and feeding back the information of the target object, the scene information of the scene and the scene information of the associated scene to a sender of the search request.
21. The method according to any one of claims 14-19, wherein the determining information of the matching target object and scene information of the scene according to the search keyword comprises:
determining a corresponding scene tree according to the search keyword, wherein the scene tree comprises scene nodes, category nodes and target object nodes, the scene nodes comprise at least two levels of nodes, a root node of the scene tree is a scene node, the target object nodes of the scene tree are leaf nodes, and the category nodes are intermediate nodes between the scene nodes and the target object nodes;
determining a node corresponding to the search keyword in the scene tree, and determining scene information of a scene node corresponding to the node as scene information of a scene matched with the search keyword.
22. The method of claim 21, wherein the method further comprises:
and determining other scene nodes in the scene level where the scene node is positioned as the associated scene nodes.
23. The method of claim 21, wherein the method further comprises:
receiving a scene trigger message, wherein the scene trigger message is used for indicating that the scene information is triggered;
acquiring information of lower nodes of nodes corresponding to the scene information according to the triggered node relationship of the scene information in the scene tree;
and sending the information of the lower node to the sender.
24. The method of claim 21, wherein target object nodes belonging to the same category node have the same object label.
25. A search apparatus, comprising:
the generating module is used for generating a search request according to the search keyword;
the first acquisition module is used for acquiring information of a target object matched with the search keyword and scene information of a scene matched with the search keyword according to the feedback of the search request;
and the display module is used for displaying the acquired information of the target object and the scene information of the scene.
26. The apparatus of claim 25, wherein the apparatus further comprises:
the lower node module is used for receiving the selection operation of the scene information of the displayed scene; obtaining information of a subordinate node of the scene selected by the selecting operation, wherein the subordinate node includes: a lower level sub-scene node, or a lower level category node; and displaying the information of the subordinate node so as to acquire the information of the target object under the scene through the information of the subordinate node.
27. The apparatus of claim 26, wherein the subordinate node module, when obtaining information of a target object in the scene through the information of the subordinate node:
receiving a selection operation of the displayed information of the subordinate node; if the subordinate node selected by the selection operation is the subordinate category node, displaying information of a target object corresponding to the subordinate category node; and if the lower node selected by the selection operation is the lower sub-scene node, displaying the information of the lower node of the lower sub-scene node, and returning to the process of receiving the selection operation of the displayed information of the lower node to continue execution, wherein the lower node of the lower sub-scene node is the sub-scene node or the category node.
28. The apparatus of claim 26, wherein when the subordinate node comprises the subordinate sub-scene node, the subordinate node module exposes information of the subordinate node by:
displaying personalized information options generated according to the personalized information of the user; and, according to a selection operation of the user on a personalized information option, determining, from the acquired subordinate sub-scene nodes, a subordinate sub-scene node matched with the personalized information option selected by the selection operation, and displaying the scene information of the matched subordinate sub-scene node;
or,
displaying the acquired scene information of the subordinate sub-scene nodes and personalized information options generated according to the personalized information of the user; and, according to the selection operation of the user on a personalized information option, updating the displayed subordinate sub-scene nodes and displaying the scene information of the updated subordinate sub-scene nodes.
29. The apparatus of claim 26,
the lower node module is further configured to display scene information of an associated scene associated with the lower-level sub-scene node.
30. The apparatus of any one of claims 25-29, wherein the apparatus further comprises:
the interactive module is used for acquiring interactive information in a preset format, wherein the preset format comprises at least one of the following: a text format, an audio and video format, and a picture format;
the display module is further used for displaying the interactive information.
31. The apparatus of claim 30, wherein the interaction information is interaction information of a predetermined promotional activity.
32. The apparatus of any one of claims 25-29,
the search keyword is used for indicating a target object to be searched or a category to which the target object belongs;
the scene information of the scene is determined according to the search keyword and the scene auxiliary information, wherein the scene auxiliary information comprises at least one of the following: search time information, search position information, and personalized information of the user.
33. A search apparatus, comprising:
the request generation module is used for receiving a commodity search keyword input by a user through a search input box of a shopping platform and generating a search request according to the commodity search keyword;
the information acquisition module is used for acquiring information of a plurality of commodities matched with the commodity search keyword according to the feedback of the search request and acquiring scene information of at least one shopping scene matched with the commodity search keyword;
and the information display module is used for displaying the acquired information of the commodities and the scene information of the shopping scene.
34. The apparatus of claim 33, wherein the information obtaining module is configured to obtain information of a plurality of goods matching the goods search keyword, and obtain scene information of at least one shopping scene matching the goods search keyword, and goods category information corresponding to each shopping scene according to the feedback on the search request.
35. The apparatus of claim 34, wherein the information displaying module is configured to display the obtained information of the merchandise and display scene information of the at least one shopping scene using at least one scene card, wherein each scene card displays scene information of one shopping scene; and displaying commodity category information corresponding to the current shopping scene in each scene card.
36. The apparatus of claim 35, wherein the apparatus further comprises:
the second trigger module is used for receiving trigger operation on the displayed commodity category information; and displaying all commodity category information corresponding to the current shopping scene according to the triggering operation, and/or displaying information of a plurality of commodities corresponding to the commodity category information triggered by the triggering operation.
37. The apparatus of claim 36, wherein the apparatus further comprises:
the intention module is used for displaying the option information of the target intention person determined according to the personalized information of the user; according to the selection operation of the user on the option information, updating at least one of the following information: scene information of the displayed shopping scene, commodity category information and commodity information.
38. A search apparatus, comprising:
the second acquisition module is used for acquiring search keywords from the received search request;
the first determining module is used for determining the information of the matched target object and the scene information of the scene according to the search keyword;
and the sending module is used for feeding back the information of the target object and the scene information to a sender of the search request.
39. The apparatus of claim 38,
the second obtaining module is further configured to obtain scene auxiliary information, where the scene auxiliary information is used to indicate a search scene;
and the first determining module is used for determining the information of the matched target object and the scene information of the scene according to the search keyword and the scene auxiliary information.
40. The apparatus of claim 39, wherein the scene auxiliary information comprises at least one of: search time information, search position information, and personalized information of the user.
41. The apparatus of claim 40, wherein, when the scene auxiliary information comprises the search time information, the first determining module comprises:
the time information module is used for determining the information of the corresponding festival scene according to the search time information; and determining the information of the matched target object according to the search keyword, and determining the scene information of the scene matched with the search keyword according to the information of the holiday scene.
42. The apparatus of claim 40, wherein, when the scene auxiliary information comprises the search position information, the first determining module comprises:
the position information module is used for determining the information of the corresponding geographic range according to the search position information; and determining the information of the matched target object according to the search keyword, and determining the scene information of the scene matched with the search keyword according to the information of the geographic range.
43. The apparatus of claim 40, wherein, when the scene auxiliary information comprises the personalized information of the user, the first determining module comprises:
the personal information module is used for determining the preference information and/or the relation information of the user according to the personal information of the user; and determining information of the matched target object according to the search keyword, and determining scene information of a scene matched with the search keyword according to the preference information and/or the information of the relationship person of the user.
44. The apparatus of any one of claims 38-43,
the first determining module is further configured to determine scene information of an associated scene associated with the scene;
and the sending module is used for feeding back the information of the target object, the scene information of the scene and the scene information of the associated scene to a sender of the search request.
45. The apparatus according to any one of claims 38 to 43, wherein the first determining module is configured to determine a corresponding scene tree according to the search keyword, wherein the scene tree includes scene nodes, category nodes, and target object nodes, the scene nodes include at least two levels of nodes, a root node of the scene tree is a scene node, the target object nodes of the scene tree are leaf nodes, and the category nodes are intermediate nodes between the scene node and the target object nodes; determining a node corresponding to the search keyword in the scene tree, and determining scene information of a scene node corresponding to the node as scene information of a scene matched with the search keyword.
46. The apparatus of claim 45, wherein the apparatus further comprises:
and the second determining module is used for determining other scene nodes in the scene level where the scene node is located as the associated scene node.
47. The apparatus of claim 45, wherein the apparatus further comprises:
a second trigger module, configured to receive a scenario trigger message, where the scenario trigger message is used to indicate that the scenario information is triggered; acquiring information of lower nodes of nodes corresponding to the scene information according to the triggered node relationship of the scene information in the scene tree; and sending the information of the lower node to the sender.
48. The apparatus of claim 45, wherein target object nodes belonging to the same category node have the same object label.
49. An electronic device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another via the communication bus; and
the memory is configured to store at least one executable instruction that causes the processor to perform the operations corresponding to the search method of any one of claims 1-8, 9-13, or 14-24.
50. A computer storage medium having stored thereon a computer program which, when executed by a processor, implements the search method of any one of claims 1-8, 9-13, or 14-24.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910235863.XA CN111753180A (en) | 2019-03-27 | 2019-03-27 | Search method, search device, electronic equipment and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111753180A true CN111753180A (en) | 2020-10-09 |
Family
ID=72671962
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910235863.XA Pending CN111753180A (en) | 2019-03-27 | 2019-03-27 | Search method, search device, electronic equipment and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111753180A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102591890A (en) * | 2011-01-17 | 2012-07-18 | 腾讯科技(深圳)有限公司 | Method for displaying search information and search information display device |
CN107784029A (en) * | 2016-08-31 | 2018-03-09 | 阿里巴巴集团控股有限公司 | Generation prompting keyword, the method for establishing index relative, server and client side |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113763123A (en) * | 2021-08-12 | 2021-12-07 | 阿里巴巴(中国)有限公司 | Commodity recommendation and search method, commodity recommendation and search equipment and storage medium |
CN114372195A (en) * | 2021-12-16 | 2022-04-19 | 阿里巴巴(中国)有限公司 | Commodity search processing method and electronic equipment |
CN114004190A (en) * | 2022-01-05 | 2022-02-01 | 芯行纪科技有限公司 | Method for multi-level information acquisition and extensible operation based on physical layout |
CN114004190B (en) * | 2022-01-05 | 2022-05-13 | 芯行纪科技有限公司 | Method for multi-level information acquisition and extensible operation based on physical layout |
WO2023207451A1 (en) * | 2022-04-29 | 2023-11-02 | 北京字节跳动网络技术有限公司 | Search result display method and device, and search request processing method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10073860B2 (en) | Generating visualizations from keyword searches of color palettes | |
CN111753180A (en) | Search method, search device, electronic equipment and computer storage medium | |
US9792303B2 (en) | Identifying data from keyword searches of color palettes and keyword trends | |
US9898487B2 (en) | Determining color names from keyword searches of color palettes | |
US9922050B2 (en) | Identifying data from keyword searches of color palettes and color palette trends | |
US9607010B1 (en) | Techniques for shape-based search of content | |
CN113536136B (en) | Method, device and equipment for realizing search | |
US20150378999A1 (en) | Determining affiliated colors from keyword searches of color palettes | |
US20150379005A1 (en) | Identifying data from keyword searches of color palettes | |
CN113420247A (en) | Page display method and device, electronic equipment, storage medium and program product | |
CN110636325B (en) | Method and device for sharing push information on live broadcast platform and storage medium | |
WO2018126740A1 (en) | Method and device for pushing information | |
CN111680221A (en) | Information recommendation method, device, equipment and computer readable storage medium | |
US20190325497A1 (en) | Server apparatus, terminal apparatus, and information processing method | |
JP2009252152A (en) | Local information wireless distribution method and apparatus, and the computer-readable recording medium | |
US20140108555A1 (en) | Method and apparatus for identifying network functions based on user data | |
CN108521587A (en) | Short method for processing video frequency, device and mobile terminal | |
CN111582979A (en) | Clothing matching recommendation method and device and electronic equipment | |
US8725795B1 (en) | Content segment optimization techniques | |
CN111752982A (en) | Information processing method and device | |
CN113297468B (en) | Information display, recommendation and processing method, information recommendation system and electronic equipment | |
JP6251465B1 (en) | RECOMMENDED INFORMATION PROVIDING SYSTEM AND RECOMMENDED INFORMATION PROVIDING METHOD | |
CN109685632A (en) | Commodity automation shared system and method Internet-based | |
CN115860869A (en) | Shop information recommendation method, equipment and storage medium | |
JP6891759B2 (en) | Remote customer service program, remote customer service method and remote customer service device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||