CN109670106B - Scene-based object recommendation method and device - Google Patents

Scene-based object recommendation method and device

Info

Publication number
CN109670106B
CN109670106B
Authority
CN
China
Prior art keywords
information
retrieval
scene
recommendation
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811488831.2A
Other languages
Chinese (zh)
Other versions
CN109670106A (en)
Inventor
王群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Original Assignee
Baidu Online Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baidu Online Network Technology Beijing Co Ltd filed Critical Baidu Online Network Technology Beijing Co Ltd
Priority to CN201811488831.2A priority Critical patent/CN109670106B/en
Publication of CN109670106A publication Critical patent/CN109670106A/en
Application granted granted Critical
Publication of CN109670106B publication Critical patent/CN109670106B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0631Item recommendations

Abstract

The application provides a scene-based object recommendation method and device. The method includes: acquiring scene characteristic information in a target scene mode; acquiring one or more pieces of retrieval information of a target object; querying a preset information base according to the scene characteristic information to obtain a retrieval result matched with each piece of retrieval information; and generating a recommendation result of the target object according to the retrieval result matched with each piece of retrieval information. In this way, corresponding object recommendation can be completed for different scenes, which improves recommendation efficiency, meets user requirements, is convenient for users to use, and improves user experience.

Description

Scene-based object recommendation method and device
Technical Field
The application relates to the technical field of voice search, in particular to a scene-based object recommendation method and device.
Background
With the continuous development of internet technology, users can query various information based on the internet to meet the use requirements.
In the related art, taking supermarket shopping as an example, users with different identities have different requirements. For example, a user who is pregnant, losing weight or breastfeeding may want to know the nutrient content, calories and the like of a commodity in real time, and can only do so by reading the textual introduction on the target commodity; the process is cumbersome and the content is not professional enough.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, the application provides a scene-based object recommendation method and device, aiming to solve the technical problems in the prior art that the process of obtaining object information is cumbersome and the content is not professional enough.
In order to achieve the above object, an embodiment of a first aspect of the present application provides a scene-based object recommendation method, including:
acquiring scene characteristic information in a target scene mode;
acquiring one or more pieces of retrieval information of a target object;
querying a preset information base according to the scene characteristic information to obtain a retrieval result matched with each piece of retrieval information; and
generating a recommendation result of the target object according to the retrieval result matched with each piece of retrieval information.
According to the scene-based object recommendation method, scene characteristic information in a target scene mode is acquired; one or more pieces of retrieval information of the target object are acquired; a preset information base is queried according to the scene characteristic information to obtain a retrieval result matched with each piece of retrieval information; and a recommendation result of the target object is generated according to the retrieval result matched with each piece of retrieval information. In this way, corresponding object recommendation can be completed for different scenes, which improves recommendation efficiency, meets user requirements, is convenient for users to use, and improves user experience.
To achieve the above object, a second aspect of the present application provides a scene-based object recommendation apparatus, including:
the first acquisition module is used for acquiring scene characteristic information in a target scene mode;
the second acquisition module is used for acquiring one or more pieces of retrieval information of the target object;
the third acquisition module is used for inquiring a preset information base according to the scene characteristic information and acquiring a retrieval result matched with each piece of retrieval information;
and the generating module is used for generating a recommendation result of the target object according to the retrieval result matched with each piece of retrieval information.
With the scene-based object recommendation apparatus, scene characteristic information in a target scene mode is acquired; one or more pieces of retrieval information of the target object are acquired; a preset information base is queried according to the scene characteristic information to obtain a retrieval result matched with each piece of retrieval information; and a recommendation result of the target object is generated according to the retrieval result matched with each piece of retrieval information. In this way, corresponding object recommendation can be completed for different scenes, which improves recommendation efficiency, meets user requirements, is convenient for users to use, and improves user experience.
To achieve the above object, a third aspect of the present application provides a computer device, including a processor and a memory, wherein the processor runs a program corresponding to executable program code stored in the memory by reading the executable program code, so as to implement the scene-based object recommendation method according to the embodiment of the first aspect.
To achieve the above object, a fourth aspect of the present application provides a non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the scene-based object recommendation method according to the first aspect.
To achieve the above object, a fifth aspect of the present application provides a computer program product, wherein instructions of the computer program product, when executed by a processor, implement the scene-based object recommendation method according to the first aspect.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a scene-based object recommendation method according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating another scenario-based object recommendation method according to an embodiment of the present application;
FIGS. 3a and 3b are exemplary diagrams of scene-based object recommendation;
fig. 4 is a schematic structural diagram of a scene-based object recommendation apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of another scene-based object recommendation apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another scene-based object recommendation apparatus provided in the embodiment of the present application; and
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
A scene-based object recommendation method and apparatus according to an embodiment of the present application are described below with reference to the drawings.
Fig. 1 is a schematic flowchart of a scene-based object recommendation method according to an embodiment of the present application.
As shown in fig. 1, the scene-based object recommendation method may include the following steps:
step 101, obtaining scene characteristic information in a target scene mode.
In practical applications, a user needs recommendations of objects that meet requirements in a specific scene. For example, when purchasing foods such as vegetables, fruits, cooked food, seafood, meat and eggs in a supermarket, the foods recommended or not recommended for women differ across specific periods such as early pregnancy, mid-pregnancy, the perinatal period, the postpartum period and lactation. The user can find this out from the textual descriptions of the various foods while shopping, but the process is cumbersome and the content is not professional enough.
In order to solve the above problems, the present application provides a scene-based object recommendation method, which ensures that corresponding object recommendations are completed for different scenes to meet user requirements.
Firstly, scene characteristic information in a target scene mode is acquired. The target scene mode can be set according to the actual application requirements of the user, such as a maternal-infant food mode, a weight-reducing food mode or a diabetes food mode. The scene characteristic information may include one or a combination of time information, place information, user preference information and climate information.
As an example, reference setting information of the target scene mode is obtained, and the scene feature information in the target scene mode is calculated according to the reference setting information.
For example, if the target scene mode is the maternal-infant food mode and the corresponding reference setting information is {1 week of pregnancy, 14:00 on November 30, 2018, Beijing}, then the scene characteristic information in the target scene mode can be calculated from the reference setting information: the time information is 14:00 on November 30, 2018, the place information is Beijing, the user preference information is food with high nutritional content suitable for early pregnancy, and so on.
It should be noted that the reference setting information may be updated automatically with the aid of a clock, geographic location information and the like. For example, as time passes, the information is automatically updated to {2 weeks of pregnancy, current time 14:00 on December 7, 2018, geographic location Beijing}, and so on, which further improves the accuracy of the recommendation result.
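A minimal sketch of how such reference setting information could be turned into scene characteristic information is shown below. The function name, the dictionary fields and the week-counting rule are illustrative assumptions, not part of the claimed implementation.

```python
from datetime import date, datetime

def compute_scene_features(reference):
    """Derive scene characteristic information from reference setting information."""
    now = datetime.now()
    # The pregnancy week auto-updates with the clock: week 1 set on 2018-11-30
    # becomes week 2 once a further week has elapsed.
    elapsed_weeks = (now.date() - reference["setting_date"]).days // 7
    current_week = reference["week_at_setting"] + elapsed_weeks
    return {
        "time": now,                        # time information
        "location": reference["location"],  # place information
        "preference": f"high-nutrition food for pregnancy week {current_week}",
    }

features = compute_scene_features({
    "mode": "maternal-infant food",
    "week_at_setting": 1,
    "setting_date": date(2018, 11, 30),
    "location": "Beijing",
})
```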
Step 102, one or more pieces of retrieval information of the target object are obtained.
Specifically, the user can select the target object, such as an apple or a carrot, according to the actual application requirement.
There are many ways to obtain one or more pieces of retrieval information of the target object. For example:
In the first example, a subject image of the target object is captured, the subject image is recognized according to a preset image recognition model, and the subject name of the target object is acquired.
For example, if the target object is an apple, a subject image of the apple is captured and recognized according to the preset image recognition model, and the subject name "apple" is obtained as the retrieval information.
In the second example, the outer packaging content of the target object is read, and one or more pieces of ingredient information of the target object are obtained.
For example, if the target object is a cookie, ingredient information of the cookie such as butter and flour is obtained as the retrieval information by reading the content of its outer package.
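The two acquisition paths above can be summarized in a short sketch. The `image_model` object, the `target` dictionary fields and the parsing rule are hypothetical placeholders for the preset image recognition model and the package-reading step.

```python
def get_retrieval_info(target, image_model=None):
    """Return one or more pieces of retrieval information for the target object."""
    if target.get("subject_image") is not None and image_model is not None:
        # First example: recognize the subject image with a preset model,
        # yielding a subject name such as "apple".
        return [image_model.predict(target["subject_image"])]
    if target.get("package_text"):
        # Second example: read the outer packaging, e.g.
        # "Ingredients: butter, flour, sucrose" -> ["butter", "flour", "sucrose"].
        _, _, tail = target["package_text"].partition(":")
        return [part.strip() for part in tail.split(",") if part.strip()]
    return []

# Usage with the second path:
infos = get_retrieval_info({"package_text": "Ingredients: butter, flour, sucrose"})
```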
Step 103, a preset information base is queried according to the scene characteristic information, and a retrieval result matched with each piece of retrieval information is obtained.
Step 104, a recommendation result of the target object is generated according to the retrieval result matched with each piece of retrieval information.
Specifically, an information base of scene characteristic information and the retrieval result corresponding to each piece of retrieval information is set in advance according to actual application requirements. The preset information base is then queried according to the scene characteristic information in the target scene mode, and a retrieval result matched with each piece of retrieval information is obtained.
For example, if the target scene mode is the maternal-infant food mode, the time information of the scene characteristic information is 14:00 on November 30, 2018, the place information is Beijing, and the user preference information is food with high nutritional content suitable for early pregnancy, then querying the preset information base according to the scene characteristic information can yield retrieval results matched with "apple", such as "this fruit is in season and rich in nutrients" and "it can be eaten in early pregnancy".
Further, a recommendation result of the target object is generated according to the retrieval result matched with each piece of retrieval information. As an example, a base score of each piece of retrieval information is generated according to its matched retrieval result; a recommendation score of each piece of retrieval information is then generated according to the base score and the preset weight information corresponding to each retrieval result and the scene characteristic information; and finally a recommendation result of the target object is generated according to the recommendation scores of the pieces of retrieval information.
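For illustration only, one plausible reading of this scoring step is sketched below; the averaging rule, the threshold and the data layout are assumptions rather than the patented implementation.

```python
def recommend(base_scores, weights, threshold=60):
    """Combine per-retrieval-information base scores into a recommendation result."""
    # Recommendation score = preset weight (per retrieval result and scene
    # characteristic information) multiplied by the base score.
    recommendation_scores = {
        info: weights.get(info, 1.0) * score for info, score in base_scores.items()
    }
    overall = sum(recommendation_scores.values()) / max(len(recommendation_scores), 1)
    return {
        "scores": recommendation_scores,
        "overall": overall,
        "verdict": "recommended" if overall >= threshold else "not recommended",
    }

# Example: each ingredient gets a base score and a stage-specific weight.
result = recommend(
    base_scores={"butter": 70, "flour": 85, "sucrose": 50},
    weights={"butter": 0.8, "flour": 1.0, "sucrose": 0.5},
)
```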
With the scene-based object recommendation method, scene characteristic information in a target scene mode is acquired; one or more pieces of retrieval information of the target object are acquired; a preset information base is queried according to the scene characteristic information to obtain a retrieval result matched with each piece of retrieval information; and a recommendation result of the target object is generated according to the retrieval result matched with each piece of retrieval information. In this way, corresponding object recommendation can be completed for different scenes, which improves recommendation efficiency, meets user requirements, is convenient for users to use, and improves user experience.
Fig. 2 is a schematic flowchart of another scene-based object recommendation method according to an embodiment of the present application.
As shown in fig. 2, the scene-based object recommendation method may include the following steps:
step 201, obtaining reference setting information of the target scene mode, and calculating scene characteristic information in the target scene mode according to the reference setting information.
Specifically, corresponding reference setting information is set for different target scene modes, which facilitates accurate acquisition of the scene characteristic information, for example {1 week of pregnancy, current time 14:00 on November 30, 2018, geographic location Heihai District, Beijing City}, and so on.
Step 202, capturing a subject image of the target object, recognizing the subject image according to a preset image recognition model, and acquiring the subject name of the target object as the retrieval information.
The subject image can be obtained by calling a camera to photograph the target object; for example, a carrot seen in the supermarket can be photographed directly with a mobile phone to obtain a subject image of the carrot.
The image recognition model is preset and can recognize the subject name corresponding to the subject image. Thus, the subject name "carrot" is obtained by recognizing the subject image of the carrot with the preset image recognition model.
Step 203, setting a retrieval branch corresponding to each scene mode, and establishing, in the information base, the knowledge content corresponding to each retrieval branch according to the retrieval branches corresponding to the scene modes.
Step 204, querying the information base according to the scene characteristic information, and obtaining, for each retrieval branch, a retrieval result matched with the scene characteristic information and each piece of retrieval information.
Specifically, different retrieval branches can be set for different scene modes to further improve recommendation accuracy and better meet user requirements. For example, the maternal-infant food mode corresponds to one retrieval branch, the weight-reducing food mode corresponds to another retrieval branch, and so on, and the knowledge content corresponding to each retrieval branch is established in the information base: the retrieval branch of the maternal-infant food mode mainly contains pregnancy nutrition content, the retrieval branch of the weight-reducing food mode mainly contains healthy weight-loss food content, and so on.
Therefore, the information base is queried according to the scene characteristic information, and the retrieval result corresponding to each retrieval branch and matched with the scene characteristic information and each piece of retrieval information can be obtained.
Specifically, the retrieval information is searched in the information base in combination with the scene characteristic information, and the retrieval results corresponding to each retrieval branch and matching the scene characteristic information and each piece of retrieval information are obtained, such as results for purchase decision (whether the currently scanned food is suitable for purchase), nutritional analysis (the nutritional ingredients contained in the currently scanned food), taboo analysis (the taboo ingredients contained in the currently scanned food and their effects at the current stage) and functional analysis (what the currently scanned food helps promote).
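The branch-specific information base described above could, for instance, be organized as nested mappings; the structure, entries and function name below are illustrative assumptions.

```python
# Each scene mode owns one retrieval branch; each branch stores knowledge
# content keyed by retrieval information (all entries are examples).
INFO_BASE = {
    "maternal-infant food": {
        "pear": {
            "purchase_decision": "suitable for purchase",
            "nutrition": "in season, rich in vitamins",
            "taboo": "none in early pregnancy",
            "function": "rich in water and dietary fibre",
        },
    },
    "weight-reducing food": {
        "pear": {
            "purchase_decision": "suitable for purchase",
            "nutrition": "low calorie, high fibre",
        },
    },
}

def query_branch(scene_mode, scene_features, retrieval_infos):
    """Query the retrieval branch of the given scene mode for each piece of
    retrieval information; the scene characteristic information could further
    filter or annotate the matched entries (omitted here for brevity)."""
    branch = INFO_BASE.get(scene_mode, {})
    return {info: branch.get(info, {}) for info in retrieval_infos}

results = query_branch("maternal-infant food", {"stage": "early pregnancy"}, ["pear"])
```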
Step 205, generating a base score of each piece of retrieval information according to its matched retrieval result, generating a recommendation score of each piece of retrieval information according to the base score and the preset weight information corresponding to each retrieval result and the scene characteristic information, and generating a recommendation result of the target object according to the recommendation scores of the pieces of retrieval information.
For example, the target scene mode is the maternal-infant food mode, the time information of the scene characteristic information is 14:00 on November 30, 2018, the place information is Beijing, and the user preference information is food with high nutritional content suitable for early pregnancy. Retrieval results matched with butter, flour and sucrose, such as "can be eaten in early pregnancy" and "should not be eaten in excess in early pregnancy", are obtained by querying the preset information base according to the scene characteristic information. Butter, flour and sucrose are then each given a different base score; a recommendation score of each piece of retrieval information is generated according to the base score and the preset weight information corresponding to each retrieval result and the scene characteristic information; and finally a recommendation result of the target object is generated according to the recommendation scores.
For example, early pregnancy, mid-pregnancy and late pregnancy in the scene characteristic information correspond to different pieces of weight information. If the base score corresponding to the retrieval information "carrot" is, say, 80, recommendation scores of 40, 50 and 60 may be generated for the carrot in the three stages respectively, and the recommendation result of the carrot is generated from these recommendation scores. The weight setting therefore further improves recommendation efficiency and accuracy and improves user experience.
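These figures are consistent with a multiplicative weighting of the base score; the stage weights below are back-calculated from the example for illustration and are not stated explicitly in the disclosure.

```latex
\text{recommendation score} = w_{\text{stage}} \times \text{base score},\qquad
0.50 \times 80 = 40,\quad 0.625 \times 80 = 50,\quad 0.75 \times 80 = 60
```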
As a scenario example, as shown in fig. 3a, the target scene mode is the maternal-infant food mode, the target object is a pear, and the retrieval results matched with "pear" include purchase decision (whether the currently scanned food is suitable for purchase), nutritional analysis (the nutritional ingredients contained in the currently scanned food), taboo analysis (the taboo ingredients contained in the currently scanned food and their effects at the current stage), functional analysis (what the currently scanned food helps promote), preparation recommendation (suggested cooking methods and food pairings) and so on.
As another scenario example, as shown in fig. 3b, the outer packaging content of a butter cookie is read, one or more pieces of ingredient information of the butter cookie are obtained, the suitability of each ingredient is analyzed, the retrieval results are displayed item by item, and finally an overall recommendation result is given.
It should be noted that a preset associated information base may also be queried according to the scene characteristic information to obtain recommended objects related to the target scene mode. For example, if the target scene mode is the maternal-infant food mode and the user preference information is food with high nutritional content suitable for early pregnancy, foods such as avocado and walnut are given as recommended objects related to the maternal-infant food mode, which further meets user requirements and improves user experience.
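A small sketch of this associated lookup is given below; the association base and its entries merely mirror the avocado/walnut example above and are not a prescribed data model.

```python
# Hypothetical associated-information base mapping each scene mode to
# recommended objects related to it.
ASSOCIATION_BASE = {
    "maternal-infant food": ["avocado", "walnut"],
    "weight-reducing food": ["oatmeal", "celery"],
}

def recommend_related(scene_mode):
    """Query the associated information base for recommended objects related
    to the target scene mode."""
    return ASSOCIATION_BASE.get(scene_mode, [])
```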
With the scene-based object recommendation method of this embodiment, reference setting information of the target scene mode is acquired and the scene characteristic information in the target scene mode is calculated from it; a subject image of the target object is captured and recognized with a preset image recognition model, and the subject name of the target object is acquired as the retrieval information; retrieval branches corresponding to the scene modes are set and the knowledge content corresponding to each retrieval branch is established in the information base; the information base is queried according to the scene characteristic information to obtain, for each retrieval branch, the retrieval results matched with the scene characteristic information and each piece of retrieval information; a base score of each piece of retrieval information is generated from its matched retrieval result; a recommendation score of each piece of retrieval information is generated according to the base score and the preset weight information corresponding to each retrieval result and the scene characteristic information; and a recommendation result of the target object is generated according to the recommendation scores. In this way, corresponding object recommendation can be completed for different scenes, which improves recommendation efficiency, meets user requirements, is convenient for users to use, and improves user experience.
In order to implement the above embodiments, the present application further provides a scene-based object recommendation apparatus.
Fig. 4 is a schematic structural diagram of a scene-based object recommendation apparatus according to an embodiment of the present application.
As shown in fig. 4, the scene-based object recommendation apparatus 40 may include: a first acquisition module 410, a second acquisition module 420, a third acquisition module 430, and a generation module 440.
the first obtaining module 410 is configured to obtain scene characteristic information in a target scene mode.
The second obtaining module 420 is configured to obtain one or more pieces of retrieval information of the target object.
The third obtaining module 430 is configured to query a preset information base according to the scene feature information, and obtain a retrieval result matched with each piece of retrieval information.
The generating module 440 is configured to generate a recommendation result of the target object according to the retrieval result matched with each piece of retrieval information.
In an embodiment of the present application, the first obtaining module 410 is specifically configured to: acquiring reference setting information of a target scene mode; and calculating scene characteristic information in the target scene mode according to the reference setting information.
In one embodiment of the present application, the scene characteristic information includes: one or a combination of several of time information, place information, user preference information and climate information.
In an embodiment of the present application, the second obtaining module 420 is specifically configured to: capturing a subject image of the target object; and recognizing the subject image according to a preset image recognition model to acquire the subject name of the target object.
In an embodiment of the present application, the second obtaining module 420 is specifically configured to: reading the outer packaging content of the target object and acquiring one or more pieces of ingredient information of the target object.
In an embodiment of the present application, the generating module 440 is specifically configured to: generating a base score of each piece of retrieval information according to its matched retrieval result; generating a recommendation score of each piece of retrieval information according to the base score and the preset weight information corresponding to each retrieval result and the scene characteristic information; and generating a recommendation result of the target object according to the recommendation scores of the pieces of retrieval information.
In a possible implementation manner of the embodiment of the present application, as shown in fig. 5, on the basis of the embodiment shown in fig. 4, the scene-based object recommendation apparatus 40 further includes: a setting module 450 and an establishing module 460.
The setting module 450 is configured to set a retrieval branch corresponding to each scene mode.
The establishing module 460 is configured to establish knowledge content corresponding to each retrieval branch in the information base according to the retrieval branch corresponding to each scene mode.
The third obtaining module 430 is specifically configured to query the information base according to the scene feature information, and obtain a search result corresponding to each search branch and matching the scene feature information and each search information.
In a possible implementation manner of the embodiment of the present application, as shown in fig. 6, on the basis of the embodiment shown in fig. 4, the scene-based object recommendation apparatus 40 further includes: a fourth acquisition module 470.
The fourth obtaining module 470 is configured to query a preset association information base according to the scene characteristic information, and obtain a recommended object related to the target scene mode.
It should be noted that the foregoing explanation of the embodiment of the method for recommending objects based on a scene is also applicable to the apparatus for recommending objects based on a scene in this embodiment, and the implementation principle is similar, and is not described herein again.
With the scene-based object recommendation apparatus, scene characteristic information in a target scene mode is acquired; one or more pieces of retrieval information of the target object are acquired; a preset information base is queried according to the scene characteristic information to obtain a retrieval result matched with each piece of retrieval information; and a recommendation result of the target object is generated according to the retrieval result matched with each piece of retrieval information. In this way, corresponding object recommendation can be completed for different scenes, which improves recommendation efficiency, meets user requirements, is convenient for users to use, and improves user experience.
In order to implement the above embodiments, the present application also provides a computer device, including a processor and a memory. The processor runs a program corresponding to executable program code stored in the memory by reading the executable program code, so as to implement the scene-based object recommendation method described in the foregoing embodiments.
FIG. 7 is a block diagram of a computer device provided in an embodiment of the present application, illustrating an exemplary computer device 90 suitable for implementing embodiments of the present application. The computer device 90 shown in fig. 7 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in fig. 7, the computer device 90 is in the form of a general purpose computer device. The components of computer device 90 may include, but are not limited to: one or more processors or processing units 906, a system memory 910, and a bus 908 that couples the various system components (including the system memory 910 and the processing unit 906).
Bus 908 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. These architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus, to name a few.
Computer device 90 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 90 and includes both volatile and nonvolatile media, removable and non-removable media.
The system Memory 910 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) 911 and/or cache Memory 912. The computer device 90 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 913 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 7, and commonly referred to as a "hard disk drive"). Although not shown in FIG. 7, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact disk Read Only Memory (CD-ROM), a Digital versatile disk Read Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 908 by one or more data media interfaces. System memory 910 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
Program/utility 914 having a set (at least one) of program modules 9140 may be stored, for example, in system memory 910, such program modules 9140 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which or some combination of these examples may comprise an implementation of a network environment. Program modules 9140 generally perform the functions and/or methods of embodiments described herein.
The computer device 90 may also communicate with one or more external devices 10 (e.g., a keyboard, a pointing device, a display 100, etc.), with one or more devices that enable a user to interact with the computer device 90, and/or with any devices (e.g., a network card, a modem, etc.) that enable the computer device 90 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 902. Moreover, the computer device 90 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 900. As shown in FIG. 7, the network adapter 900 communicates with the other modules of the computer device 90 via the bus 908. It should be appreciated that although not shown in FIG. 7, other hardware and/or software modules may be used in conjunction with the computer device 90, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 906 executes various functional applications and data processing by running programs stored in the system memory 910, for example implementing the scene-based object recommendation method mentioned in the foregoing embodiments.
In order to implement the foregoing embodiments, the present application also proposes a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the scene-based object recommendation method described in the foregoing embodiments.
In order to implement the foregoing embodiments, the present application also proposes a computer program product; when the instructions in the computer program product are executed by a processor, the scene-based object recommendation method described in the foregoing embodiments is implemented.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A scene-based object recommendation method is characterized by comprising the following steps:
the method for acquiring the scene characteristic information in the target scene mode comprises the following steps of: acquiring reference setting information of the target scene mode; calculating scene characteristic information in the target scene mode according to the reference setting information;
acquiring one or more retrieval information of the target object;
inquiring a preset information base according to the scene characteristic information to obtain a retrieval result matched with each retrieval information;
and generating a recommendation result of the target object according to the search result matched with each piece of search information, and displaying each search result and each recommendation result.
2. The method of claim 1, wherein the scene characteristic information comprises:
one or a combination of several of time information, place information, user preference information and climate information.
3. The method of claim 1, wherein said acquiring one or more pieces of retrieval information of the target object comprises:
capturing a subject image of the target object; and
recognizing the subject image according to a preset image recognition model, and acquiring the subject name of the target object.
4. The method of claim 1, wherein said acquiring one or more pieces of retrieval information of the target object comprises:
reading the outer packaging content of the target object, and acquiring one or more pieces of ingredient information of the target object.
5. The method of claim 1, further comprising:
setting a retrieval branch corresponding to each scene mode;
establishing knowledge content corresponding to each retrieval branch in the information base according to the retrieval branch corresponding to each scene mode;
the querying a preset information base according to the scene feature information to obtain a retrieval result matched with each piece of retrieval information includes:
and querying the information base according to the scene characteristic information to obtain a retrieval result corresponding to each retrieval branch and matched with the scene characteristic information and each piece of retrieval information.
6. The method of claim 1, wherein generating the recommendation result of the target object according to the retrieval result matched with each piece of retrieval information comprises:
generating a base score of each piece of retrieval information according to the retrieval result matched with it;
generating a recommendation score of each piece of retrieval information according to the base score and preset weight information corresponding to each retrieval result and the scene characteristic information;
and generating a recommendation result of the target object according to the recommendation score of each piece of retrieval information.
7. The method of any of claims 1-6, further comprising:
and querying a preset associated information base according to the scene characteristic information to acquire recommended things related to the target scene mode.
8. A scene-based object recommendation apparatus, comprising:
the first acquisition module is specifically used for acquiring reference setting information of the target scene mode and calculating the scene characteristic information in the target scene mode according to the reference setting information;
the second acquisition module is used for acquiring one or more pieces of retrieval information of the target object;
the third acquisition module is used for inquiring a preset information base according to the scene characteristic information and acquiring a retrieval result matched with each piece of retrieval information;
and the generating module is used for generating a recommendation result of the target object according to the retrieval result matched with each piece of retrieval information, and displaying each retrieval result and the recommendation result.
9. A computer device comprising a processor and a memory;
wherein the processor runs a program corresponding to executable program code stored in the memory by reading the executable program code, so as to implement the scene-based object recommendation method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the scene-based object recommendation method according to any one of claims 1 to 7.
CN201811488831.2A 2018-12-06 2018-12-06 Scene-based object recommendation method and device Active CN109670106B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811488831.2A CN109670106B (en) 2018-12-06 2018-12-06 Scene-based object recommendation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811488831.2A CN109670106B (en) 2018-12-06 2018-12-06 Scene-based object recommendation method and device

Publications (2)

Publication Number Publication Date
CN109670106A CN109670106A (en) 2019-04-23
CN109670106B true CN109670106B (en) 2022-03-11

Family

ID=66143639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811488831.2A Active CN109670106B (en) 2018-12-06 2018-12-06 Scene-based object recommendation method and device

Country Status (1)

Country Link
CN (1) CN109670106B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110347914B (en) * 2019-06-06 2024-02-06 创新先进技术有限公司 Data processing method and device
CN111105298B (en) * 2019-12-31 2023-09-26 杭州涂鸦信息技术有限公司 Purchasing recommendation method and system based on intelligent scene of Internet of things
CN113763082A (en) * 2020-09-04 2021-12-07 北京沃东天骏信息技术有限公司 Information pushing method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103399860A (en) * 2013-07-04 2013-11-20 北京百纳威尔科技有限公司 Content display method and device
CN103778187A (en) * 2013-12-31 2014-05-07 百度(中国)有限公司 Method and device for returning search result in oriented mode
CN105022793A (en) * 2015-06-29 2015-11-04 成都亿邻通科技有限公司 Image object identification method
CN105634881A (en) * 2014-10-30 2016-06-01 腾讯科技(深圳)有限公司 Application scene recommending method and device
CN107016163A (en) * 2017-03-07 2017-08-04 北京小米移动软件有限公司 Floristics recommends method and device
CN107480265A (en) * 2017-08-17 2017-12-15 广州视源电子科技股份有限公司 Data recommendation method, device, equipment and storage medium

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8934666B2 (en) * 2008-10-10 2015-01-13 Adc Automotive Distance Control Systems Gmbh Method and device for analyzing surrounding objects and/or surrounding scenes, such as for object and scene class segmenting
CN105227973A (en) * 2014-06-27 2016-01-06 中兴通讯股份有限公司 Based on information recommendation method and the device of scene Recognition
CN104268154A (en) * 2014-09-02 2015-01-07 百度在线网络技术(北京)有限公司 Recommended information providing method and device
US9407815B2 (en) * 2014-11-17 2016-08-02 International Business Machines Corporation Location aware photograph recommendation notification
CN104598602B (en) * 2015-01-27 2019-04-26 百度在线网络技术(北京)有限公司 Pass through computer implemented information recommendation method and device based on scene
CN104866530A (en) * 2015-04-27 2015-08-26 宁波网传媒有限公司 Recommendation system and method based on slider scores
CN105142104A (en) * 2015-06-19 2015-12-09 北京奇虎科技有限公司 Method, device and system for providing recommendation information
CN105868360A (en) * 2016-03-29 2016-08-17 乐视控股(北京)有限公司 Content recommendation method and device based on voice recognition
CN106777067A (en) * 2016-11-16 2017-05-31 中国科学院上海高等研究院 Information recommendation method and system
CN106528834B (en) * 2016-11-17 2020-02-04 百度在线网络技术(北京)有限公司 Picture resource pushing method and device based on artificial intelligence
CN106776999A (en) * 2016-12-07 2017-05-31 北京小米移动软件有限公司 Multi-medium data recommends method and device
CN107592451A (en) * 2017-08-31 2018-01-16 努比亚技术有限公司 A kind of multi-mode auxiliary photo-taking method, apparatus and computer-readable recording medium
CN107609198B (en) * 2017-10-20 2020-06-12 咪咕互动娱乐有限公司 Recommendation method and device and computer readable storage medium
CN107920163A (en) * 2017-11-14 2018-04-17 维沃移动通信有限公司 A kind of indicating mode switching method and mobile terminal, cloud server
CN107992602A (en) * 2017-12-14 2018-05-04 北京百度网讯科技有限公司 Search result methods of exhibiting and device
CN108897785A (en) * 2018-06-08 2018-11-27 Oppo(重庆)智能科技有限公司 Search for content recommendation method, device, terminal device and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103399860A (en) * 2013-07-04 2013-11-20 北京百纳威尔科技有限公司 Content display method and device
CN103778187A (en) * 2013-12-31 2014-05-07 百度(中国)有限公司 Method and device for returning search result in oriented mode
CN105634881A (en) * 2014-10-30 2016-06-01 腾讯科技(深圳)有限公司 Application scene recommending method and device
CN105022793A (en) * 2015-06-29 2015-11-04 成都亿邻通科技有限公司 Image object identification method
CN107016163A (en) * 2017-03-07 2017-08-04 北京小米移动软件有限公司 Floristics recommends method and device
CN107480265A (en) * 2017-08-17 2017-12-15 广州视源电子科技股份有限公司 Data recommendation method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN109670106A (en) 2019-04-23

Similar Documents

Publication Publication Date Title
CN109670106B (en) Scene-based object recommendation method and device
Anthimopoulos et al. Computer vision-based carbohydrate estimation for type 1 patients with diabetes using smartphones
US10441112B1 (en) Food preparation system and method using a scale that allows and stores modifications to recipes based on a measured change to one of its ingredients
Jiang et al. Food nutrition visualization on Google glass: Design tradeoff and field evaluation
CN110021404A (en) For handling the electronic equipment and method of information relevant to food
CN109740571A (en) The method of Image Acquisition, the method, apparatus of image procossing and electronic equipment
US20200342977A1 (en) System, computer-readable storage medium, and method
US20210313039A1 (en) Systems and Methods for Diet Quality Photo Navigation Utilizing Dietary Fingerprints for Diet Assessment
US20140214618A1 (en) In-store customer scan process including nutritional information
CN112464013A (en) Information pushing method and device, electronic equipment and storage medium
JP2022514185A (en) Recipe generation based on neural network
CN110020609A (en) A kind of method, system and storage medium for analyzing food
CN110610149A (en) Information processing method and device and computer storage medium
CN107273678A (en) A kind of food nourishment composition based on smart mobile phone automatically analyzes calculating system
CN112902406B (en) Air conditioner and/or fan parameter setting method, control device and readable storage medium
CN106203466A (en) The method and apparatus of food identification
CN112016548B (en) Cover picture display method and related device
CN110062183A (en) Obtain method, apparatus, server, storage medium and the system of feed data
JP2016139319A (en) Health management device and health management program
An et al. We got nuts! use deep neural networks to classify images of common edible nuts
CN111651674A (en) Bidirectional searching method and device and electronic equipment
CN111415328B (en) Method and device for determining article analysis data and electronic equipment
CN112015936B (en) Method, device, electronic equipment and medium for generating article display diagram
US20180189992A1 (en) Systems and methods for generating an ultrasound multimedia product
JP6934001B2 (en) Image processing equipment, image processing methods, programs and recording media

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant