CN109189986B - Information recommendation method and device, electronic equipment and readable storage medium - Google Patents

Information recommendation method and device, electronic equipment and readable storage medium

Info

Publication number
CN109189986B
CN109189986B
Authority
CN
China
Prior art keywords
information
object image
real
determining
time video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810993829.4A
Other languages
Chinese (zh)
Other versions
CN109189986A (en)
Inventor
姚淼 (Yao Miao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810993829.4A priority Critical patent/CN109189986B/en
Publication of CN109189986A publication Critical patent/CN109189986A/en
Application granted granted Critical
Publication of CN109189986B publication Critical patent/CN109189986B/en

Abstract

According to the information recommendation method and apparatus, the electronic device, and the readable storage medium provided herein, a real-time video currently shot by a user terminal is acquired so as to obtain information about the objects and the environment around the user; first key information is determined from an object image in the real-time video, the first key information reflecting an object the user is paying attention to; second key information is determined from scene information corresponding to the real-time video, the second key information reflecting the retrieval direction and the data field; and current recommendation information is obtained from the first key information and the second key information. Recommendation of strongly correlated information can thus be performed by combining the scene the user is in with the object the user is paying attention to, which improves the pertinence of information recommendation and provides a good user experience.

Description

Information recommendation method and device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to an information recommendation method and apparatus, an electronic device, and a readable storage medium.
Background
With the rapid development of network technology, users can access a large amount of publicly available information, which makes daily life more convenient. Because the data volume is huge and user requirements are diverse, the information that best meets a user's current needs has to be delivered to the user by means of information recommendation.
The existing information recommendation method mainly performs association-based recommendation according to keywords input by the user. For example, when the user inputs the keyword "tomato", both recipe information for tomato and egg soup and information about how to grow tomatoes are recommended to the user.
However, in the existing information recommendation method the recommendation result depends mainly on the keywords input by the user, and the user may have difficulty choosing the most appropriate keywords, which affects the accuracy of the recommendation result. In addition, because the choice of data field is uncertain, the type of recommended result may be far from what the user actually needs; for example, the recipe for tomato and egg soup and the method for growing tomatoes belong to two completely different data fields. The existing information recommendation method is therefore not highly reliable.
Disclosure of Invention
The invention provides an information recommendation method, an information recommendation device, electronic equipment and a readable storage medium, which improve user experience and reliability of information recommendation.
According to a first aspect of the present invention, there is provided an information recommendation method, including:
acquiring a real-time video currently shot by a user terminal;
determining first key information according to an object image in the real-time video;
determining second key information according to scene information corresponding to the real-time video;
and obtaining current recommendation information according to the first key information and the second key information.
Optionally, in a possible implementation manner of the first aspect, the determining first key information according to an object image in the real-time video includes:
receiving a selection instruction input by a user for the object image;
determining the object image indicated by the selection instruction as a target object image;
and determining first key information according to the target object image.
Optionally, in another possible implementation manner of the first aspect, before the receiving a selection instruction input by a user for the object image, the method further includes:
in the real-time video, the object image is highlighted to a user.
Optionally, in yet another possible implementation manner of the first aspect, the receiving a selection instruction input by a user for the object image includes:
acquiring a finger hover movement trajectory in the real-time video;
and determining the selection instruction input by the user for the object image according to the positional relationship between the finger hover movement trajectory and the object image.
Optionally, in yet another possible implementation manner of the first aspect, the determining first key information according to the target object image includes:
determining semantic information of the target object image as first semantic information;
and taking the first semantic information and synonym information of the first semantic information as first key information.
Optionally, in yet another possible implementation manner of the first aspect, before the determining the first key information according to the image of the object in the real-time video, the method further includes:
and carrying out object identification on the video frame in the real-time video to obtain the object image.
Optionally, in another possible implementation manner of the first aspect, before determining second key information according to scene information corresponding to the real-time video, the method further includes:
acquiring current position information of the user terminal;
and determining scene information corresponding to the real-time video according to the current position information.
Optionally, in another possible implementation manner of the first aspect, before determining second key information according to scene information corresponding to the real-time video, the method further includes:
determining semantic information of an object image in the real-time video as second semantic information;
and determining scene information corresponding to the real-time video according to the second semantic information.
Optionally, in yet another possible implementation manner of the first aspect, the determining scene information corresponding to the real-time video according to the second semantic information includes:
determining a plurality of candidate scene information according to the second semantic information;
and determining scene information corresponding to the real-time video in the plurality of candidate scene information according to the first key information.
According to a second aspect of the present invention, there is provided an information recommendation apparatus comprising:
the acquisition module is used for acquiring a real-time video currently shot by the user terminal;
the first processing module is used for determining first key information according to an object image in the real-time video;
the second processing module is used for determining second key information according to the scene information corresponding to the real-time video;
and the recommending module is used for obtaining current recommending information according to the first key information and the second key information.
Optionally, in a possible implementation manner of the second aspect, the first processing module is configured to:
receiving a selection instruction input by a user for the object image; determining the object image indicated by the selection instruction as a target object image; and determining first key information according to the target object image.
Optionally, in another possible implementation manner of the second aspect, before the receiving of the selection instruction input by the user for the object image, the first processing module is further configured to:
in the real-time video, the object image is highlighted to a user.
Optionally, in yet another possible implementation manner of the second aspect, the first processing module is specifically configured to:
acquiring a finger hover movement trajectory in the real-time video; and determining the selection instruction input by the user for the object image according to the positional relationship between the finger hover movement trajectory and the object image.
Optionally, in yet another possible implementation manner of the second aspect, the first processing module is specifically configured to:
determining semantic information of the target object image as first semantic information; and taking the first semantic information and synonym information of the first semantic information as first key information.
Optionally, in yet another possible implementation manner of the second aspect, before the determining the first key information according to the object image in the real-time video, the first processing module is further configured to:
and carrying out object identification on the video frame in the real-time video to obtain the object image.
Optionally, in yet another possible implementation manner of the second aspect, before the determining, according to the scene information corresponding to the real-time video, second key information, the second processing module is further configured to:
acquiring current position information of the user terminal; and determining scene information corresponding to the real-time video according to the current position information.
Optionally, in yet another possible implementation manner of the second aspect, before the determining, according to the scene information corresponding to the real-time video, second key information, the second processing module is further configured to:
determining semantic information of an object image in the real-time video as second semantic information; and determining scene information corresponding to the real-time video according to the second semantic information.
Optionally, in yet another possible implementation manner of the second aspect, the second processing module is specifically configured to:
determining a plurality of candidate scene information according to the second semantic information; and determining scene information corresponding to the real-time video in the plurality of candidate scene information according to the first key information.
According to a third aspect of the present invention, there is provided an electronic apparatus comprising: a memory, a processor and a computer program, the computer program being stored in the memory, the processor running the computer program to perform the information recommendation method according to the first aspect of the present invention and various possible designs of the first aspect.
According to a fourth aspect of the present invention, there is provided a readable storage medium having stored therein a computer program for implementing the information recommendation method of the first aspect of the present invention and various possible designs of the first aspect when executed by a processor.
According to the information recommendation method and apparatus, the electronic device, and the readable storage medium provided herein, a real-time video currently shot by a user terminal is acquired so as to obtain information about the objects and the environment around the user; first key information is determined from an object image in the real-time video, the first key information reflecting an object the user is paying attention to; second key information is determined from scene information corresponding to the real-time video, the second key information reflecting the retrieval direction and the data field; and current recommendation information is obtained from the first key information and the second key information. Recommendation of strongly correlated information can thus be performed by combining the scene the user is in with the object the user is paying attention to, which improves the pertinence of information recommendation and provides a good user experience.
Drawings
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present invention;
fig. 2 is a schematic flow chart of an information recommendation method according to an embodiment of the present invention;
FIG. 3 is an example of an object image of a kitchen scene provided by an embodiment of the invention;
FIG. 4 is a flowchart of an alternative embodiment of step S102 in FIG. 2 according to an embodiment of the present invention;
FIG. 5 is an example of inputting a selection instruction according to an embodiment of the present invention;
FIG. 6 is another example of inputting a selection instruction according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an information recommendation apparatus according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention;
fig. 9 is an example of the electronic device shown in fig. 8 according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the internal logic of the processes, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
It should be understood that in the present application, "comprising" and "having" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that, in the present invention, "a plurality" means two or more. "And/or" merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "Comprises A, B and C" and "comprises A, B, C" mean that all three of A, B and C are included; "comprises A, B or C" means that one of A, B and C is included; "comprises A, B and/or C" means that any one, any two, or all three of A, B and C are included.
It should be understood that, in the present invention, "B corresponding to A", "A corresponds to B", or "B corresponds to A" means that B is associated with A and that B can be determined from A. Determining B from A does not mean that B is determined from A alone; B may also be determined from A and/or other information.
As used herein, "if" may be interpreted as "at … …" or "when … …" or "in response to a determination" or "in response to a detection", depending on the context.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
In the embodiments of the present invention, a video frame refers to a single picture that makes up a video. A video can be regarded as a sequence of video frames arranged in order, and playing the video can be understood as displaying the video frames of the sequence one after another; because the frames are displayed at a rate higher than the human eye can resolve, the viewer perceives a dynamic, continuously changing video picture.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present invention. The application scenario shown in fig. 1 may include a user terminal 1 and a server 2, where there may be one or more user terminals 1. The user terminal 1 may specifically be an electronic device with a shooting function, such as a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device or a personal digital assistant, and may further be an AR wearable device such as AR glasses or an AR helmet. In the scenario shown in fig. 1, the user terminal 1 may capture the video in front of the user, process it locally and then request recommendation information from the server 2; alternatively, the user terminal 1 may transmit the captured video frames to the server 2, and the server 2 returns recommendation information to the user terminal 1 according to those video frames.
In one implementation, the information recommendation process may be handled mainly by the user terminal 1. For example, the user terminal 1 may be configured with a computer program and related parameters for information recommendation, so that it processes video frames as it captures them and then obtains the recommendation information from its local database.
In another implementation, the process of information recommendation may be primarily dependent on the server 2 for processing. For example, the server 2 may receive real-time videos sent from the user terminal 1 in real time when receiving a recommendation information acquisition request of a user, acquire recommendation information according to the real-time videos, and return the recommendation information to the user terminal 1. Since the server 2 generally has a relatively strong data processing capability and analysis capability, and the data storage capacity is generally large, a plurality of user terminals 1 can be connected for centralized processing, and the processing efficiency is relatively high.
The steps of the information recommendation method in the present invention may also be executed by the user terminal 1 and the server 2 in combination. For example, a part of the steps in the method according to the following embodiment of the present invention may be implemented by the user terminal 1, and another part of the steps may be implemented by the server 2. The present invention does not limit whether the execution subject of the information recommendation method is a single individual or a distributed system.
Referring to fig. 2, which is a flowchart illustrating an information recommendation method according to an embodiment of the present invention, an execution subject of the method shown in fig. 2 may be a software and/or hardware device, such as the user terminal and/or the server shown in fig. 1. The method shown in fig. 2 includes steps S101 to S104, which are specifically as follows:
and S101, acquiring a real-time video currently shot by the user terminal.
This step can be understood as actively acquiring, in real time, the video currently being shot by the camera of the user terminal; or as acquiring the currently shot video from the camera of the user terminal when the user enables the intelligent information recommendation function or when preset trigger information is obtained.
Optionally, when the real-time video is acquired, object identification may be performed on the video frames in the real-time video to obtain the object images. Specifically, the real-time video is first captured and parsed into video frames to be analyzed; it can be understood that the real-time video is retrieved from the cache of the user terminal or the cache of the server, and a plurality of video frames may be parsed from it. The object images in a video frame can then be obtained by means of image recognition and image classification. For example, the video frames are segmented at the pixel level by a semantic segmentation algorithm (e.g., the FCN algorithm) or an instance segmentation algorithm (e.g., the Mask R-CNN algorithm), and the category and position of each object image in each video frame are identified.
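As a minimal sketch of this step, the following assumes a torchvision Mask R-CNN model pretrained on COCO and OpenCV for frame capture; the model choice, score threshold and label set are illustrative assumptions rather than the implementation required by this embodiment:

```python
# Hypothetical sketch: parse the real-time video into frames and run instance
# segmentation to obtain the category and position of each object image.
import cv2
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect_objects(frame_bgr, score_threshold=0.7):
    """Return (label_id, box, score) triples for one video frame."""
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = model([to_tensor(frame_rgb)])[0]
    detections = []
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if float(score) >= score_threshold:
            detections.append((int(label), [float(v) for v in box], float(score)))
    return detections

cap = cv2.VideoCapture(0)          # real-time video from the terminal's camera
ok, frame = cap.read()             # one video frame of the real-time video
if ok:
    object_images = detect_objects(frame)
cap.release()
```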
Optionally, when an object image is acquired, the attribute information of the object image may also be displayed in a floating manner above the object image. The attribute information may be understood as the name, composition, function, and/or make-up of the object corresponding to the object image. For example, for a tomato image, a list of the nutrients of tomatoes is acquired and then floated in the vicinity of the tomato image. For another example, if a tomato image is recognized, the most frequently used recipes that take tomatoes as the main ingredient are floated near it. In this way, the attribute information of each object in the real-time video can be provided to the user, which improves the user experience.
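A minimal sketch of such a floating display is given below; the attribute table and the label-name mapping are illustrative assumptions, and the detections are the (label, box, score) triples from the sketch above:

```python
# Hypothetical sketch: float an object's attribute information near its image
# in the displayed video frame. ATTRIBUTES is an assumed lookup table.
import cv2

ATTRIBUTES = {"tomato": "vitamin C, lycopene, potassium"}

def float_attributes(frame, detections, label_names):
    """detections: (label_id, box, score) triples; label_names: id -> name."""
    for label, (x1, y1, x2, y2), _ in detections:
        name = label_names.get(label, "")
        text = ATTRIBUTES.get(name)
        if text:
            cv2.putText(frame, f"{name}: {text}",
                        (int(x1), max(15, int(y1) - 10)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
    return frame
```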
S102, determining first key information according to the object image in the real-time video.
Optionally, in order to make clear to the user which real objects have been recognized, the object images may be highlighted in the real-time video; it can be understood that the real-time video is highlighted and then displayed to the user synchronously in real time.
Fig. 3 is a diagram illustrating an example of object images in a kitchen scene according to an embodiment of the present invention. In the example shown in fig. 3, after the user enters the kitchen, a real-time video of the kitchen is obtained, and the object images recognized in the kitchen may include: an electric rice cooker 31, a steamer 32, a microwave oven 33, a saucepan 34, a chicken 36 and a fish 36. In fig. 3, each recognized object image is marked with a ring-shaped halo to highlight it; alternatively, a frame-shaped mark or a special color mark may be used.
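A minimal sketch of the highlighting is shown below, drawing a ring around each detected bounding box (a rectangular frame or a color overlay would work the same way); it assumes the (label, box, score) detections produced earlier:

```python
# Hypothetical sketch: highlight the recognized object images in the frame
# with ring-shaped markers so the user can see what has been identified.
import cv2

def highlight_objects(frame, detections):
    for _, (x1, y1, x2, y2), _ in detections:
        center = (int((x1 + x2) / 2), int((y1 + y2) / 2))
        radius = int(max(x2 - x1, y2 - y1) / 2)
        cv2.circle(frame, center, radius, (0, 255, 255), thickness=2)
    return frame

# highlighted = highlight_objects(frame, object_images)
# cv2.imshow("real-time video", highlighted)   # shown synchronously to the user
```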
The first key information may be determined in various specific ways. Referring to fig. 4, which is a schematic flowchart of an alternative embodiment of step S102 in fig. 2 according to an embodiment of the present invention, the method shown in fig. 4 includes steps S201 to S203, as follows:
S201, receiving a selection instruction input by the user for the object image.
The selection instruction can be input by the user in various ways, for example by voice, by tapping an object image on a touch display screen, or by moving a finger in front of the camera lens so that the hover position of the finger image forms the selection instruction.
Fig. 5 is a diagram illustrating an example of inputting a selection instruction according to an embodiment of the present invention. In the implementation shown in fig. 5, AR glasses are taken as the execution subject: the user wears the AR glasses, reaches out with the right hand and draws a circle in the real-time video displayed in front of him or her, circling the electric rice cooker 31 and the chicken 36. The AR glasses acquire the finger hover movement trajectory in the real-time video, and determine the selection instruction input by the user for the object image according to the positional relationship between the finger hover movement trajectory and the object image. For example, if the finger hover movement trajectory in fig. 5 overlaps the object images of the rice cooker 31 and the chicken 36, the selection instruction is determined to be the selection of the rice cooker 31 and the chicken 36. The selection instruction may be determined from the region enclosed when the finger hover movement trajectory forms a closed loop, or the object images crossed by the finger hover movement trajectory may be taken as the object images indicated by the selection instruction.
Fig. 6 is a diagram illustrating another example of inputting a selection instruction according to an embodiment of the present invention. In fig. 6, the user slides a finger across the image areas of the electric rice cooker 31 and the chicken 36, and the object images swept over from the moment the finger image appears to the moment it leaves are taken as the object images indicated by the selection instruction. The selection instruction can be input in many other ways, which are not listed here one by one.
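A minimal sketch of resolving such a selection instruction is given below; it assumes the finger hover movement trajectory is available as a list of fingertip positions in frame coordinates and that objects are described by the (label, box, score) detections above:

```python
# Hypothetical sketch: determine which object images a selection instruction
# refers to from the positional relationship between the finger hover
# movement trajectory and each object's bounding box.
def select_by_trajectory(trajectory, detections):
    """trajectory: [(x, y), ...] fingertip positions; returns overlapped detections."""
    selected = []
    for label, (x1, y1, x2, y2), score in detections:
        if any(x1 <= x <= x2 and y1 <= y <= y2 for x, y in trajectory):
            selected.append((label, (x1, y1, x2, y2), score))
    return selected

# e.g. a circle drawn around the rice cooker and the chicken:
# target_object_images = select_by_trajectory(finger_points, object_images)
```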
And S202, determining the object image indicated by the selection instruction as a target object image.
S203, determining first key information according to the target object image.
The semantic information of the target object image may be directly used as the first key information, for example, "rice cooker" and "chicken" corresponding to the rice cooker 31 and the chicken 36 may be used as the first key information.
Alternatively, the semantic information of the target object image may first be determined as first semantic information; for example, "rice cooker" and "chicken" corresponding to the rice cooker 31 and the chicken 36 can be understood as the first semantic information. The first semantic information and the synonym information of the first semantic information are then used together as the first key information. For example, if synonyms of "rice cooker" include "electric cooker" and synonyms of "chicken" include "chicken meat", then "rice cooker, electric cooker, chicken, chicken meat" is taken as the first key information.
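A minimal sketch of this synonym expansion is shown below; the synonym table is an illustrative assumption:

```python
# Hypothetical sketch: build the first key information from the first semantic
# information plus its synonyms.
SYNONYMS = {
    "rice cooker": ["electric cooker"],
    "chicken": ["chicken meat"],
}

def build_first_key_information(first_semantic_info):
    keys = []
    for term in first_semantic_info:
        keys.append(term)
        keys.extend(SYNONYMS.get(term, []))
    return keys

# build_first_key_information(["rice cooker", "chicken"])
# -> ["rice cooker", "electric cooker", "chicken", "chicken meat"]
```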
S103, determining second key information according to the scene information corresponding to the real-time video.
Optionally, the scene information corresponding to the real-time video may be obtained first, and the second key information is then determined from it. Specifically, the semantic information of the object images in the real-time video may be determined as second semantic information; for example, the second semantic information acquired in fig. 3 includes: electric rice cooker, steamer, microwave oven, saucepan, chicken and fish. The scene information corresponding to the real-time video is then determined according to the second semantic information; for example, the scene information may be determined to be "kitchen scene" according to "electric rice cooker, steamer, microwave oven, saucepan, chicken and fish".
One implementation of determining the scene information corresponding to the real-time video may be as follows. First, a plurality of pieces of candidate scene information are determined according to the second semantic information; for example, if object images such as a table, a chair and a curtain are also recognized, the candidate scene information "kitchen scene" and "balcony scene" may be obtained. Then, the scene information corresponding to the real-time video is determined among the candidate scene information according to the first key information. For example, the first key information obtained in fig. 5 and fig. 6 may be "rice cooker, chicken", so "kitchen scene" is finally chosen from the candidates "kitchen scene" and "balcony scene" as the scene information corresponding to the real-time video. Determining the scene information corresponding to the real-time video in combination with the first key information in this way improves the accuracy of the scene information.
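A minimal sketch of this two-stage scene determination follows; the scene vocabulary table and the scoring rule are illustrative assumptions:

```python
# Hypothetical sketch: determine candidate scenes from the second semantic
# information, then pick the scene that best matches the first key information.
SCENE_VOCABULARY = {
    "kitchen scene": {"electric rice cooker", "rice cooker", "steamer",
                      "microwave oven", "saucepan", "chicken", "fish"},
    "balcony scene": {"table", "chair", "curtain"},
}

def candidate_scenes(second_semantic_info, top_k=2):
    scores = {scene: len(vocab & set(second_semantic_info))
              for scene, vocab in SCENE_VOCABULARY.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

def resolve_scene(candidates, first_key_info):
    """Prefer the candidate scene whose vocabulary overlaps the first key information."""
    for scene in candidates:
        if SCENE_VOCABULARY[scene] & set(first_key_info):
            return scene
    return candidates[0]

# scene = resolve_scene(candidate_scenes(second_semantic_info), first_key_info)
```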
Optionally, in another implementation, the scene information corresponding to the real-time video may be determined in combination with geographic position information. Specifically, the current position information of the user terminal is obtained first, and the scene information corresponding to the real-time video is then determined according to the current position information. For example, if the positioning information indicates that the user terminal is currently located in the kitchen of the user's home, the scene information corresponding to the real-time video is directly determined to be "kitchen".
In this embodiment, the steps S102 and S103 are not limited by the described operation sequence, and the steps S102 and S103 may be performed in other sequences or simultaneously.
And S104, obtaining current recommendation information according to the first key information and the second key information.
In one implementation, the first key information and the second key information may be directly used as search terms for searching, and the search result is used as current recommendation information and then displayed to the user.
In another implementation, various combinations of the first key information and the second key information may be assembled into a list of search expressions and displayed to the user. After the user selects one expression from the list, a search is performed with the selected search expression, the search result is used as the current recommendation information, and it is then displayed to the user. Adding this user interaction makes it possible to match the user's actual requirement.
In yet another implementation, various combinations of the first key information and the second key information may be assembled into a list of search expressions, and preference information may then be obtained from the user's own browsing history or from the browsing history of other users. A final search expression is selected from the list according to the preference information, a search is performed with the final search expression, the search result is used as the current recommendation information, and it is then displayed to the user. Generating preference information in this way improves the accuracy of the recommendation information.
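A minimal sketch of combining the two kinds of key information and selecting a search expression by preference is shown below; the search backend and the preference representation (simple per-term browsing counts) are illustrative assumptions:

```python
# Hypothetical sketch: combine the first and second key information into
# search expressions, pick one by preference counts mined from browsing
# history, and return the search results as the current recommendation.
from itertools import product

def build_search_expressions(first_key_info, second_key_info):
    return [f"{scene} {obj}" for scene, obj in product(second_key_info, first_key_info)]

def pick_by_preference(expressions, preference_counts):
    """Choose the expression whose terms the user has browsed most often."""
    def score(expr):
        return sum(preference_counts.get(term, 0) for term in expr.split())
    return max(expressions, key=score)

def recommend(first_key_info, second_key_info, preference_counts, search_fn):
    expressions = build_search_expressions(first_key_info, second_key_info)
    chosen = pick_by_preference(expressions, preference_counts)
    return search_fn(chosen)   # search results become the current recommendation information
```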
According to the information recommendation method provided by this embodiment, information about the objects and the environment around the user is obtained by acquiring the real-time video currently shot by the user terminal; first key information is determined from the object image in the real-time video, the first key information reflecting an object the user is paying attention to; second key information is determined from the scene information corresponding to the real-time video, the second key information reflecting the retrieval direction and the data field; and current recommendation information is obtained from the first key information and the second key information. Recommendation of strongly correlated information can thus be performed by combining the scene the user is in with the object the user is paying attention to, which improves the pertinence of information recommendation and provides a good user experience.
Referring to fig. 7, which is a schematic structural diagram of an information recommendation apparatus according to an embodiment of the present invention, the information recommendation apparatus 50 shown in fig. 7 mainly includes:
the acquisition module 51 is used for acquiring a real-time video currently shot by the user terminal;
the first processing module 52 is configured to determine first key information according to an object image in the real-time video;
the second processing module 53 is configured to determine second key information according to scene information corresponding to the real-time video;
and the recommending module 54 is configured to obtain current recommended information according to the first key information and the second key information.
The information recommendation apparatus 50 in the embodiment shown in fig. 7 can be correspondingly used to perform the steps in the method embodiment shown in fig. 2, and the implementation principle and the technical effect are similar, which are not described herein again.
Optionally, the first processing module 52 is configured to:
receiving a selection instruction input by a user for the object image; determining the object image indicated by the selection instruction as a target object image; and determining first key information according to the target object image.
Optionally, before the receiving of the selection instruction input by the user for the object image, the first processing module 52 is further configured to:
in the real-time video, the object image is highlighted to a user.
Optionally, the first processing module 52 is specifically configured to:
acquiring a finger hover movement trajectory in the real-time video; and determining the selection instruction input by the user for the object image according to the positional relationship between the finger hover movement trajectory and the object image.
Optionally, the first processing module 52 is specifically configured to:
determining semantic information of the target object image as first semantic information; and taking the first semantic information and synonym information of the first semantic information as first key information.
Optionally, before the determining the first key information according to the object image in the real-time video, the first processing module 52 is further configured to:
and carrying out object identification on the video frame in the real-time video to obtain the object image.
Optionally, before the determining the second key information according to the scene information corresponding to the real-time video, the second processing module 53 is further configured to:
acquiring current position information of the user terminal; and determining scene information corresponding to the real-time video according to the current position information.
Optionally, before the determining the second key information according to the scene information corresponding to the real-time video, the second processing module 53 is further configured to:
determining semantic information of an object image in the real-time video as second semantic information; and determining scene information corresponding to the real-time video according to the second semantic information.
Optionally, the second processing module 53 is specifically configured to;
determining a plurality of candidate scene information according to the second semantic information; and determining scene information corresponding to the real-time video in the plurality of candidate scene information according to the first key information.
Referring to fig. 8, which is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention, the electronic device 60 includes: a processor 61, memory 62 and computer programs; wherein
The memory 62 is used to store the computer program and may, for example, be a flash memory. The computer program is, for example, an application program or a functional module that implements the above method.
A processor 61 for executing the computer program stored by the memory to implement the steps of the above method. Reference may be made in particular to the description relating to the preceding method embodiment.
Alternatively, the memory 62 may be separate or integrated with the processor 61.
When the memory 62 is a device independent of the processor 61, the electronic device 60 may further include:
a bus 63 for connecting the memory 62 and the processor 61.
Fig. 9 is a diagram illustrating an example of the electronic device shown in fig. 8 according to an embodiment of the present invention. On the basis of the embodiment shown in fig. 8, the electronic device may specifically be the terminal device 800 shown in fig. 9. For example, the terminal device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
With continued reference to fig. 9, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
The present invention also provides a readable storage medium, in which a computer program is stored, which, when being executed by a processor, is adapted to implement the methods provided by the various embodiments described above.
The readable storage medium may be a computer storage medium or a communication medium. Communication media include any medium that facilitates transfer of a computer program from one place to another. Computer storage media may be any available media that can be accessed by a general purpose or special purpose computer. For example, a readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an Application Specific Integrated Circuit (ASIC). Additionally, the ASIC may reside in user equipment. Of course, the processor and the readable storage medium may also reside as discrete components in a communication device. The readable storage medium may be a read-only memory (ROM), a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The present invention also provides a program product comprising execution instructions stored in a readable storage medium. The at least one processor of the device may read the execution instructions from the readable storage medium, and the execution of the execution instructions by the at least one processor causes the device to implement the methods provided by the various embodiments described above.
In the above embodiments of the electronic device, it should be understood that the Processor may be a Central Processing Unit (CPU), other general-purpose processors, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of the hardware and software modules within the processor.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (14)

1. An information recommendation method, comprising:
acquiring a real-time video currently shot by a user terminal;
determining first key information according to an object image in the real-time video;
determining semantic information of an object image in the real-time video as second semantic information;
determining a plurality of candidate scene information according to the second semantic information;
determining scene information corresponding to the real-time video in the multiple candidate scene information according to the first key information;
determining second key information according to scene information corresponding to the real-time video;
and obtaining current recommendation information according to the first key information and the second key information.
2. The method of claim 1, wherein determining first key information from the image of the object in the real-time video comprises:
receiving a selection instruction input by a user for the object image;
determining the object image indicated by the selection instruction as a target object image;
and determining first key information according to the target object image.
3. The method according to claim 2, before the receiving a selection instruction input by a user for the object image, further comprising:
in the real-time video, the object image is highlighted to a user.
4. The method according to claim 2, wherein the receiving of the selection instruction input by the user for the object image comprises:
acquiring a finger hover movement trajectory in the real-time video;
and determining the selection instruction input by the user for the object image according to the positional relationship between the finger hover movement trajectory and the object image.
5. The method of claim 2, wherein determining first key information from the target object image comprises:
determining semantic information of the target object image as first semantic information;
and taking the first semantic information and synonym information of the first semantic information as first key information.
6. The method according to any one of claims 1 to 5, wherein before determining the first key information from the image of the object in the real-time video, the method further comprises:
and carrying out object identification on the video frame in the real-time video to obtain the object image.
7. An information recommendation apparatus, comprising:
the acquisition module is used for acquiring a real-time video currently shot by the user terminal;
the first processing module is used for determining first key information according to an object image in the real-time video;
the second processing module is used for determining second key information according to the scene information corresponding to the real-time video;
the recommendation module is used for obtaining current recommendation information according to the first key information and the second key information;
before the second processing module determines second key information according to the scene information corresponding to the real-time video, the second processing module is further configured to:
determining semantic information of an object image in the real-time video as second semantic information; determining scene information corresponding to the real-time video according to the second semantic information;
the second processing module is specifically configured to:
determining a plurality of candidate scene information according to the second semantic information; and determining scene information corresponding to the real-time video in the plurality of candidate scene information according to the first key information.
8. The apparatus of claim 7, wherein the first processing module is configured to:
receiving a selection instruction input by a user for the object image; determining the object image indicated by the selection instruction as a target object image; and determining first key information according to the target object image.
9. The apparatus of claim 8, wherein the first processing module, prior to the receiving of the selection instruction input by the user for the object image, is further configured to:
in the real-time video, the object image is highlighted to a user.
10. The apparatus of claim 8, wherein the first processing module is specifically configured to:
acquiring a finger hover movement trajectory in the real-time video; and determining the selection instruction input by the user for the object image according to the positional relationship between the finger hover movement trajectory and the object image.
11. The apparatus of claim 8, wherein the first processing module is specifically configured to:
determining semantic information of the target object image as first semantic information; and taking the first semantic information and synonym information of the first semantic information as first key information.
12. The apparatus according to any one of claims 7 to 11, wherein the first processing module, before determining the first key information according to the object image in the real-time video, is further configured to:
and carrying out object identification on the video frame in the real-time video to obtain the object image.
13. An electronic device, comprising: a memory, a processor, and a computer program, the computer program being stored in the memory, the processor running the computer program to perform the information recommendation method of any one of claims 1 to 6.
14. A readable storage medium, in which a computer program is stored, which, when being executed by a processor, is adapted to carry out the information recommendation method according to any one of claims 1 to 6.
CN201810993829.4A 2018-08-29 2018-08-29 Information recommendation method and device, electronic equipment and readable storage medium Active CN109189986B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810993829.4A CN109189986B (en) 2018-08-29 2018-08-29 Information recommendation method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810993829.4A CN109189986B (en) 2018-08-29 2018-08-29 Information recommendation method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN109189986A CN109189986A (en) 2019-01-11
CN109189986B true CN109189986B (en) 2020-07-28

Family

ID=64916831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810993829.4A Active CN109189986B (en) 2018-08-29 2018-08-29 Information recommendation method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN109189986B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109729293A (en) * 2019-01-21 2019-05-07 深圳市敢为软件技术有限公司 Display methods, device and the storage medium of video related information
CN110322569B (en) * 2019-07-03 2023-03-31 百度在线网络技术(北京)有限公司 Multi-modal AR processing method, device, equipment and readable storage medium
CN111048180B (en) * 2019-12-05 2024-02-02 上海交通大学医学院 Dietary intake investigation analysis system, method and terminal
CN111031398A (en) * 2019-12-10 2020-04-17 维沃移动通信有限公司 Video control method and electronic equipment
CN111260497A (en) * 2020-01-08 2020-06-09 黄莹 Mobile terminal based operation guidance system and method in industrial environment
CN115225916A (en) * 2021-04-15 2022-10-21 北京字节跳动网络技术有限公司 Video processing method, device and equipment
CN113923252B (en) * 2021-09-30 2023-11-21 北京蜂巢世纪科技有限公司 Image display device, method and system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120078899A1 (en) * 2010-09-27 2012-03-29 Fontana James A Systems and methods for defining objects of interest in multimedia content
CN104239465B (en) * 2014-09-02 2018-09-07 百度在线网络技术(北京)有限公司 A kind of method and device scanned for based on scene information
CN105488044A (en) * 2014-09-16 2016-04-13 华为技术有限公司 Data processing method and device
CN106980612A (en) * 2016-01-15 2017-07-25 夏普株式会社 Information recommendation system and method
CN105718555A (en) * 2016-01-19 2016-06-29 中国人民解放军国防科学技术大学 Hierarchical semantic description based image retrieving method
ES2648368B1 (en) * 2016-06-29 2018-11-14 Accenture Global Solutions Limited Video recommendation based on content
CN106777071B (en) * 2016-12-12 2021-03-05 北京奇虎科技有限公司 Method and device for acquiring reference information by image recognition
CN107016163B (en) * 2017-03-07 2021-04-27 北京小米移动软件有限公司 Plant species recommendation method and device
CN107845025A (en) * 2017-11-10 2018-03-27 天脉聚源(北京)传媒科技有限公司 The method and device of article in a kind of recommendation video
CN108388836B (en) * 2018-01-25 2022-02-11 北京一览科技有限公司 Method and device for acquiring video semantic information

Also Published As

Publication number Publication date
CN109189986A (en) 2019-01-11

Similar Documents

Publication Publication Date Title
CN109189986B (en) Information recommendation method and device, electronic equipment and readable storage medium
US11520824B2 (en) Method for displaying information, electronic device and system
US11120078B2 (en) Method and device for video processing, electronic device, and storage medium
CN107341185B (en) Information display method and device
CN105517112B (en) Method and device for displaying WiFi network information
CN106095465B (en) Method and device for setting identity image
CN111753135B (en) Video display method, device, terminal, server, system and storage medium
CN107784045B (en) Quick reply method and device for quick reply
CN107315487B (en) Input processing method and device and electronic equipment
CN111783001A (en) Page display method and device, electronic equipment and storage medium
CN110688527A (en) Video recommendation method and device, storage medium and electronic equipment
CN106484138B (en) A kind of input method and device
WO2020119254A1 (en) Method and device for filter recommendation, electronic equipment, and storage medium
CN106550252A (en) The method for pushing of information, device and equipment
CN106572268B (en) Information display method and device
CN104811904B (en) Contact person's setting method and device
CN112464031A (en) Interaction method, interaction device, electronic equipment and storage medium
WO2022095860A1 (en) Fingernail special effect adding method and device
CN107992839A (en) Person tracking method, device and readable storage medium storing program for executing
CN110019897B (en) Method and device for displaying picture
CN112000878A (en) Article information query method, device, system, electronic equipment and storage medium
CN112115341A (en) Content display method, device, terminal, server, system and storage medium
CN107729439A (en) Obtain the methods, devices and systems of multi-medium data
CN111615007A (en) Video display method, device and system
US11284127B2 (en) Method and apparatus for pushing information in live broadcast room

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant