CN113472834A - Object pushing method and device

Object pushing method and device

Info

Publication number
CN113472834A
Authority
CN
China
Prior art keywords
account
identification information
target object
value
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010345804.0A
Other languages
Chinese (zh)
Inventor
冯谨强
高雪松
孙菁
陈维强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Group Co Ltd
Hisense Co Ltd
Original Assignee
Hisense Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Co Ltd filed Critical Hisense Co Ltd
Priority to CN202010345804.0A priority Critical patent/CN113472834A/en
Publication of CN113472834A publication Critical patent/CN113472834A/en
Pending legal-status Critical Current

Classifications

    • H04L67/55 Push-based network services
    • G06F16/906 Clustering; Classification
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • H04L67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an object pushing method and device, which solve the technical problems that existing electronic devices take a long time to push objects and push them with low accuracy. According to the method and device, the attention degree of an account to a target object can be determined from the identification information of the account and images containing the account with that identification information; the stored attention value of the account for the attribute category of the target object is updated according to the attention degree; and when the updated attention value meets a pushing condition, other objects of the attribute category of the target object can be pushed to the account, so that objects can be pushed to the account quickly and accurately.

Description

Object pushing method and device
Technical Field
The present application relates to the field of object pushing technologies, and in particular, to an object pushing method, apparatus, system, device, and medium.
Background
Currently, when an electronic device pushes objects to an account, it must perform big-data analysis on data accumulated over a long period, such as the account's click frequency on objects and dwell time on objects, in order to gradually learn the types of objects the account is interested in and the degree of that interest, and then push objects accordingly. However, the whole pushing process takes a long time, and judging the account's interest in objects only from operation behaviors such as click frequency and dwell time may make the pushed objects inaccurate, for the following reasons:
First, when objects of the same object type on the same electronic device are viewed by different accounts, the objects of interest differ from account to account. Pushing objects only according to operation behaviors such as click frequency and dwell time makes the pushing range too wide and the pushed objects disordered, so the pushing is inaccurate.
In addition, an account may click on an object and then suddenly leave while the object continues to play. A method that judges interest by dwell time would mistakenly conclude that the account is interested in the object, which may interfere with the objects to be pushed and make them less accurate.
Second, when an account browses objects of a new object type, it may click through them sequentially or randomly. Judging interest by click frequency would mistakenly treat every clicked object as an object of interest, again making the pushed objects inaccurate.
Disclosure of Invention
The application provides an object pushing method, apparatus, system, device, and medium, which are used to solve the technical problems that when an existing electronic device pushes objects, the whole pushing process takes a long time and the accuracy of the pushed objects is low.
In a first aspect, the present application provides an object pushing method, including:
receiving an image containing an account that is viewing a currently displayed target object;
determining, according to the image, identification information of the viewing account; when an attention value of the account with the identification information for the attribute category of the target object is stored, determining, according to the image containing the account with the identification information, the attention degree of the account for the target object, and updating the stored attention value according to the attention degree;
and when the updated attention value meets a pushing condition, acquiring other objects of the attribute category, and pushing information of the other objects to a display device for display after the target object finishes playing.
In a second aspect, the present application further provides an object pushing apparatus, including:
a receiving unit configured to receive an image containing an account that is viewing a currently displayed target object;
a processing unit configured to determine, according to the image, identification information of the viewing account; when an attention value of the account with the identification information for the attribute category of the target object is stored, determine, according to the image containing the account with the identification information, the attention degree of the account for the target object, and update the stored attention value according to the attention degree;
and a pushing unit configured to acquire other objects of the attribute category when the updated attention value meets a pushing condition, and push information of the other objects to a display device after the target object finishes playing.
In a third aspect, the present application further provides an object pushing system, where the system includes any one of the above object pushing apparatuses applied to an electronic device, and a display device for displaying a target object and other objects.
In a fourth aspect, the present application further provides an electronic device, where the electronic device at least includes a processor and a memory, and the processor is configured to implement the steps of any one of the object pushing methods when executing a computer program stored in the memory.
In a fifth aspect, the present application further provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of any one of the object pushing methods described above.
According to the method and device, the attention degree of the account to the target object can be determined from the identification information of the account and the images containing the account with that identification information; the stored attention value of the account for the attribute category of the target object is updated according to the attention degree; and when the updated attention value meets the pushing condition, other objects of the attribute category of the target object can be pushed to the account, so that objects can be pushed to the account quickly and accurately.
Drawings
To illustrate the technical solutions of the present application or the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an object pushing process according to some embodiments of the present application;
fig. 2 is a schematic diagram of a second included angle between the orientation of the face of the account and the plane of the display device according to some embodiments of the present application;
FIG. 3 is a schematic diagram of object pushing provided by some embodiments of the present application;
FIG. 4 is a schematic diagram of a face box detected in an image according to some embodiments of the present application;
FIG. 5 is a schematic view of a face image before and after correction based on 5 facial keypoints according to some embodiments of the present application;
fig. 6 is a schematic structural diagram of an object pushing apparatus according to some embodiments of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to some embodiments of the present application;
fig. 8 is a schematic structural diagram of an object pushing system according to some embodiments of the present application.
Detailed Description
In order to quickly and accurately push an object for an account, the application provides an object pushing method, device, system, equipment and medium.
To make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application rather than all of them. All other embodiments obtained by those skilled in the art based on the embodiments herein without creative effort fall within the protection scope of the present application.
In practical applications, an account may view various types of objects displayed on a display device, such as video, audio, game, picture, and novel objects, and the objects displayed on the display device may be obtained through the electronic device. When the account views a target object on the display device, the electronic device may determine the attention degree of the account to the target object, and thereby the account's attention value for the attribute category of the target object, so as to decide whether to push information of other objects of that attribute category to the account after the target object finishes playing.
Fig. 1 is a schematic diagram of an object pushing process provided in some embodiments of the present application, where the process includes the following steps:
s101: receiving an image containing an account for viewing a currently displayed target object;
in the application, the object pushing method is applied to the electronic equipment, and the electronic equipment can be a smart television, a smart housekeeping server and the like. The electronic device may acquire various types of objects, such as video class objects, audio class objects, game class objects, picture class objects, novel class objects, and so forth. When a certain object is viewed by the account, the viewed object is the currently displayed target object.
When the account views the target object, the electronic device may receive an image containing the account viewing the currently displayed target object. When the electronic device is a local server, the image can be acquired by an image acquisition device, which captures an image of the account and sends it to the electronic device; the electronic device can then determine the identification information of the account from the image. The image acquisition device may be a camera or an intelligent sensor and may be disposed near the display device, for example in the middle of the top frame of the display device or directly above it, so as to capture images of the account viewing objects displayed on the display device.
S102: and determining identification information of an account for watching according to the image, determining the attention degree of the account containing the identification information to the target object according to the image of the account containing the identification information when the account containing the identification information watches the attention value of the attribute type of the target object, and updating the stored attention value according to the attention degree.
After receiving the image of the account, the electronic device determines the identification information of the account from the image based on a face feature extraction network model. After the identification information of the viewing account is determined, the attribute category of the target object, which reflects the style of the target object, may be identified.
Specifically, the attribute category of the target object may be the content contained in the attribute tag of the target object, and the electronic device may identify the attribute category from that tag. For example, if the target object is a video and the content contained in its attribute tag may be comedy, martial arts drama, family drama, children's drama, or the like, the attribute category of the target object can be identified from the content of the attribute tag, namely comedy, martial arts drama, family drama, children's drama, etc.;
if the target object is audio and the content contained in its attribute tag may be music, commentary, vocal works, or the like, the attribute category of the target object can be identified from the content of the attribute tag, namely music, commentary, vocal works, etc.;
if the target object is a novel and the content contained in its attribute tag may be urban modern, suspense, supernatural, history, or the like, the attribute category of the target object can be identified from the content of the attribute tag, namely urban modern, suspense, supernatural, history, etc.;
if the target object is a picture and the content contained in its attribute tag may be sad, fresh, or the like, the attribute category of the target object can be identified from the content of the attribute tag, namely sadness, freshness, etc.
In order to push objects for accounts, for each account identified by identification information, attention values of the account for objects of different attribute categories are stored in advance. Generally, if the account has viewed objects of a certain attribute category, its attention value for that category is relatively high; if it has not, the attention value is relatively low. The size of the attention value reflects the degree of interest of the account in objects of the corresponding attribute category.
Since the electronic device does not know in advance which account will view objects of which attribute category, for convenience of processing, after the identification information of the account and the attribute category of the target object are determined, it can first be determined whether an account with this identification information is stored. If not, the account is considered never to have viewed objects of the attribute category of the target object before; the account with the identification information is then stored, and the initial value of the attention value is used as the stored attention value of this account for objects of the attribute category of the target object.
If an account with the identification information is already stored, the electronic device determines whether an attention value of this account for objects of the attribute category is stored. If not, the account is considered never to have viewed objects of this attribute category before, and the initial value of the attention value is used as the stored attention value of this account for objects of the attribute category.
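This lookup-or-initialize behavior can be summarized in a short sketch. The data layout and the names `attention_store`, `INITIAL_ATTENTION`, and `get_attention_value` are illustrative assumptions, not the patent's implementation.

```python
# Minimal sketch (assumed data layout) of looking up or initializing the
# stored attention value per account and attribute category.
INITIAL_ATTENTION = 0.0  # assumed initial value of the attention value

# maps account identification information -> {attribute category: attention value}
attention_store: dict[str, dict[str, float]] = {}

def get_attention_value(account_id: str, category: str) -> float:
    """Return the stored attention value; if the account or the category
    has not been seen before, create the entry with the initial value."""
    categories = attention_store.setdefault(account_id, {})
    return categories.setdefault(category, INITIAL_ATTENTION)

print(get_attention_value("account_1", "comedy"))  # 0.0 on first sight
```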
S103: and when the updated attention value meets the pushing condition, acquiring other objects of the attribute category, and after the target object is played, pushing the information of the other objects to display equipment for displaying.
Since the attention value of the account with the identification information for objects of the attribute category has been determined, and the account is still viewing an object of that attribute category, indicating a high level of interest in objects of that category, the attention value of the account for objects of the attribute category needs to be updated.
When the attention value of the account with the identification information for objects of the attribute category is updated, the update is performed according to the currently stored attention value of the account for objects of the attribute category and the attention degree of the account for the target object.
When determining the attention degree of the account for the target object, it may be determined according to the number of times the account comments on the target object while viewing it, or according to the collected duration for which the account with the identification information stays in front of the display device while viewing the target object, or by a combination of the two. The reason is that the more times the account comments on the target object while viewing it, the more interested the account is in the target object and the greater the corresponding attention degree; likewise, the longer the account stays in front of the display device while viewing the target object, the more interested the account is and the greater the attention degree. Conversely, the less interested the account is in the target object, the smaller the attention degree.
After the attention degree of the account with the identification information for the target object is determined, the stored attention value of the account for objects of the attribute category of the target object is updated. The updated value can be determined as the sum of the product of the currently stored attention value and its corresponding weight value and the product of the attention degree and its corresponding weight value.
The electronic device judges whether the updated attention value meets the pushing condition. The pushing condition may be that the updated attention value is greater than a set push threshold; when it is, the updated attention value is determined to meet the pushing condition, indicating that the account with the identification information is interested in objects of the attribute category of the target object.
It is to be understood that if the updated attention value is not greater than the set push threshold, the account with the identification information is not sufficiently interested in objects of the attribute category of the target object, and other objects of that attribute category may not be pushed to the account.
When the electronic device pushes other objects for the account, after the target object finishes playing, information of the other objects may be pushed to the display device for display. The information of the other objects includes their names, pictures, content descriptions, and the like, so the account can learn about their content and decide, from the information displayed on the display device, whether to view the other objects. The process by which the electronic device pushes the other objects after deciding to push them belongs to the prior art and is not described in detail in this application.
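A minimal sketch of this push decision follows; the threshold value, the catalog layout, and the function names are assumptions for illustration only.

```python
PUSH_THRESHOLD = 0.5  # assumed value of the set push threshold

def select_push_candidates(updated_value: float, category: str,
                           catalog: dict[str, list[dict]]) -> list[dict]:
    """Return the information (name, picture, description) of other objects
    of the same attribute category when the push condition is met, else []."""
    if updated_value > PUSH_THRESHOLD:   # push condition: value exceeds threshold
        return catalog.get(category, [])
    return []

# usage: the returned info would be shown on the display device after playback
catalog = {"comedy": [{"name": "Another comedy", "picture": "poster.jpg",
                       "description": "..."}]}
print(select_push_candidates(0.8, "comedy", catalog))
```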
In addition, in the present application, when objects are pushed, the object type of the pushed objects is the same as that of the currently displayed object: if the currently displayed object is a video, the pushed objects are also videos; if it is audio, the pushed objects are also audio.
According to the method and device, the identification information of the account is determined, the attention degree of the account to the target object is determined from the images containing the account with the identification information, the stored attention value of the account for the attribute category of the target object is updated according to the attention degree, and other objects of the attribute category of the target object can then be pushed to the account when the updated attention value meets the pushing condition, so that objects can be pushed to the account quickly and accurately.
In order to push objects to the account better, on the basis of the above embodiment, in the present application, determining the attention degree of the account with the identification information for the target object according to the image containing the account includes:
and determining an expression change value and/or a concentration value of the account containing the identification information when the target object is viewed by the account containing the identification information according to the image of the account containing the identification information, and determining the attention degree of the account containing the identification information to the target object according to the determined expression change value and/or concentration value.
Facial expressions, such as excitement, surprise, calm, anger, and fear, are part of the account's language; they are psychological and physiological responses and can express the account's degree of interest in an object. At the same time, because accounts view different objects, a single expression cannot express the account's interest in the target object. For example, when an account views a video whose attribute category is crime, an angry expression may appear; this does not mean the account is uninterested in objects of that category, but rather that the account is engaged with the object's content. Moreover, the account's expressions change as the object keeps playing. Therefore, in the application, the attention degree of the account for the target object can be identified from the expression changes of the account while viewing the target object and the corresponding expression change value.
In addition, while the account views an object, if the account's face is oriented toward the area where the display device is located, the account is focusing on the object, which indicates interest; conversely, if the face is not oriented toward the display device, the account is not viewing the object, which indicates a lack of interest. Therefore, in the application, the attention degree of the account for the target object can also be identified from the duration for which the account watches the target object while it plays.
In the application, the electronic device may determine the expression change value and/or the concentration value of the account when the account watches the target object according to the acquired image of the account when the account watches the target object.
In order to better acquire the concentration value of the account when viewing the target object, on the basis of the above embodiment, in the present application, determining the concentration value of the account with the identification information when viewing the target object according to the image containing the account includes:
acquiring images within a second time period meeting a second preset condition; acquiring, based on a pre-trained deflection angle model, a second included angle between the face orientation of the account with the identification information in each image and the plane of the display device; judging, for each second included angle, whether it is within a preset second angle threshold range, and if so, determining that at the acquisition time of the image corresponding to the second included angle the account with the identification information is focusing on the target object; and determining the concentration value according to the third duration in which the account focuses on the target object and the second time period.
In the application, images of the account acquired within a second time period meeting a second preset condition during playback of the target object can be selected to determine how focused the account is on the target object within that duration. The second time period may be a time period containing set content, labeled in advance according to the content of the target object. For example, if the target object is a video, the second time period may correspond to a highlight part of the video, and the second duration corresponding to the second time period is the duration between the start and end times of that highlight. If the target object is a picture or a novel, the second time period may be determined from the average total viewing duration of pictures or novels: for example, if the target object is a picture and the total viewing duration is 1 minute, the period from 10 s to 30 s after viewing begins may be the second time period, corresponding to a second duration of 20 s.
In order to more accurately determine the second time period satisfying the second preset condition, on the basis of the foregoing embodiments, in this application, the determining the second time period satisfying the second preset condition includes:
and dividing the resource duration corresponding to the target object into a plurality of time periods according to a preset time interval, and determining each time period as a second time period meeting a second preset condition.
In order to accurately obtain the attention of the account to the target object, in the application, the resource duration corresponding to the target object may be divided into 1 or more time periods according to a preset time interval, and each time period is determined as a second time period meeting a second preset condition.
In each second time period meeting the second preset condition, if an acquired image contains the account, the electronic device may obtain, based on the pre-trained deflection angle model, a second included angle between the face orientation of the account in the image and the plane of the display device. Specifically, taking a straight line perpendicular to the ground as the x axis and a straight line parallel to the ground as the y axis, the second included angle is the angle between the face orientation of the account and the xy plane, obtained from the pre-trained deflection angle model.
Fig. 2 is a schematic diagram of the second included angle between the face orientation of the account and the plane of the display device according to some embodiments of the present application, where 21 is the position of the account's face, 22 is the face orientation, and 23 is the plane of the display device (when the image acquisition device and the display device are on the same plane or on parallel planes, 23 is also the plane of the image acquisition device). The second included angle between the face orientation and the plane of the display device is 24 or 25; which of the two is used can be chosen flexibly as needed.
After the second included angle between the face orientation in an image and the plane of the display device is acquired, it is judged whether the second included angle is within the preset second angle threshold range. If so, the face of the account in the image corresponding to the second included angle is oriented toward the area of the display device, indicating that at the image's acquisition time the account is focusing on the target object; otherwise, the account is not focusing on the target object.
Since the acquisition time of each image is known, when a second included angle is determined to be within the preset second angle threshold range, the image corresponding to it is marked. If several consecutive images are marked, they are taken as an image group, and the account is focusing on the target object during the time period corresponding to that group. The time period corresponding to the group, i.e., a third sub-time period, is determined from the acquisition times of the first and last marked images in the group. There may be several third sub-time periods within a second time period; after each third sub-time period is determined, the durations of all third sub-time periods within each second time period are added to obtain the third duration in which the account focuses on the target object within that second time period.
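The grouping of marked images into third sub-time periods can be sketched as below. The tuple layout of the frames and the concrete angle threshold range are assumptions; the patent only requires that the second included angle fall within a preset range.

```python
# Sketch: merge consecutive "marked" images (second included angle within the
# threshold range) into image groups and sum the group spans to obtain the
# third duration of focused viewing.
ANGLE_MIN, ANGLE_MAX = 60.0, 120.0  # assumed second angle threshold range (degrees)

def third_duration(frames: list[tuple[float, float]]) -> float:
    """frames: (capture_time_seconds, second_included_angle), in capture order.
    Returns the total duration the account spent focused on the display."""
    total, group_start, prev_time = 0.0, None, None
    for t, angle in frames:
        if ANGLE_MIN <= angle <= ANGLE_MAX:    # image is marked
            if group_start is None:
                group_start = t                # first marked image of a group
            prev_time = t                      # last marked image so far
        elif group_start is not None:          # a group just ended
            total += prev_time - group_start   # last minus first capture time
            group_start = None
    if group_start is not None:                # group runs to the last frame
        total += prev_time - group_start
    return total

print(third_duration([(0, 90), (1, 95), (2, 30), (3, 88), (4, 91)]))  # 2.0
```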
After the third duration in which the account focuses on the target object within a second time period is obtained for each second time period, the concentration value of the account in that second time period is determined according to the ratio of the third duration to the second duration of the second time period. Specifically, the following formula can be used:
$z_i = c_i \cdot \dfrac{t_{3i}}{t_{2i}}$

where $z_i$ is the concentration value of the account in the $i$-th second time period, $t_{3i}$ is the third duration in which the account focuses on viewing the target object within the $i$-th second time period, $t_{2i}$ is the second duration of the $i$-th second time period, and $c_i$ is the weight value corresponding to the $i$-th second time period.
After the concentration value of the account in each second time period is obtained, the weighted concentration values of all second time periods are added to obtain the concentration value of the account when viewing the target object. Specifically, the following formula can be used:
$z = \sum_{i=1}^{n} z_i$

where $z_i$ is the concentration value of the account in the $i$-th second time period, the target object has $n$ second time periods in total, and $z$ is the concentration value of the account when viewing the target object.
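A direct transcription of the two formulas above, under the reconstruction $z_i = c_i \cdot t_{3i} / t_{2i}$ and $z = \sum_i z_i$; all input values are assumed for the example.

```python
# Sketch: concentration value of the account over all second time periods.
def concentration_value(t3: list[float], t2: list[float],
                        c: list[float]) -> float:
    """t3[i]: third duration (focused viewing) in the i-th second time period,
    t2[i]: second duration of that period, c[i]: its weight value."""
    return sum(ci * t3i / t2i for ci, t3i, t2i in zip(c, t3, t2))

# e.g. 10 periods of 60 s each, weight 0.05 per period, fully focused -> 0.5
print(concentration_value([60.0] * 10, [60.0] * 10, [0.05] * 10))
```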
In order to accurately determine the attention degree of the account for the target object, on the basis of the above embodiments, in the present application, determining the expression change value of the account with the identification information when viewing the target object according to the image containing the account includes:
the method comprises the steps of obtaining images collected within a first time period meeting a first preset condition, obtaining a second label value of facial expression of an account of identification information in each image based on an expression recognition network model which is trained in advance, determining the number of times of expression change of the account according to the second label value of the facial expression corresponding to each image, and determining the expression change value according to the number of times of expression change and the first time period.
In the application, images of the account acquired within a first time period meeting a first preset condition during playback of the target object can be selected to determine the expression change value of the account within that first time period. In order to determine the expression change value accurately, on the basis of the above embodiments, in the present application, acquiring the images collected within the first time period meeting the first preset condition includes:
and acquiring a first included angle between the face orientation of the account of the identification information in each image and the plane of the display device based on a deflection angle model which is trained in advance, and if the first included angles of a plurality of continuous images are all located in a preset first angle threshold range, taking the plurality of continuous images as the acquired images which are acquired in a first time period meeting a first preset condition.
Specifically, if an acquired image contains the account, the electronic device may obtain, based on the pre-trained deflection angle model, the first included angle between the face orientation of the account in the image and the plane of the display device; if the first included angles of several consecutive images are all within the preset first angle threshold range, those consecutive images are taken as the images collected within a first time period meeting the first preset condition. The process of obtaining the angle between the face orientation in an image and the plane of the display device was described in the above embodiment, and the first time period is the third sub-time period, described above, in which the account focuses on viewing the target object.
After the images within the first time period meeting the first preset condition are acquired, the electronic device may sequentially obtain, based on the pre-trained expression recognition network model, the second label value of the facial expression of the account in each image, and determine the number of expression changes of the account while viewing the target object within the first time period by judging, from the second label values corresponding to the images, whether the second label value of each image differs from that of the adjacent image.
After each third sub-time period is determined, the number of expression changes of the account while viewing the target object is counted for each third sub-time period, i.e., for each first time period. Then, for each second time period, the numbers of expression changes in the first time periods within that second time period may be added to obtain the number of expression changes of the account while focusing on the target object within the second time period. In addition, the durations of those first time periods are added to obtain the first duration in which the account focuses on the target object within the second time period.
After the first duration in which the account views the target object within a second time period and the number of expression changes within that second time period are obtained, the expression change value of the account in the second time period is determined according to the ratio of the number of expression changes to the first duration. Specifically, the following formula can be used:
$b_i = \begin{cases} d_i \cdot \dfrac{m_i}{t_{1i}}, & \dfrac{m_i}{t_{1i}} < 1 \\[2mm] d_i, & \dfrac{m_i}{t_{1i}} \ge 1 \end{cases}$
where $b_i$ is the expression change value of the account in the $i$-th second time period, $t_{1i}$ is the first duration in which the account focuses on viewing the target object within the $i$-th second time period, $m_i$ is the number of expression changes of the account while focusing on the target object within the $i$-th second time period, and $d_i$ is the weight value corresponding to the $i$-th second time period.
After the expression change value of the account in each second time period is obtained, the weighted expression change values of all second time periods are added to obtain the expression change value of the account when viewing the target object. Specifically, the following formula can be used:
$b = \sum_{i=1}^{n} b_i$

where $b_i$ is the expression change value of the account in the $i$-th second time period, the target object has $n$ second time periods in total, and $b$ is the expression change value of the account when viewing the target object.
The time unit of the first duration may be minutes.
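The per-period expression change value can be sketched the same way, using the reconstructed cap ($b_i = d_i \cdot m_i / t_{1i}$ when the ratio is below 1, else $b_i = d_i$); the inputs are assumed example values, with durations in minutes as noted above.

```python
# Sketch: expression change value of the account over all second time periods.
def expression_change_value(m: list[int], t1: list[float],
                            d: list[float]) -> float:
    """m[i]: number of expression changes in the i-th second time period,
    t1[i]: first duration (minutes) of focused viewing in that period,
    d[i]: the period's weight value. Returns b = sum of the b_i."""
    b = 0.0
    for mi, t1i, di in zip(m, t1, d):
        if t1i <= 0:
            continue                 # the account never focused in this period
        ratio = mi / t1i
        b += di * ratio if ratio < 1 else di   # capped at the weight value
    return b

print(expression_change_value([2, 0, 5], [4.0, 3.0, 2.0], [0.05] * 3))  # 0.075
```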
In order to push objects to the account better, on the basis of the above embodiments, in the present application, determining the attention degree of the account with the identification information for the target object according to the determined expression change value and/or concentration value includes:
and when the attention degree of the account of the identification information to the target object is determined according to the determined expression change value and the concentration value, determining the attention degree of the account of the identification information to the target object according to a first weight value corresponding to the concentration value and a second weight value corresponding to the expression change value.
In determining the attention degree, the expression change value of the account may be directly used as the attention degree, or the concentration value may be directly used as the attention degree.
In order to determine the attention degree accurately, when the attention degree of the account for the target object is determined from the determined expression change value and concentration value, a first weight value corresponding to the concentration value and a second weight value corresponding to the expression change value are preset. The attention degree of the account with the identification information for the target object is determined as the sum of the first product of the first weight value and the concentration value and the second product of the second weight value and the expression change value, where the first and second weight values are positive numbers less than 1 and their sum is 1.
Specifically, the first weight value and the second weight value may be the same or different in size, and may be flexibly adjusted as needed in specific applications.
In order to push an object to an account more accurately, in the present application, on the basis of the foregoing embodiments, updating the stored attention value according to the stored attention value and the attention degree includes:
and determining and storing the updated attention value according to the third weight value corresponding to the stored attention value and the fourth weight value corresponding to the attention degree.
In the application, in order to push objects for the account more accurately, after the attention degree of the account for the target object is obtained, the stored attention value can be updated. When the updated attention value is determined, it can be determined according to the currently stored attention value and the attention degree of the account for the target object.
In order to determine the updated attention value accurately, when it is determined from the currently stored attention value and the attention degree of the account for the target object, a third weight value corresponding to the currently stored attention value and a fourth weight value corresponding to the attention degree are preset. The updated attention value is determined as the sum of the first product of the third weight value and the currently stored attention value and the second product of the fourth weight value and the attention degree, where the third and fourth weight values are positive numbers less than 1 and their sum is 1.
Specifically, the third weight value and the fourth weight value may be the same or different in size, and may be flexibly adjusted as needed in specific applications.
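The two weighted sums described above combine into a short sketch: the attention degree $g$ from the concentration and expression change values, and the updated attention value $s_1$ from the stored value and $g$. The 0.5/0.5 weights are assumptions; the 0.3/0.7 pair echoes the example given below for a non-initial stored value.

```python
# Sketch of the attention degree and the attention value update.
def attention_degree(z: float, b: float,
                     w1: float = 0.5, w2: float = 0.5) -> float:
    """g = w1*z + w2*b, with the first and second weight values summing to 1."""
    return w1 * z + w2 * b

def update_attention(s0: float, g: float,
                     alpha: float = 0.3, beta: float = 0.7) -> float:
    """s1 = alpha*s0 + beta*g (third and fourth weight values, summing to 1)."""
    return alpha * s0 + beta * g

g = attention_degree(z=0.4, b=0.06)       # concentration and expression values
print(update_attention(s0=0.2, g=g))      # updated attention value s1 = 0.221
```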
In a specific embodiment, the total attention degree is 1, and the resource duration corresponding to the target object is divided into 10 time periods according to a preset time interval, i.e., there are 10 second time periods. The same weight value of 0.1 is preset for the attention degree of each second time period, the preset weight value $c$ for the concentration value of each second time period is 0.05, and the preset weight value $d$ for the expression change value of each second time period is 0.05.
Then, in each second time period, the concentration value $z_i$ is:

$z_i = 0.05 \cdot \dfrac{t_{3i}}{t_{2i}}$

and the expression change value $b_i$ is:

$b_i = \begin{cases} 0.05 \cdot \dfrac{m_i}{t_{1i}}, & \dfrac{m_i}{t_{1i}} < 1 \\[2mm] 0.05, & \dfrac{m_i}{t_{1i}} \ge 1 \end{cases}$
The updated attention value $s_1$ is:

$s_1 = \alpha \cdot s_0 + \beta \cdot g$

where $s_0$ is the currently stored attention value, $g$ is the attention degree of the account for the target object, $\alpha$ is the third weight value corresponding to the currently stored attention value, and $\beta$ is the fourth weight value corresponding to the attention degree.
In some embodiments, the third and fourth weight values are adjusted flexibly according to the currently stored attention value: for example, when $s_0$ is the initial value of the attention value, the third weight value may be 0.1 and the fourth weight value 0.9; when $s_0$ is not the initial value, the third weight value may be 0.3 and the fourth weight value 0.7.
And updating and saving the currently saved attention value by using the calculated attention degree of the account to the target object.
In order to perform subsequent object pushing more accurately, the updated attention value can be updated again according to the currently stored attention value. Specifically, the updated attention value may first be determined according to the third weight value corresponding to the stored attention value and the fourth weight value corresponding to the attention degree, and the re-updated attention value may then be determined according to the currently stored attention value and the updated attention value.
In some embodiments, in order to perform subsequent object pushing more accurately, on the basis of the above embodiments, in the present application, the re-updated attention value is determined and stored according to a fifth weight value corresponding to the stored attention value and a sixth weight value corresponding to the updated attention value.
When the re-updated attention value is determined, it may be obtained through two updates. The first update may adopt the method in the above embodiment; when the attention value is updated again from the stored attention value and the once-updated attention value, a fifth weight value corresponding to the currently stored attention value and a sixth weight value corresponding to the updated attention value are preset, and the re-updated attention value is determined as the sum of the first product of the fifth weight value and the currently stored attention value and the second product of the sixth weight value and the updated attention value. Specifically, the following formula can be used:
$s_2 = e \times s_0 + f \times s_1$

where $s_2$ is the re-updated attention value, $s_0$ is the currently stored attention value, $s_1$ is the once-updated attention value, $e$ is the fifth weight value corresponding to the currently stored attention value, and $f$ is the sixth weight value corresponding to the updated attention value.
The fifth and sixth weight values are both positive numbers less than 1, and their sum is 1. They may be equal or different and can be adjusted flexibly as needed in specific applications. In some embodiments, the fifth weight value may be 0.6 and the sixth weight value 0.4.
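The second update stage is then one more weighted sum, shown here with the example weights $e = 0.6$ and $f = 0.4$ from the paragraph above.

```python
# Sketch: re-update the attention value from the stored value s0 and the
# once-updated value s1 (fifth and sixth weight values summing to 1).
def update_attention_again(s0: float, s1: float,
                           e: float = 0.6, f: float = 0.4) -> float:
    return e * s0 + f * s1

print(update_attention_again(s0=0.2, s1=0.221))  # 0.2084
```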
In order to better obtain the expression change value, on the basis of the above embodiments, in the present application, the training process of the expression recognition network model includes:
and training the expression recognition network model aiming at each sample image in the training set and the first label value of the facial expression corresponding to each sample image.
In the present application, an expression recognition network model is trained in advance, and the training set of the model stores a number of sample images, each containing a facial expression. In order to reflect the account's degree of interest in the target object, several different facial expressions are selected in advance for training the model; they may include excitement, surprise, calm, anger, fear, and so on. A first label value of the facial expression is set for each sample image according to the facial expression it contains; the first label value identifies the facial expression in the sample image, and different facial expressions correspond to different first label values. For example, for excitement, surprise, calm, anger, and fear, the values 00, 01, 02, 03, and 04 respectively may be used as the corresponding first label values.
The expression recognition network model is trained using each sample image in the training set and the first label value of the facial expression corresponding to each sample image.
In some embodiments, facial expressions with obvious differences between them may be selected. The expression recognition network model may include six layers, and its training may be assisted by a loss layer (Softmax Loss).
After the expression recognition network model is trained, a second label value of the facial expression corresponding to the facial expression contained in the image can be output according to the input image.
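A minimal PyTorch sketch of such a classifier is given below: six layers (four convolution/pooling, two fully connected) trained with a softmax (cross-entropy) loss over the five example expression labels. Everything beyond "six layers plus softmax loss" (channel counts, kernel sizes, input size) is an assumption.

```python
# Sketch of a six-layer expression recognition network with softmax loss.
import torch
import torch.nn as nn

class ExpressionNet(nn.Module):
    def __init__(self, num_classes: int = 5):    # excitement..fear -> labels 0..4
        super().__init__()
        self.features = nn.Sequential(            # layers 1-4: conv/pool stack
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(          # layers 5-6: fully connected
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),          # logits; softmax lives in the loss
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = ExpressionNet()
criterion = nn.CrossEntropyLoss()                 # the "softmax loss"
images = torch.randn(8, 3, 64, 64)                # a batch of face crops
labels = torch.randint(0, 5, (8,))                # first label values 0..4
loss = criterion(model(images), labels)
loss.backward()
```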
The object pushing process provided by the present application is described below with a specific embodiment. Fig. 3 is a schematic diagram of object pushing provided by some embodiments of the present application. As shown in fig. 3:
s301: the electronic equipment receives the image of the account acquired and sent by the image acquisition equipment.
S302: and determining the identification information of the account for watching according to the image of the account.
S303: and inquiring the attention value of the attribute category of the target object viewed by the account which stores the identification information.
S304: and determining the concentration value of the account containing the identification information on the target object according to the image of the account containing the identification information.
S305: and acquiring an image acquired within a first time period meeting a first preset condition, and determining the expression change value of the account containing the identification information to the target object according to the image of the account containing the identification information.
S306: and determining the attention degree of the account of the identification information to the target object according to the determined expression change value and the attention value, and determining and storing the updated attention value according to the stored attention value corresponding to the third weight value and the fourth weight value corresponding to the attention degree.
S307: and when the updated attention value meets the pushing condition, acquiring other objects of the attribute category of the target object, and after the target object is played, pushing the information of the other objects to display equipment for displaying.
The specific process by which the electronic device determines the facial features of an account is described below with a specific embodiment:
(1) After the electronic device acquires the image of the account, the image is corrected.
In order to ensure the accuracy of face detection, an end-to-end face detection network model is designed with reference to a deep-learning-based target detection algorithm (Fast R-CNN) and an algorithm for detecting targets in images with a single deep neural network (SSD). At the beginning of the network model, larger sampling steps are set to rapidly reduce the size of the network input image: at the convolutional layer Conv1, the pooling layer Pool1, the convolutional layer Conv2, and the pooling layer Pool2, the strides are set to 4, 2, 2, and 2 respectively, so the input image is reduced by a factor of 32 after the two convolutional and two pooling layers. To balance convolution speed against the loss of useful information, the kernel sizes are designed to be 7 × 7 in Conv1, 5 × 5 in Conv2, and 3 × 3 in all pooling layers. To add a multi-scale design along the width of the network, the face detection network model adopts an Inception module; since the module contains several different convolution branches, perception is further diversified. The loss function uses a two-class softmax loss for classification, i.e., judging whether a region is a face, and a Smooth L1 loss for regression.
After the network model is designed, the face detection network model is obtained through training, and faces in an image are detected based on it to obtain face frame information, which comprises the number of face frames and the position information of each face frame in the image. Fig. 4 is a schematic diagram of face frames detected in an image according to some embodiments of the present application.
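The stem described above can be checked with a few lines of PyTorch: strides of 4, 2, 2, and 2 at Conv1, Pool1, Conv2, and Pool2 give the stated 32x spatial reduction, with 7 × 7, 5 × 5, and 3 × 3 kernels. The channel counts and paddings are assumptions.

```python
# Sketch of the face detection stem: 4 * 2 * 2 * 2 = 32x downsampling.
import torch
import torch.nn as nn

stem = nn.Sequential(
    nn.Conv2d(3, 24, kernel_size=7, stride=4, padding=3),    # Conv1
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),        # Pool1
    nn.Conv2d(24, 48, kernel_size=5, stride=2, padding=2),   # Conv2
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),        # Pool2
)

x = torch.randn(1, 3, 640, 640)
print(stem(x).shape)  # torch.Size([1, 48, 20, 20]); 640 / 32 = 20
```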
After the face frame positions are detected, the features of each face are extracted, and account identities are confirmed or accounts are clustered based on the face features. Through account identity confirmation, the defect that existing methods cannot push accurately for a specific account can be overcome.
The specific method for identifying the account identity is as follows: a face region image is cropped based on the face frame and input into a face key point extraction model to obtain 5 key points — two eyeball points, a nose tip point and two mouth corner points — which are used for face correction. First, a standard face image is set and its five key points are detected; then the face to be recognized, whose 5 key points have likewise been detected, is subjected to a two-dimensional affine transformation comprising a series of operations such as translation, scaling and rotation. With the five key points of the standard face and of the face to be recognized both known, the transformation matrix Q can be solved.
Let $(x_i, y_i)$ be the coordinates of the $i$-th key point on the standard face and $(x_i', y_i')$ the coordinates of the corresponding key point on the face to be recognized. Then:

$$\begin{pmatrix} x_i' \\ y_i' \end{pmatrix} = \begin{pmatrix} a & b & c \\ d & e & f \end{pmatrix} \begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix}$$

The above equation includes the following two equations:

$$x_i' = a x_i + b y_i + c, \qquad y_i' = d x_i + e y_i + f$$

If five key points are selected on the standard face with known coordinates $(x_i, y_i)$, and the corresponding key points on the face to be recognized are located at $(x_i', y_i')$, the elements of the matrix Q can be solved by direct linear transformation: for $n$ key points ($n = 5$) there are $2n$ linear equations in the elements of Q. Recording the 6 unknown elements as the 6-dimensional vector $h = (a, b, c, d, e, f)^T$ and writing the equations in matrix form, the system is abbreviated as

$$K h = U$$

where $K$ is the known $2n \times 6$ coefficient matrix built from the $(x_i, y_i)$, and $U$ is the known $2n$-dimensional vector stacking the $(x_i', y_i')$. The least-squares solution of this linear system is

$$h = (K^T K)^{-1} K^T U$$
In principle, only 3 of the face key points are needed to obtain the vector h, i.e., the transformation matrix; using 5 key points makes the number of equations exceed the number of unknowns, so solving by least squares reduces the influence of measurement errors.
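A minimal sketch of this least-squares solve, assuming NumPy (the key-point coordinates and the helper name solve_affine are illustrative):

```python
import numpy as np

def solve_affine(src_pts, dst_pts):
    """Solve K h = U in the least-squares sense, i.e. h = (K^T K)^{-1} K^T U.

    src_pts: n x 2 key points (x_i, y_i) on the standard face.
    dst_pts: n x 2 corresponding key points (x_i', y_i') on the face to be recognized.
    Returns the 2x3 affine transformation matrix Q.
    """
    n = len(src_pts)
    K = np.zeros((2 * n, 6))
    U = np.zeros(2 * n)
    for i, ((x, y), (xp, yp)) in enumerate(zip(src_pts, dst_pts)):
        K[2 * i]     = [x, y, 1, 0, 0, 0]   # row for x' = a x + b y + c
        K[2 * i + 1] = [0, 0, 0, x, y, 1]   # row for y' = d x + e y + f
        U[2 * i], U[2 * i + 1] = xp, yp
    # lstsq computes the same least-squares solution, more stably than
    # forming (K^T K)^{-1} explicitly.
    h, *_ = np.linalg.lstsq(K, U, rcond=None)
    return h.reshape(2, 3)

# Five key points: two eyes, nose tip, two mouth corners (illustrative coordinates).
src = [(38, 52), (74, 50), (56, 72), (42, 92), (70, 90)]
dst = [(30, 45), (66, 45), (48, 66), (34, 86), (62, 86)]
Q = solve_affine(src, dst)
```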
Fig. 5 is a schematic diagram of a face image before and after correction based on the 5 facial key points according to some embodiments of the present application: the middle face shown in the figure is the standard face, the left face is the face before correction, and the right face is the face after correction.
(2) The face features of the account viewing the object are determined based on a face feature extraction network model.
After the face image is corrected, it is input into a face feature extraction network, which consists of multiple convolutional and pooling layers and aims to extract deep features of the face so as to better distinguish faces of different identities. In the training stage, the face feature extraction network is connected to an A-Softmax layer, which introduces an angular margin in feature space to increase the inter-class distance between faces and reduce the intra-class distance.
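The patent specifies only stacked convolutional and pooling layers with an A-Softmax head attached during training; the following is a hedged sketch of such a feature extraction network in PyTorch, with an illustrative architecture and an L2-normalized embedding so that comparison reduces to cosine similarity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceEmbedder(nn.Module):
    """Illustrative stack of conv + pooling layers producing a face embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, embed_dim)

    def forward(self, x):
        z = self.features(x).flatten(1)
        # L2-normalize so that comparison reduces to cosine similarity;
        # during training an A-Softmax layer would be attached instead.
        return F.normalize(self.fc(z), dim=1)

feat = FaceEmbedder()(torch.randn(1, 3, 112, 112))  # -> (1, 128) unit vector
```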
After extracting the face features with the trained model, the electronic device compares the determined face features (a) of the account with the face features (b) of the accounts in the database stored on the device, obtaining a similarity for each. If a similarity is greater than a similarity threshold, the account is considered to be the same account as the one corresponding to that identification information in the database; otherwise, they are not considered the same account.
When the determined face features (a) of the account are compared with the stored face features (b) in the database, the cosine similarity formula is adopted:
$$\mathrm{sim}(x, y) = \frac{\sum_i x_i y_i}{\sqrt{\sum_i x_i^2}\,\sqrt{\sum_i y_i^2}}$$

where $i$ indexes the feature dimensions, $x$ is face feature a, and $y$ is face feature b.
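A minimal sketch of this comparison, assuming NumPy; the threshold value 0.6 and the helper names are illustrative, not from the patent:

```python
import numpy as np

def cosine_similarity(x, y):
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def match_account(feature_a, database, threshold=0.6):
    """Return the identification info of the best match above threshold, else None."""
    best_id, best_sim = None, threshold
    for account_id, feature_b in database.items():
        sim = cosine_similarity(feature_a, feature_b)
        if sim > best_sim:
            best_id, best_sim = account_id, sim
    return best_id  # None means: treat as a new account
```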
The process of determining the face features of an account in the present application uses prior-art techniques and is not described in further detail here.
Based on the same technical concept, the present application further provides an object pushing apparatus, which can implement the process executed by the electronic device in the foregoing embodiments.
Fig. 6 is a schematic structural diagram of an object pushing apparatus according to some embodiments of the present application, and on the basis of the foregoing embodiments, the present application further provides an object pushing apparatus, where the apparatus includes:
a receiving unit 601, configured to receive an image containing an account viewing the currently displayed target object;

a processing unit 602, configured to determine, according to the image, the identification information of the viewing account; when an attention value of the account of the identification information for the attribute category of the target object is already stored, determine, according to the image of the account containing the identification information, the attention degree of that account to the target object, and update the stored attention value according to the attention degree;

a pushing unit 603, configured to acquire other objects of the attribute category when the updated attention value satisfies the pushing condition, and to push information of the other objects to a display device for display after the target object finishes playing.
In some embodiments, the processing unit 602 is specifically configured to determine, according to the image of the account containing the identification information, an expression change value and/or a concentration value of that account while it views the target object, and to determine the attention degree of the account containing the identification information to the target object according to the determined expression change value and/or concentration value.
In some embodiments, the processing unit 602 is specifically configured to acquire the images collected within a first time period satisfying a first preset condition, obtain, based on a pre-trained expression recognition network model, a second label value of the facial expression of the account of the identification information in each image, determine the number of expression changes of the account from the second label values corresponding to successive images, and determine the expression change value from the number of expression changes and the first time period, as in the sketch below.
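The patent does not give the exact formula combining the number of expression changes with the first time period; the sketch below assumes the simplest reading — changes per unit time — purely for illustration:

```python
def expression_change_value(labels, first_time_period_s):
    """labels: per-image expression label values over the first time period."""
    # Count transitions between consecutive per-frame expression labels.
    changes = sum(1 for prev, cur in zip(labels, labels[1:]) if cur != prev)
    return changes / first_time_period_s  # expression changes per second

print(expression_change_value([0, 0, 1, 1, 2, 1], first_time_period_s=10))  # 0.3
```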
In some embodiments, the processing unit 602 is specifically configured to obtain, based on a pre-trained deflection angle model, a first included angle between the face orientation of the account of the identification information in each image and the plane of the display device; if the first included angles of a plurality of consecutive images all lie within a preset first angle threshold range, the consecutive images are taken as the images collected within a first time period satisfying the first preset condition.
In some embodiments, the processing unit 602 is specifically configured to obtain images within a second time period satisfying a second preset condition, obtain, based on a pre-trained deflection angle model, a second included angle between the face orientation of the account of the identification information in each image and the plane of the display device, judge, for each second included angle, whether it lies within a preset second angle threshold range, and if so, determine that the account of the identification information was viewing the target object at the image acquisition time corresponding to that angle; the concentration value is then determined from the third time period during which the account viewed the target object and the second time period (see the sketch below).
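A minimal sketch of one plausible reading of the concentration value — the ratio of the third time period (time spent watching) to the second time period — with illustrative angle values and threshold:

```python
def concentration_value(angles_deg, frame_interval_s, second_time_period_s,
                        angle_threshold_deg=30.0):
    # Sum the time of frames whose face orientation stays within the
    # second angle threshold range: this is the "third time period".
    watching_s = sum(frame_interval_s for a in angles_deg
                     if abs(a) <= angle_threshold_deg)
    return watching_s / second_time_period_s

print(concentration_value([5, 12, 48, 8, 3], 2.0, 10.0))  # 4 of 5 frames -> 0.8
```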
In some embodiments, the processing unit 602 is specifically configured to divide the resource duration corresponding to the target object into a plurality of time periods according to a preset time interval, and determine each time period as a second time period meeting the second preset condition.
In some embodiments, the processing unit 602 is specifically configured to, when determining the attention degree of the account of the identification information to the target object according to the determined expression change value and concentration value, determine the attention degree of the account of the identification information to the target object according to a first weight value corresponding to the concentration value and a second weight value corresponding to the expression change value.
In some embodiments, the processing unit 602 is specifically configured to determine and store the updated attention value according to a third weight value corresponding to the stored attention value and a fourth weight value corresponding to the attention degree.
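A minimal sketch of the two weighted combinations described in the preceding paragraphs; all weight values are illustrative assumptions, as the patent does not fix them:

```python
def attention_degree(concentration, expression_change, w1=0.6, w2=0.4):
    # First weight value for the concentration value, second for the
    # expression change value.
    return w1 * concentration + w2 * expression_change

def updated_attention_value(stored_value, degree, w3=0.7, w4=0.3):
    # Third weight value for the stored attention value, fourth for the
    # newly determined attention degree.
    return w3 * stored_value + w4 * degree

degree = attention_degree(0.8, 0.3)               # 0.6*0.8 + 0.4*0.3 = 0.60
new_value = updated_attention_value(0.5, degree)  # 0.7*0.5 + 0.3*0.6 = 0.53
```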
In some embodiments, the processing unit 602 is specifically configured to train the expression recognition network model using each sample image in the training set and the first label value of the facial expression corresponding to each sample image.
For the concepts, explanations, detailed descriptions and other steps related to the object pushing device in the present application, please refer to the descriptions of the foregoing methods or other embodiments, which are not repeated herein.
Fig. 7 is a schematic structural diagram of an electronic device according to some embodiments of the present application, and on the basis of the foregoing embodiments, the present application further provides an electronic device, including: the system comprises a processor 701, a communication interface 702, a memory 703 and a communication bus 704, wherein the processor 701, the communication interface 702 and the memory 703 complete mutual communication through the communication bus 704;
the memory 703 stores a computer program which, when executed by the processor 701, causes the processor 701 to perform the steps of the method described above, so that the electronic device implements the corresponding functions.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface 702 is used for communication between the above-described electronic apparatus and other apparatuses.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Alternatively, the memory may be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a central processing unit, a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, or discrete hardware components.
On the basis of the foregoing embodiments, the present application provides a computer-readable storage medium storing a computer program executable by an electronic device; when run, the computer-executable instructions cause the computer to execute the procedures of the method described above.
The computer readable storage medium may be any available medium or data storage device that can be accessed by a processor in an electronic device, including but not limited to magnetic memory such as floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc., optical memory such as CDs, DVDs, BDs, HVDs, etc., and semiconductor memory such as ROMs, EPROMs, EEPROMs, non-volatile memory (NAND FLASH), Solid State Disks (SSDs), etc.
Fig. 8 is a schematic structural diagram of an object pushing system according to some embodiments of the present application, and on the basis of the foregoing embodiments, the present application further provides an object pushing system, including: the object pushing apparatus 100 applied to the electronic device and the display device 200 for displaying the target object and other objects in any of the above embodiments are provided.
The object pushing apparatus 100 is configured to receive an image containing an account viewing the currently displayed target object; determine, according to the image, the identification information of the viewing account; when an attention value of the account of the identification information for the attribute category of the target object is already stored, determine, according to the image of the account containing the identification information, the attention degree of that account to the target object, and update the stored attention value according to the attention degree; and when the updated attention value satisfies the pushing condition, acquire other objects of the attribute category and, after the target object finishes playing, push information of the other objects to the display device 200 for display.
For a detailed description of the object pushing apparatus 100, reference is made to the above description, which is not repeated herein.
The display device 200 is a prior-art device for displaying the target object and other objects, and is not described in further detail here.
According to the method and apparatus of the present application, the attention degree of an account to a target object can be determined from the identification information of the account and the images containing that account; the stored attention value of the attribute category of the target object is updated according to the attention degree, and other objects of the same attribute category can then be pushed to the account when the updated attention value satisfies the pushing condition, so that objects are pushed to the account quickly and accurately.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. An object pushing method, characterized in that the method comprises:
receiving an image containing an account for viewing a currently displayed target object;
determining, according to the image, identification information of the viewing account; when an attention value of the account of the identification information for the attribute category of the target object is stored, determining, according to the image of the account containing the identification information, the attention degree of the account containing the identification information to the target object, and updating the stored attention value according to the attention degree;
and when the updated attention value meets the pushing condition, acquiring other objects of the attribute category, and after the target object is played, pushing the information of the other objects to display equipment for displaying.
2. The method according to claim 1, wherein the determining the attention degree of the account containing the identification information to the target object according to the image of the account containing the identification information comprises:
and determining an expression change value and/or a concentration value of the account containing the identification information when the target object is viewed by the account containing the identification information according to the image of the account containing the identification information, and determining the attention degree of the account containing the identification information to the target object according to the determined expression change value and/or concentration value.
3. The method of claim 2, wherein the determining, according to the image of the account containing the identification information, the expression change value of the account containing the identification information when the target object is viewed comprises:
the method comprises the steps of obtaining images collected within a first time period meeting a first preset condition, obtaining a second label value of facial expression of an account of identification information in each image based on an expression recognition network model which is trained in advance, determining the number of times of expression change of the account according to the second label value of the facial expression corresponding to each image, and determining the expression change value according to the number of times of expression change and the first time period.
4. The method according to claim 3, wherein the acquiring images acquired within a first time period satisfying a first preset condition comprises:
acquiring, based on a pre-trained deflection angle model, a first included angle between the face orientation of the account of the identification information in each image and the plane of the display device, and if the first included angles of a plurality of consecutive images all lie within a preset first angle threshold range, taking the plurality of consecutive images as the images acquired within a first time period meeting the first preset condition.
5. The method of claim 2, wherein determining the concentration value of the account of the identification information when viewing the target object according to the image of the account containing the identification information comprises:
acquiring images within a second time period meeting the second preset condition, acquiring, based on a pre-trained deflection angle model, a second included angle between the face orientation of the account of the identification information in each image and the plane of the display device, judging, for each second included angle, whether it lies within a preset second angle threshold range, and if so, determining that the account of the identification information was viewing the target object at the image acquisition time corresponding to that second included angle, and determining the concentration value according to a third time period during which the account of the identification information viewed the target object and the second time period.
6. The method of claim 5, wherein determining a second time period that satisfies a second preset condition comprises:
and dividing the resource duration corresponding to the target object into a plurality of time periods according to a preset time interval, and determining each time period as a second time period meeting a second preset condition.
7. The method according to any one of claims 2-6, wherein the determining the interest level of the account of the identification information on the target object according to the determined expression change value and/or concentration value comprises:
and when the attention degree of the account of the identification information to the target object is determined according to the determined expression change value and the concentration value, determining the attention degree of the account of the identification information to the target object according to a first weight value corresponding to the concentration value and a second weight value corresponding to the expression change value.
8. The method of claim 1, wherein updating the saved attention value according to the attention comprises:
and determining and storing the updated attention value according to the third weight value corresponding to the stored attention value and the fourth weight value corresponding to the attention degree.
9. The method of claim 3, wherein the training process for the expression recognition network model comprises:
and training the expression recognition network model aiming at each sample image in the training set and the first label value of the facial expression corresponding to each sample image.
10. An electronic device, characterized in that the electronic device comprises at least a processor and a memory, the processor being configured to implement the steps of the object pushing method according to any of claims 1-9 when executing a computer program stored in the memory.
CN202010345804.0A 2020-04-27 2020-04-27 Object pushing method and device Pending CN113472834A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010345804.0A CN113472834A (en) 2020-04-27 2020-04-27 Object pushing method and device

Publications (1)

Publication Number Publication Date
CN113472834A true CN113472834A (en) 2021-10-01

Family

ID=77865844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010345804.0A Pending CN113472834A (en) 2020-04-27 2020-04-27 Object pushing method and device

Country Status (1)

Country Link
CN (1) CN113472834A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115129988A (en) * 2022-06-29 2022-09-30 北京达佳互联信息技术有限公司 Information acquisition method and device, electronic equipment and storage medium
CN117499477A (en) * 2023-11-16 2024-02-02 北京易华录信息技术股份有限公司 Information pushing method and system based on large model training

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140053105A1 (en) * 2011-10-19 2014-02-20 Panasonic Corporation Display control device, integrated circuit, and display control method
CN106056405A (en) * 2016-05-27 2016-10-26 上海青研科技有限公司 Advertisement directional-pushing technology based on virtual reality visual interest area
CN108345676A (en) * 2018-02-11 2018-07-31 广东欧珀移动通信有限公司 Information-pushing method and Related product
CN108921585A (en) * 2018-05-15 2018-11-30 北京七鑫易维信息技术有限公司 A kind of advertisement sending method, device, equipment and storage medium
CN109003135A (en) * 2018-07-20 2018-12-14 云南航伴科技有限公司 Intelligent advertisement matching supplying system and method based on recognition of face
CN109559193A (en) * 2018-10-26 2019-04-02 深圳壹账通智能科技有限公司 Product method for pushing, device, computer equipment and the medium of intelligent recognition
CN110321477A (en) * 2019-05-24 2019-10-11 平安科技(深圳)有限公司 Information recommendation method, device, terminal and storage medium
CN110633664A (en) * 2019-09-05 2019-12-31 北京大蛋科技有限公司 Method and device for tracking attention of user based on face recognition technology
CN110971659A (en) * 2019-10-11 2020-04-07 贝壳技术有限公司 Recommendation message pushing method and device and storage medium
CN110989846A (en) * 2020-03-04 2020-04-10 支付宝(杭州)信息技术有限公司 Information processing method, device, equipment and medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211001