CN117499477A - Information pushing method and system based on large model training - Google Patents

Publication number
CN117499477A
CN117499477A · CN117499477B
Authority
CN
China
Prior art keywords: orientation, target, face, user, face orientation
Prior art date
Legal status
Granted
Application number
CN202311523268.9A
Other languages
Chinese (zh)
Other versions
CN117499477B (en)
Inventor
史晓蒙
吕晓鹏
魏健康
张伟
田其鹏
周亮
郝维佳
倪志云
王凌
赵阳
Current Assignee
Beijing E Hualu Information Technology Co Ltd
Original Assignee
Beijing E Hualu Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing E Hualu Information Technology Co Ltd filed Critical Beijing E Hualu Information Technology Co Ltd
Priority to CN202311523268.9A
Publication of CN117499477A
Application granted; publication of CN117499477B
Status: Active


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/55: Push-based network services
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of information pushing, and in particular discloses an information pushing method and system based on large model training. The method comprises the following steps: based on information-collection permission granted by a user, capturing state photos of the user while the user uses a target APP to obtain a user state picture set, wherein each state photo carries its shooting time; extracting a target picture set from the user state picture set according to target push information; analyzing and recognizing the target picture set to determine the user's attention level; and adjusting the next push level of the target push information according to that attention level. The invention monitors the user's attention level in real time and adjusts the next push level accordingly, which avoids disturbing and alienating users with low attention while enabling deeper pushing to users with high attention, thereby meeting users' attention needs.

Description

Information pushing method and system based on large model training
Technical Field
The invention relates to the technical field of information pushing, in particular to an information pushing method and system based on large model training.
Background
With the rapid development of artificial intelligence technology, large model training is becoming an important research direction in the field of artificial intelligence. Large model training refers to training a deep learning model using large-scale data sets and computational resources, thereby improving the accuracy and generalization ability of the model.
At present, information pushing modes include SMS push, telephone push, official-account push, in-APP push, and the like. With the wide adoption of the Internet, computers, and mobile phones, in-APP pushing has become one of the important means of marketing promotion. Fixed advertisement push information is usually interleaved while a user watches video; it cannot be skipped, and the pushed information is the same content for all users. This provokes dislike in users who are not interested in the pushed information, while failing to push more deeply to users who are interested.
Disclosure of Invention
The invention aims to provide an information pushing method and system based on large model training, so as to solve the problems in the background technology.
In order to achieve the above purpose, the present invention provides the following technical solutions:
an information pushing method based on large model training, the method comprising the following steps:
based on information acquisition permission granted by a user, acquiring a state photo of the user when using a target APP, and obtaining a user state picture set, wherein the state photo carries shooting time;
extracting a target picture set from the user state picture set according to target push information, wherein the target picture set comprises an initial-state picture set, an ongoing-state picture set and an end-state picture set;
analyzing and identifying the target picture set, and confirming the attention level of the user;
and adjusting the next pushing level of the target pushing information according to the user attention level.
As a further technical solution of the present invention, the step of extracting the target picture set from the user state picture set according to the target push information comprises:
acquiring the push time of the target push information, and generating a target picture acquisition period centered on that push time, wherein the target picture acquisition period comprises an initial period, an ongoing period and an end period;
traversing the initial period, the ongoing period and the end period in sequence based on the shooting time carried by each state photo;
inserting a state photo into the initial-state picture set when its shooting time falls within the initial period;
inserting a state photo into the ongoing-state picture set when its shooting time falls within the ongoing period;
and inserting a state photo into the end-state picture set when its shooting time falls within the end period.
As a further technical solution of the present invention, the step of analyzing and recognizing the target picture set and determining the user's attention level comprises:
recognizing the pictures in the target picture set based on an image recognition technique, and acquiring the user's initial face orientation set, ongoing face orientation set and end face orientation set, wherein the face orientation categories comprise face up, face down, face left, face right, face forward and no face;
counting the actual ratio of each face orientation category within the initial face orientation set, the ongoing face orientation set and the end face orientation set respectively;
determining the habitual face orientation and habitual orientation rate according to the initial face orientation set and the end face orientation set;
acquiring the sum of the actual ratios of the habitual face orientation categories within the ongoing face orientation set, which is called the actual orientation rate;
obtaining an orientation deviation value from the actual orientation rate and the habitual orientation rate, wherein the orientation deviation value is the absolute value of the difference between the actual orientation rate and the habitual orientation rate;
and comparing the orientation deviation value with an attention level table to match the user's attention level, wherein the attention level table contains a plurality of attention levels, each corresponding to a preset range of deviation values.
As a further technical solution of the present invention, the step of determining the habitual face orientation and the habitual orientation rate according to the initial face orientation set and the end face orientation set comprises:
taking the single face orientation with the maximum actual ratio in the initial face orientation set as the first habitual face orientation, and its ratio as the first habitual orientation rate;
taking the single face orientation with the maximum actual ratio in the end face orientation set as the second habitual face orientation, and its ratio as the second habitual orientation rate;
when the first and second habitual face orientations are the same, confirming that the habitual face orientation has not changed, taking the first habitual face orientation as the habitual face orientation, and taking the average of the first and second habitual orientation rates as the habitual orientation rate;
when the first and second habitual face orientations differ, confirming that the habitual face orientation has changed, taking both as habitual face orientations, and taking the average of the first and second habitual orientation rates as the habitual orientation rate.
As a further technical solution of the present invention, the step of adjusting the push level of the target push information according to the user's attention level comprises:
traversing a push information grading table based on the attention level and matching the next push level of the target push information, wherein push levels and attention levels are paired in the push information grading table, the push levels comprise deep push, medium push and simple push, and each push level of the target push information corresponds to a different push duration;
and pushing the target push information according to the matched next push level.
Another object of an embodiment of the present invention is to provide an information pushing system based on large model training, the system comprising:
a picture acquisition module, configured to capture state photos of a user while the user uses a target APP, based on the information-collection permission granted by the user, to obtain a user state picture set, wherein each state photo carries its shooting time;
a picture extraction module, configured to extract a target picture set from the user state picture set according to target push information, wherein the target picture set comprises an initial-state picture set, an ongoing-state picture set and an end-state picture set;
a picture analysis module, configured to analyze and recognize the target picture set and determine the user's attention level;
and a push adjustment module, configured to adjust the next push level of the target push information according to the user's attention level.
As a further technical solution of the present invention, the picture extraction module comprises:
a target period generation unit, configured to acquire the push time of the target push information and generate a target picture acquisition period centered on that push time, wherein the target picture acquisition period comprises an initial period, an ongoing period and an end period that are connected in sequence;
a shooting time traversal unit, configured to traverse the initial period, the ongoing period and the end period in sequence based on the shooting time carried by each state photo;
a first picture matching unit, configured to insert a state photo into the initial-state picture set when its shooting time falls within the initial period;
a second picture matching unit, configured to insert a state photo into the ongoing-state picture set when its shooting time falls within the ongoing period;
and a third picture matching unit, configured to insert a state photo into the end-state picture set when its shooting time falls within the end period.
As a further technical solution of the present invention, the picture analysis module comprises:
a face orientation recognition unit, configured to recognize the pictures in the target picture set based on an image recognition technique and acquire the user's initial face orientation set, ongoing face orientation set and end face orientation set, wherein the face orientation categories comprise face up, face down, face left, face right, face forward and no face;
a face orientation ratio statistics unit, configured to count the actual ratio of each face orientation category within the initial face orientation set, the ongoing face orientation set and the end face orientation set respectively;
a habitual orientation determination unit, configured to determine the habitual face orientation and habitual orientation rate according to the initial face orientation set and the end face orientation set;
an actual orientation rate statistics unit, configured to acquire the sum of the actual ratios of the habitual face orientation categories within the ongoing face orientation set, which is called the actual orientation rate;
an orientation deviation value calculation unit, configured to obtain an orientation deviation value from the actual orientation rate and the habitual orientation rate, wherein the orientation deviation value is the absolute value of the difference between the actual orientation rate and the habitual orientation rate;
and a user attention matching unit, configured to compare the orientation deviation value with an attention level table and match the user's attention level, wherein the attention level table contains a plurality of attention levels, each corresponding to a preset range of deviation values.
Compared with the prior art, the invention has the following beneficial effects. The information pushing method and system based on large model training acquire the user's state photos in the periods before, during and after information pushing. By analyzing the target pictures with an image recognition technique, the habitual face orientation and habitual orientation rate when the user is not watching pushed information can be obtained, as can the actual orientation rate while the pushed information plays; from the habitual orientation rate and the actual orientation rate, the user's attention level toward the pushed information is calculated, and the next push level of the pushed information is adjusted accordingly. This realizes real-time monitoring of the user's attention level: it avoids disturbing and alienating users with low attention, while enabling deeper pushing to users with high attention, thereby meeting users' attention needs.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the following description will briefly introduce the drawings that are needed in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the present invention.
Fig. 1 is a flow diagram of an information pushing method based on large model training.
Fig. 2 is a flow chart of the steps of picture classification in the information push method based on large model training.
FIG. 3 is a flow chart of the steps for analyzing a user's attention level in a large model training based information push method.
Fig. 4 is a block diagram of a large model training based information pushing system.
Fig. 5 is a block diagram of a picture analysis module in the information pushing system based on large model training.
Detailed Description
In order to make the technical problems, technical schemes and beneficial effects to be solved more clear, the invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, an embodiment of the present invention provides an information pushing method based on large model training, where the method includes the following steps:
Step S100, based on information-collection permission granted by a user, capturing state photos of the user while the user uses a target APP to obtain a user state picture set, wherein each state photo carries its shooting time, the target APP includes an information pushing function, and the target APP may be specific video-viewing software such as Tencent Video or Mango TV;
Step S200, extracting a target picture set from the user state picture set according to target push information, wherein the target picture set comprises an initial-state picture set, an ongoing-state picture set and an end-state picture set, and the target push information comprises a push level, a push time and a push information body corresponding to the push level;
Step S300, analyzing and recognizing the target picture set to determine the user's attention level;
Step S400, adjusting the next push level of the target push information according to the user's attention level.
The information pushing method provided by the embodiment of the invention mainly targets the advertisement pushing that users encounter while watching videos, reading novels, and the like. The push depth of such advertisements is generally the same for all users, which cannot meet individual user needs.
As shown in fig. 2, as a preferred embodiment of the present invention, the step of extracting the target picture set from the user state picture set according to the target push information comprises:
Step S201, acquiring the push time of the target push information, and generating a target picture acquisition period centered on that push time, wherein the target picture acquisition period comprises an initial period, an ongoing period and an end period that are connected in sequence, corresponding to before, during and after the information push respectively;
Step S202, traversing the initial period, the ongoing period and the end period in sequence based on the shooting time carried by each state photo;
Step S203, inserting a state photo into the initial-state picture set when its shooting time falls within the initial period;
Step S204, inserting a state photo into the ongoing-state picture set when its shooting time falls within the ongoing period;
Step S205, inserting a state photo into the end-state picture set when its shooting time falls within the end period;
in addition, when the shooting time carried by a state photo matches none of the initial period, the ongoing period and the end period, the state photo is inserted into a standby data set.
In this embodiment, the initial period, the ongoing period and the end period are traversed in sequence using the shooting time carried by each state photo, thereby classifying the state photos.
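The timestamp-based classification of steps S201 through S205 can be sketched as follows. This is a minimal illustration only, not the patented implementation; the `StatusPhoto` type, the 30-second default length for the initial and end periods, and all names are assumptions introduced for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class StatusPhoto:
    path: str
    shot_at: datetime  # the shooting time carried by the state photo

def classify_photos(photos, push_start, push_end, margin=timedelta(seconds=30)):
    """Split photos into initial / ongoing / end sets around a push window.

    The initial period precedes the push, the ongoing period is the push
    itself, and the end period follows it; photos matching none of the three
    go to a standby set, as the embodiment describes.
    """
    initial, ongoing, end, standby = [], [], [], []
    for photo in photos:
        if push_start - margin <= photo.shot_at < push_start:
            initial.append(photo)
        elif push_start <= photo.shot_at <= push_end:
            ongoing.append(photo)
        elif push_end < photo.shot_at <= push_end + margin:
            end.append(photo)
        else:
            standby.append(photo)
    return initial, ongoing, end, standby
```

Because the three periods are connected in sequence, a single pass over the timestamped photos is enough to populate all three sets.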
As shown in fig. 3, as a preferred embodiment of the present invention, the step of analyzing and recognizing the target picture set and determining the user's attention level comprises:
Step S301, recognizing the pictures in the target picture set based on an image recognition technique, and acquiring the user's initial face orientation set, ongoing face orientation set and end face orientation set, wherein the face orientation categories comprise face up, face down, face left, face right, face forward and no face, and the face orientation refers to the direction of the face relative to the screen of the device running the target APP;
Step S302, counting the actual ratio of each face orientation category within the initial face orientation set, the ongoing face orientation set and the end face orientation set respectively, the actual ratio being the orientation rate of each face orientation;
Step S303, determining the habitual face orientation and habitual orientation rate according to the initial face orientation set and the end face orientation set;
Step S304, acquiring the sum of the actual ratios of the habitual face orientation categories within the ongoing face orientation set, which is called the actual orientation rate;
Step S305, obtaining an orientation deviation value from the actual orientation rate and the habitual orientation rate, wherein the orientation deviation value is the absolute value of the difference between the actual orientation rate and the habitual orientation rate;
Step S306, comparing the orientation deviation value with an attention level table to match the user's attention level, wherein the attention level table contains a plurality of attention levels, each corresponding to a preset range of deviation values.
In this embodiment, face orientation recognition over the target picture set is implemented with an image recognition technique, so that the face orientations in the initial, ongoing and end periods can be recognized separately. Monitoring the face orientations and orientation rates before, during and after the target push information is shown allows the orientation deviation value to be calculated, from which the user's attention level toward the target push information is obtained.
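Steps S302, S304 and S305 amount to simple ratio arithmetic over the recognized orientation labels. A hedged sketch follows; the function names and the label strings are invented for illustration and are not defined by the patent.

```python
from collections import Counter

# The six face orientation categories named in the embodiment.
ORIENTATIONS = ["up", "down", "left", "right", "forward", "no_face"]

def orientation_ratios(labels):
    """Actual ratio of each orientation category within one picture set (step S302)."""
    if not labels:
        return {o: 0.0 for o in ORIENTATIONS}
    counts = Counter(labels)
    return {o: counts.get(o, 0) / len(labels) for o in ORIENTATIONS}

def actual_orientation_rate(ongoing_ratios, habitual_orientations):
    """Sum of the ongoing-set ratios of the habitual orientation categories (step S304)."""
    return sum(ongoing_ratios.get(o, 0.0) for o in habitual_orientations)

def orientation_deviation(actual_rate, habitual_rate):
    """Absolute difference between actual and habitual orientation rates (step S305)."""
    return abs(actual_rate - habitual_rate)
```

Each picture set yields one ratio dictionary, so the three sets produced in step S301 feed directly into steps S303 through S305.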
As a preferred embodiment of the present invention, the step of determining the habitual face orientation and the habitual orientation rate according to the initial face orientation set and the end face orientation set comprises:
taking the single face orientation with the maximum actual ratio in the initial face orientation set as the first habitual face orientation, and its ratio as the first habitual orientation rate;
taking the single face orientation with the maximum actual ratio in the end face orientation set as the second habitual face orientation, and its ratio as the second habitual orientation rate;
when the first and second habitual face orientations are the same, confirming that the habitual face orientation has not changed, taking the first habitual face orientation as the habitual face orientation, and taking the average of the first and second habitual orientation rates as the habitual orientation rate;
when the first and second habitual face orientations differ, confirming that the habitual face orientation has changed, taking both as habitual face orientations, and taking the average of the first and second habitual orientation rates as the habitual orientation rate.
This embodiment thus realizes the statistics of the habitual face orientation and habitual orientation rate.
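Under these rules, the habitual orientation computation reduces to an argmax over each boundary set plus an average. A sketch under the assumption that the per-set ratios are supplied as dictionaries (as computed in step S302); the function name is an invention for the example.

```python
def habitual_orientation(initial_ratios, end_ratios):
    """Derive the habitual face orientation(s) and the habitual orientation rate.

    The dominant orientation of the initial set is the first habitual
    orientation, that of the end set the second. If they coincide, the habit
    is unchanged and only that orientation is habitual; otherwise both count
    as habitual. Either way, the habitual orientation rate is the average of
    the two dominant ratios.
    """
    first_o, first_r = max(initial_ratios.items(), key=lambda kv: kv[1])
    second_o, second_r = max(end_ratios.items(), key=lambda kv: kv[1])
    rate = (first_r + second_r) / 2
    orientations = {first_o} if first_o == second_o else {first_o, second_o}
    return orientations, rate
```

Returning the habitual orientations as a set lets step S304 sum the ongoing-set ratios over one or two categories without a special case.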
As a preferred embodiment of the present invention, the step of adjusting the push level of the target push information according to the user's attention level comprises:
traversing a push information grading table based on the attention level and matching the next push level of the target push information, wherein push levels and attention levels are paired in the push information grading table, the push levels comprise deep push, medium push and simple push, and each push level of the target push information corresponds to a push information body with a different push duration; in the invention, the push information body corresponding to each push level is preset content;
and pushing the target push information according to the matched next push level.
In the existing processes of watching videos and reading novels, the push depth of the advertisement information encountered is generally the same for different users; in particular, the push depth and push level of the advertisement information are generally fixed across different episodes of a television series or different chapters of a novel.
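The grading-table lookup can be sketched as a banded threshold match. The concrete thresholds, the three-level attention scale, and the direction of the mapping (a larger deviation from habit being read as higher attention) are all assumptions for illustration; the patent only states that each attention level corresponds to a preset range of deviation values and that the push levels are deep, medium and simple.

```python
# Illustrative attention level table: (deviation upper bound, attention level).
ATTENTION_BANDS = [(0.10, 1), (0.30, 2), (1.00, 3)]

# Illustrative push information grading table: attention level -> next push level.
PUSH_GRADING = {1: "simple", 2: "medium", 3: "deep"}

def attention_level(deviation):
    """Match the orientation deviation value against the attention level table."""
    for upper_bound, level in ATTENTION_BANDS:
        if deviation <= upper_bound:
            return level
    return ATTENTION_BANDS[-1][1]

def next_push_level(deviation):
    """Traverse the grading table and return the matched next push level."""
    return PUSH_GRADING[attention_level(deviation)]
```

With this shape, tuning the scheme is a matter of editing the two tables rather than the matching code.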
As shown in fig. 4, another object of an embodiment of the present invention is to provide an information pushing system based on large model training, the system comprising:
a picture acquisition module 100, configured to capture state photos of a user while the user uses a target APP, based on the information-collection permission granted by the user, to obtain a user state picture set, wherein each state photo carries its shooting time;
a picture extraction module 200, configured to extract a target picture set from the user state picture set according to target push information, wherein the target picture set comprises an initial-state picture set, an ongoing-state picture set and an end-state picture set;
a picture analysis module 300, configured to analyze and recognize the target picture set and determine the user's attention level;
and a push adjustment module 400, configured to adjust the next push level of the target push information according to the user's attention level.
As a preferred embodiment of the present invention, the picture extraction module 200 comprises:
a target period generation unit, configured to acquire the push time of the target push information and generate a target picture acquisition period centered on that push time, wherein the target picture acquisition period comprises an initial period, an ongoing period and an end period that are connected in sequence;
a shooting time traversal unit, configured to traverse the initial period, the ongoing period and the end period in sequence based on the shooting time carried by each state photo;
a first picture matching unit, configured to insert a state photo into the initial-state picture set when its shooting time falls within the initial period;
a second picture matching unit, configured to insert a state photo into the ongoing-state picture set when its shooting time falls within the ongoing period;
and a third picture matching unit, configured to insert a state photo into the end-state picture set when its shooting time falls within the end period.
As shown in fig. 5, as a preferred embodiment of the present invention, the picture analysis module 300 comprises:
a face orientation recognition unit 301, configured to recognize the pictures in the target picture set based on an image recognition technique and acquire the user's initial face orientation set, ongoing face orientation set and end face orientation set, wherein the face orientation categories comprise face up, face down, face left, face right, face forward and no face;
a face orientation ratio statistics unit 302, configured to count the actual ratio of each face orientation category within the initial face orientation set, the ongoing face orientation set and the end face orientation set respectively;
a habitual orientation determination unit 303, configured to determine the habitual face orientation and habitual orientation rate according to the initial face orientation set and the end face orientation set;
an actual orientation rate statistics unit 304, configured to acquire the sum of the actual ratios of the habitual face orientation categories within the ongoing face orientation set, which is called the actual orientation rate;
an orientation deviation value calculation unit 305, configured to obtain an orientation deviation value from the actual orientation rate and the habitual orientation rate, wherein the orientation deviation value is the absolute value of the difference between the actual orientation rate and the habitual orientation rate;
and a user attention matching unit 306, configured to compare the orientation deviation value with an attention level table and match the user's attention level, wherein the attention level table contains a plurality of attention levels, each corresponding to a preset range of deviation values.
The functions realizable by the information pushing method based on large model training are all performed by computer equipment comprising one or more processors and one or more memories, in which at least one program code is stored; the program code is loaded and executed by the one or more processors to implement the functions of the information pushing method based on large model training.
The processor fetches instructions from the memory one by one, decodes them, and completes the corresponding operation according to each instruction, generating a series of control commands that make all parts of the computer act automatically, continuously and cooperatively as an organic whole, thereby realizing program input, data input, computation and result output; the arithmetic and logic operations arising in this process are completed by the arithmetic unit. The memory includes a Read-Only Memory (ROM) for storing a computer program, and a protection device is arranged outside the memory.
For example, the computer program may be partitioned into one or more modules that are stored in the memory and executed by the processor to carry out the present invention. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program in the terminal device.
Those skilled in the art will appreciate that the foregoing description of the service device is merely an example and is not limiting; the device may include more or fewer components than described, combine certain components, or use different components, and may, for example, include input/output devices, network access devices, buses, and the like.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the terminal device and connects the various parts of the entire user terminal through various interfaces and lines.
The memory may be used to store computer programs and/or modules; the processor implements the various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required by at least one function (such as an information acquisition template display function or a product information release function); the data storage area may store data created through use of the system (such as product information acquisition templates corresponding to different product types, or the product information to be released by different product providers). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
If the modules/units integrated in the terminal device are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the modules/units in the systems of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the functions of the respective system embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit its scope; any equivalent structure or equivalent process transformation made using the contents of this specification, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (8)

1. An information pushing method based on large model training, characterized by comprising the following steps:
based on an information acquisition permission granted by a user, acquiring state photos of the user while the user uses a target APP to obtain a user state picture set, wherein each state photo carries its shooting time, and the target APP includes an information pushing function;
extracting a target picture set from the user state picture set according to target push information, wherein the target picture set comprises an initial state picture set, an in-progress state picture set, and an end state picture set, and the target push information comprises a push level, a push time, and a push information body corresponding to the push level;
analyzing and identifying the target picture set to confirm the user's attention level;
adjusting the next push level of the target push information according to the user's attention level.
2. The information pushing method based on large model training according to claim 1, wherein the step of extracting a target picture set from the user state picture set according to the target push information comprises:
acquiring the push time of the target push information and generating a target picture acquisition period centered on that push time, wherein the target picture acquisition period comprises an initial period, an in-progress period, and an end period;
traversing the initial period, the in-progress period, and the end period in sequence based on the shooting time carried by each state photo;
inserting a state photo into the initial state picture set when its shooting time falls within the initial period;
inserting a state photo into the in-progress state picture set when its shooting time falls within the in-progress period;
inserting a state photo into the end state picture set when its shooting time falls within the end period.
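The bucketing of claim 2 can be sketched in Python as follows. The patent does not disclose the lengths of the three periods, so the window sizes, function names, and data layout below are illustrative assumptions only.

```python
# Hypothetical sketch of claim 2: sorting status photos into the
# initial / in-progress / end picture sets by shooting time.
# `half_window` and `core` are invented parameters: the acquisition
# period is centered on the push time, the middle 2*core seconds form
# the in-progress period, and the remainder splits into initial and
# end periods.
from dataclasses import dataclass, field

@dataclass
class PeriodBuckets:
    initial: list = field(default_factory=list)
    in_progress: list = field(default_factory=list)
    end: list = field(default_factory=list)

def bucket_photos(photos, push_time, half_window=60.0, core=20.0):
    """photos: iterable of (shoot_time, photo) pairs; times in seconds."""
    buckets = PeriodBuckets()
    for t, photo in photos:
        if push_time - half_window <= t < push_time - core:
            buckets.initial.append(photo)          # before the push
        elif push_time - core <= t <= push_time + core:
            buckets.in_progress.append(photo)      # around the push
        elif push_time + core < t <= push_time + half_window:
            buckets.end.append(photo)              # after the push
    return buckets
```

Photos outside the acquisition period are simply dropped, matching the claim's extraction of only the target picture set.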
3. The information pushing method based on large model training according to claim 2, wherein the step of analyzing and identifying the target picture set and confirming the user's attention level comprises:
identifying the pictures in the target picture set respectively based on an image recognition technique to obtain the user's initial face orientation set, in-progress face orientation set, and end face orientation set, wherein the face orientation categories comprise face up, face down, face left, face right, face forward, and no face;
counting the actual proportion of each face orientation category in the initial face orientation set, the in-progress face orientation set, and the end face orientation set respectively;
confirming the habitual face orientation and the habitual orientation rate according to the initial face orientation set and the end face orientation set;
acquiring the sum of the actual proportions corresponding to the habitual face orientation categories in the in-progress face orientation set, referred to as the actual orientation rate;
obtaining an orientation deviation value from the actual orientation rate and the habitual orientation rate, wherein the orientation deviation value is the absolute value of the difference between the actual orientation rate and the habitual orientation rate;
comparing the orientation deviation value against an attention level table and matching the user's attention level, wherein the attention level table defines a plurality of attention levels, each corresponding to a preset range of deviation values.
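The proportion-counting step of claim 3 amounts to a per-category frequency table. The following Python sketch is illustrative only (the function name and representation of orientation labels are invented, not disclosed in the patent):

```python
# Hypothetical sketch of claim 3's statistics step: the actual
# proportion of each face-orientation category within one set.
from collections import Counter

def orientation_ratios(orientations):
    """orientations: list of category labels, e.g. 'forward', 'left',
    'none', one per recognized picture. Returns {category: proportion}."""
    counts = Counter(orientations)
    total = len(orientations)
    return {category: n / total for category, n in counts.items()}
```

Applied separately to the initial, in-progress, and end face orientation sets, this yields the three proportion tables the later steps consume.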
4. The information pushing method based on large model training according to claim 3, wherein the step of confirming the habitual face orientation and the habitual orientation rate according to the initial face orientation set and the end face orientation set comprises:
acquiring the single face orientation with the maximum actual proportion in the initial face orientation set as the first habitual face orientation, with its proportion as the first habitual orientation rate;
acquiring the single face orientation with the maximum actual proportion in the end face orientation set as the second habitual face orientation, with its proportion as the second habitual orientation rate;
when the first habitual face orientation and the second habitual face orientation are the same, confirming that the habitual face orientation has not changed, determining the first habitual face orientation to be the habitual face orientation, and taking the habitual orientation rate as the average of the first habitual orientation rate and the second habitual orientation rate;
when the first habitual face orientation and the second habitual face orientation differ, confirming that the habitual face orientation has changed, determining both the first habitual face orientation and the second habitual face orientation to be habitual face orientations, and taking the habitual orientation rate as the average of the first habitual orientation rate and the second habitual orientation rate.
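The two branches of claim 4 can be sketched directly in Python. Function and variable names are invented for illustration; the inputs are assumed to be the proportion tables from the statistics step.

```python
# Hypothetical sketch of claim 4: deriving the habitual face
# orientation(s) and the habitual orientation rate from the initial
# and end face-orientation proportion tables.
def habitual_orientation(initial_ratios: dict, end_ratios: dict):
    """Each argument maps a face-orientation category (e.g. 'forward')
    to its actual proportion in the corresponding set. Returns
    (list of habitual orientations, habitual orientation rate)."""
    first_cat, first_rate = max(initial_ratios.items(), key=lambda kv: kv[1])
    second_cat, second_rate = max(end_ratios.items(), key=lambda kv: kv[1])
    rate = (first_rate + second_rate) / 2  # average in both branches
    if first_cat == second_cat:
        # Habit unchanged: a single habitual orientation.
        return [first_cat], rate
    # Habit changed: both orientations count as habitual.
    return [first_cat, second_cat], rate
```

In the changed-habit branch, the actual orientation rate of claim 3 then sums the in-progress proportions of both habitual categories.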
5. The information pushing method based on large model training according to claim 4, wherein the step of adjusting the next push level of the target push information according to the user's attention level comprises:
traversing a push information level table based on the attention level and matching the next push level of the target push information, wherein push levels and attention levels are paired in the push information level table, and the push levels comprise deep push, medium push, and simple push;
pushing the target push information according to the matched next push level.
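Claim 5's level table reduces to a lookup from attention level to push level. The pairings below are illustrative assumptions; the patent specifies only that the table pairs the two kinds of levels.

```python
# Hypothetical sketch of claim 5: matching the next push level from a
# paired attention-level / push-level table. The specific pairings are
# invented for illustration.
PUSH_LEVEL_TABLE = {
    "high": "deep",      # attentive user -> deep push next time
    "medium": "medium",  # moderate attention -> medium push
    "low": "simple",     # inattentive user -> simple push
}

def next_push_level(attention_level: str) -> str:
    """Traverse the push information level table for the matching level."""
    return PUSH_LEVEL_TABLE.get(attention_level, "simple")
```

The target push information would then be delivered at the returned level on the next push.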
6. An information pushing system based on large model training, characterized in that the system comprises:
a picture acquisition module, configured to acquire state photos of a user while the user uses a target APP based on an information acquisition permission granted by the user, obtaining a user state picture set, wherein each state photo carries its shooting time;
a picture extraction module, configured to extract a target picture set from the user state picture set according to target push information, wherein the target picture set comprises an initial state picture set, an in-progress state picture set, and an end state picture set;
a picture analysis module, configured to analyze and identify the target picture set and confirm the user's attention level;
a push adjustment module, configured to adjust the next push level of the target push information according to the user's attention level.
7. The information pushing system based on large model training according to claim 6, wherein the picture extraction module comprises:
a target period generation unit, configured to acquire the push time of the target push information and generate a target picture acquisition period centered on that push time, wherein the target picture acquisition period comprises an initial period, an in-progress period, and an end period;
a shooting time traversing unit, configured to traverse the initial period, the in-progress period, and the end period in sequence based on the shooting time carried by each state photo;
a first picture matching unit, configured to insert a state photo into the initial state picture set when its shooting time falls within the initial period;
a second picture matching unit, configured to insert a state photo into the in-progress state picture set when its shooting time falls within the in-progress period;
a third picture matching unit, configured to insert a state photo into the end state picture set when its shooting time falls within the end period.
8. The information pushing system based on large model training according to claim 7, wherein the picture analysis module comprises:
a face orientation recognition unit, configured to identify the pictures in the target picture set respectively based on an image recognition technique and obtain the user's initial face orientation set, in-progress face orientation set, and end face orientation set, wherein the face orientation categories comprise face up, face down, face left, face right, face forward, and no face;
a face orientation proportion statistics unit, configured to count the actual proportion of each face orientation category in the initial face orientation set, the in-progress face orientation set, and the end face orientation set respectively;
a habitual orientation determining unit, configured to confirm the habitual face orientation and the habitual orientation rate according to the initial face orientation set and the end face orientation set;
an actual orientation rate statistics unit, configured to acquire the sum of the actual proportions corresponding to the habitual face orientation categories in the in-progress face orientation set, referred to as the actual orientation rate;
an orientation deviation value calculating unit, configured to obtain an orientation deviation value from the actual orientation rate and the habitual orientation rate, wherein the orientation deviation value is the absolute value of the difference between the actual orientation rate and the habitual orientation rate;
a user attention matching unit, configured to compare the orientation deviation value against an attention level table and match the user's attention level, wherein the attention level table defines a plurality of attention levels, each corresponding to a preset range of deviation values.
CN202311523268.9A 2023-11-16 Information pushing method and system based on large model training Active CN117499477B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311523268.9A CN117499477B (en) 2023-11-16 Information pushing method and system based on large model training


Publications (2)

Publication Number Publication Date
CN117499477A true CN117499477A (en) 2024-02-02
CN117499477B CN117499477B (en) 2024-06-07


Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140053105A1 (en) * 2011-10-19 2014-02-20 Panasonic Corporation Display control device, integrated circuit, and display control method
JP2016076259A (en) * 2015-12-21 2016-05-12 キヤノン株式会社 Information processing apparatus, information processing method, and program
CN106407361A (en) * 2016-09-07 2017-02-15 北京百度网讯科技有限公司 Method and device for pushing information based on artificial intelligence
WO2017101323A1 (en) * 2015-12-15 2017-06-22 乐视控股(北京)有限公司 Method and device for image capturing and information pushing and mobile phone
CN106971006A (en) * 2017-04-27 2017-07-21 暴风集团股份有限公司 The personalized push method and system of a kind of competitive sports information
WO2018090447A1 (en) * 2016-11-16 2018-05-24 深圳Tcl数字技术有限公司 Advertisement quality assessment method and device
CN108234591A (en) * 2017-09-21 2018-06-29 深圳市商汤科技有限公司 The content-data of identity-based verification device recommends method, apparatus and storage medium
CN108600325A (en) * 2018-03-27 2018-09-28 努比亚技术有限公司 A kind of determination method, server and the computer readable storage medium of push content
CN109559193A (en) * 2018-10-26 2019-04-02 深圳壹账通智能科技有限公司 Product method for pushing, device, computer equipment and the medium of intelligent recognition
CN109670456A (en) * 2018-12-21 2019-04-23 北京七鑫易维信息技术有限公司 A kind of content delivery method, device, terminal and storage medium
WO2019218851A1 (en) * 2018-05-15 2019-11-21 北京七鑫易维信息技术有限公司 Advertisement pushing method, apparatus and device, and storage medium
CN110633664A (en) * 2019-09-05 2019-12-31 北京大蛋科技有限公司 Method and device for tracking attention of user based on face recognition technology
CN110677448A (en) * 2018-07-03 2020-01-10 百度在线网络技术(北京)有限公司 Associated information pushing method, device and system
WO2020238023A1 (en) * 2019-05-24 2020-12-03 平安科技(深圳)有限公司 Information recommendation method and apparatus, and terminal and storage medium
CN113472834A (en) * 2020-04-27 2021-10-01 海信集团有限公司 Object pushing method and device
CN114663700A (en) * 2022-03-10 2022-06-24 支付宝(杭州)信息技术有限公司 Virtual resource pushing method, device and equipment
CN115082041A (en) * 2022-07-20 2022-09-20 深圳市必提教育科技有限公司 User information management method, device, equipment and storage medium
CN116233556A (en) * 2023-03-22 2023-06-06 网易有道信息技术(北京)有限公司 Video pushing method and device, storage medium and electronic equipment


Similar Documents

Publication Publication Date Title
CN106326391B (en) Multimedia resource recommendation method and device
CN110225366B (en) Video data processing and advertisement space determining method, device, medium and electronic equipment
CN109784304B (en) Method and apparatus for labeling dental images
CN111754267B (en) Data processing method and system based on block chain
CN112215171B (en) Target detection method, device, equipment and computer readable storage medium
CN110633423B (en) Target account identification method, device, equipment and storage medium
CN109214501B (en) Method and apparatus for identifying information
CN110347866B (en) Information processing method, information processing device, storage medium and electronic equipment
CN113326821B (en) Face driving method and device for video frame image
CN113301376B (en) Live broadcast interaction method and system based on virtual reality technology
CN107729491B (en) Method, device and equipment for improving accuracy rate of question answer search
CN114390368A (en) Live video data processing method and device, equipment and readable medium
CN117499477B (en) Information pushing method and system based on large model training
CN110348367B (en) Video classification method, video processing device, mobile terminal and medium
CN111626922A (en) Picture generation method and device, electronic equipment and computer readable storage medium
CN117499477A (en) Information pushing method and system based on large model training
CN111666884A (en) Living body detection method, living body detection device, computer-readable medium, and electronic apparatus
CN109034085B (en) Method and apparatus for generating information
CN111479168A (en) Method, device, server and medium for marking multimedia content hot spot
CN115905862A (en) Missing data processing method and system based on generation countermeasure network
CN109040774B (en) Program information extraction method, terminal equipment, server and storage medium
CN113343069A (en) User information processing method, device, medium and electronic equipment
CN111259689B (en) Method and device for transmitting information
CN112507884A (en) Live content detection method and device, readable medium and electronic equipment
CN112487175A (en) Exhibitor flow control method, exhibitor flow control device, server and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant