CN116781965B - Virtual article synthesis method, apparatus, electronic device, and computer-readable medium

Info

Publication number: CN116781965B
Application number: CN202311076432.6A
Authority: CN (China)
Prior art keywords: virtual article, page, information, synthesis, virtual
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN116781965A (en)
Inventors: 刘子正, 王凯, 钱达, 王瑜
Current Assignee: Shenzhen Kilakila Technology Co ltd
Original Assignee: Shenzhen Kilakila Technology Co ltd
Application filed by Shenzhen Kilakila Technology Co ltd
Priority to CN202311076432.6A
Publication of CN116781965A
Application granted
Publication of CN116781965B

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure disclose virtual article synthesis methods, apparatuses, electronic devices, and computer-readable media. One embodiment of the method comprises the following steps: acquiring a target play control set; parsing a native page request to generate a native virtual article head page; acquiring a virtual article information set; displaying a virtual article synthesis page; determining at least one selected virtual article synthesis control; acquiring a target value information possession number and a target value fragment possession number; synthesizing at least one virtual article to obtain at least one piece of virtual article information; performing feature recognition on the virtual article synthesis page to obtain synthesis position information, and performing background segmentation on the virtual article synthesis page; and combining the virtual article synthesis foreground page, the virtual article synthesis background page, and the at least one piece of virtual article information to obtain at least one synthesized special effect. According to this embodiment, calling the native page shortens the page response time and improves the user experience.

Description

Virtual article synthesis method, apparatus, electronic device, and computer-readable medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a virtual article synthesis method, apparatus, electronic device, and computer readable medium.
Background
With the development of network technology, real-time video playing has become an increasingly popular form of entertainment. Synthesizing and gifting virtual articles during real-time video interaction can express approval of the real-time video player, and synthesizing virtual articles involves different page interactions and the display of the virtual articles. Virtual article synthesis is generally performed as follows: upon detecting a click on a virtual article synthesis interaction component, the entire virtual article synthesis page is acquired from a server and rendered to complete the inter-page interaction, all interaction operations in the page are collected in full, and the synthesized virtual article special effect is directly overlaid on the real-time video page for display.
However, the inventor finds that when the above manner is adopted to perform page interaction, the following technical problems often exist:
Firstly, because the page needs to be acquired from the server, a large amount of communication resources is occupied, causing a waste of communication resources; when the network signal is poor, the interaction takes longer, the interaction flow is cumbersome, and a white screen is likely to appear; moreover, the page is directly overlaid on the real-time video playing page for display, which blocks the real-time video content, so the user experience is low.
Secondly, as the whole virtual article synthesized page is obtained and rendered, the unchanged part in the page interaction process is updated and stored for a plurality of times, so that the waste of storage resources and communication resources is caused, the front-end page is in a white screen state for a long time, the page interaction effect is affected, and the user experience is lower.
Thirdly, because the interaction operation data in the page are collected in full, a large amount of useless data is collected and stored, which wastes storage resources; analyzing these data requires a large amount of computing resources, causing a waste of computing resources, and the analysis accuracy is low, so user preferences cannot be accurately understood, page update iteration is slow, and the user experience is low.
The information disclosed above in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not constitute prior art already known to a person of ordinary skill in the art in this country.
Disclosure of Invention
This section of the disclosure is provided to introduce concepts in a simplified form that are further described in the detailed description below. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose virtual article synthesis methods, apparatus, electronic devices, and computer readable media to address one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a virtual article synthesis method, including: in response to detecting a selection operation acting on a real-time video playing head page, obtaining a target play control set corresponding to the selected real-time video playing page, wherein the real-time video playing head page has a real-time video playing page set; in response to detecting a selection operation acting on any one target play control in the target play control set, generating a native page request, and parsing the native page request to generate a native virtual article head page, wherein the native virtual article head page comprises a virtual article type information set, the virtual article type information set is displayed in carousel form, and the native virtual article head page is displayed as a pop-up layer; in response to detecting a selection operation acting on any virtual article type information on the native virtual article head page, acquiring a virtual article information set corresponding to the selected virtual article type information; displaying a virtual article synthesis page corresponding to the selected virtual article information set, wherein the virtual article synthesis page has a corresponding virtual article synthesis control set; in response to detecting a selection operation acting on the virtual article synthesis control set, determining the selected at least one virtual article synthesis control; acquiring a value information possession number corresponding to a target user and a value fragment possession number corresponding to the virtual article type information as a target value information possession number and a target value fragment possession number; in response to detecting that the target value information possession number is greater than or equal to the value information possession number corresponding to the at least one virtual article synthesis control and that the target value fragment possession number is greater than or equal to the value fragment possession number corresponding to the at least one virtual article synthesis control, synthesizing the at least one virtual article corresponding to the at least one virtual article synthesis control to obtain at least one piece of virtual article information; performing feature recognition on the virtual article synthesis page to obtain synthesis position information corresponding to the at least one piece of virtual article information, and performing background segmentation on the virtual article synthesis page to obtain a virtual article synthesis foreground page and a virtual article synthesis background page; and combining the virtual article synthesis foreground page, the virtual article synthesis background page, and each piece of virtual article information in the at least one piece of virtual article information to obtain at least one synthesized special effect, and displaying each synthesized special effect in the at least one synthesized special effect according to the synthesis position information.
In a second aspect, some embodiments of the present disclosure provide a virtual article synthesizing apparatus, including: a first acquisition unit configured to, in response to detecting a selection operation acting on a real-time video playing head page, acquire a target play control set corresponding to the selected real-time video playing page, wherein the real-time video playing head page has a real-time video playing page set; a generating unit configured to, in response to detecting a selection operation acting on any one target play control in the target play control set, generate a native page request, and parse the native page request to generate a native virtual article head page, wherein the native virtual article head page comprises a virtual article type information set, the virtual article type information set is displayed in carousel form, and the native virtual article head page is displayed as a pop-up layer; a second acquisition unit configured to, in response to detecting a selection operation acting on any virtual article type information on the native virtual article head page, acquire a virtual article information set corresponding to the selected virtual article type information; a display unit configured to display a virtual article synthesis page corresponding to the selected virtual article information set, wherein the virtual article synthesis page has a corresponding virtual article synthesis control set; a determining unit configured to, in response to detecting a selection operation acting on the virtual article synthesis control set, determine the selected at least one virtual article synthesis control; a third acquisition unit configured to acquire a value information possession number corresponding to a target user and a value fragment possession number corresponding to the virtual article type information as a target value information possession number and a target value fragment possession number; a synthesizing unit configured to, in response to detecting that the target value information possession number is greater than or equal to the value information possession number corresponding to the at least one virtual article synthesis control and that the target value fragment possession number is greater than or equal to the value fragment possession number corresponding to the at least one virtual article synthesis control, synthesize the at least one virtual article corresponding to the at least one virtual article synthesis control to obtain at least one piece of virtual article information; a feature recognition unit configured to perform feature recognition on the virtual article synthesis page to obtain synthesis position information corresponding to the at least one piece of virtual article information, and perform background segmentation on the virtual article synthesis page to obtain a virtual article synthesis foreground page and a virtual article synthesis background page; and a combination processing unit configured to perform combination processing on the virtual article synthesis foreground page, the virtual article synthesis background page, and each piece of virtual article information in the at least one piece of virtual article information to obtain at least one synthesized special effect, and display each synthesized special effect in the at least one synthesized special effect according to the synthesis position information.
In a third aspect, some embodiments of the present disclosure provide an electronic device comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method as described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements a method as described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following advantages: with the virtual article synthesis method, calling the native page and displaying the virtual gift synthesis special effect shortens the page response time and loading time and simplifies the interaction flow, so the user experience can be improved. Specifically, the reason the related user experience is low is as follows: because the page must be acquired from the server, a large amount of communication resources is occupied, causing a waste of communication resources; when the network signal is poor, the interaction takes longer, the interaction flow is cumbersome, and a white screen is likely to appear; moreover, the page is directly overlaid on the real-time video playing page for display, which blocks the real-time video content, so the user experience is low. Based on this, the virtual article synthesis method of some embodiments of the present disclosure may first, in response to detecting a selection operation acting on a real-time video playing head page, obtain a target play control set corresponding to the selected real-time video playing page, where the real-time video playing head page has the real-time video playing page set. Obtaining the target play control set makes it convenient to call the native virtual article head page through a target play control. Secondly, in response to detecting a selection operation acting on any one target play control in the target play control set, a native page request is generated and parsed to generate a native virtual article head page, where the native virtual article head page comprises a virtual article type information set, the virtual article type information set is displayed in carousel form, and the native virtual article head page is displayed as a pop-up layer. Here, calling the native page from the web page shortens page response and loading time, simplifies the interaction flow, reduces the waste of communication resources, and improves the user experience. Then, in response to detecting a selection operation acting on any virtual article type information on the native virtual article head page, a virtual article information set corresponding to the selected virtual article type information is acquired. Here, the obtained virtual article information set may be used by the user to subsequently synthesize a virtual article. A virtual article synthesis page corresponding to the selected virtual article information set is then displayed, where the virtual article synthesis page has a corresponding virtual article synthesis control set. Here, the virtual article synthesis page may be used by the user to synthesize the corresponding virtual article information. Subsequently, in response to detecting a selection operation acting on the virtual article synthesis control set, the selected at least one virtual article synthesis control is determined. Here, the at least one virtual article synthesis control may be used to determine the at least one piece of virtual article information that the user needs to synthesize, which can increase the user's interest in interaction and improve user stickiness.
Then, the value information possession number corresponding to the target user and the value fragment possession number corresponding to the virtual article type information are obtained as the target value information possession number and the target value fragment possession number. Here, acquiring the target user's value information possession number and value fragment possession number facilitates the subsequent determination of whether the virtual article synthesis operation can be performed. Then, in response to detecting that the target value information possession number is greater than or equal to the value information possession number corresponding to the at least one virtual article synthesis control and that the target value fragment possession number is greater than or equal to the value fragment possession number corresponding to the at least one virtual article synthesis control, the at least one virtual article corresponding to the at least one virtual article synthesis control is synthesized to obtain at least one piece of virtual article information. The obtained at least one piece of virtual article information makes it convenient for the user to support a favorite real-time video player, so user activity can be improved. Then, feature recognition is performed on the virtual article synthesis page to obtain synthesis position information corresponding to the at least one piece of virtual article information, and background segmentation is performed on the virtual article synthesis page to obtain a virtual article synthesis foreground page and a virtual article synthesis background page. Here, determining the synthesis position information prevents the synthesized special effects from covering the real-time video playing and affecting the user experience. Finally, the virtual article synthesis foreground page, the virtual article synthesis background page, and each piece of virtual article information in the at least one piece of virtual article information are combined to obtain at least one synthesized special effect, and each synthesized special effect in the at least one synthesized special effect is displayed according to the synthesis position information. Here, the generated synthesized special effects can show the participation process of the target user, so user stickiness and experience can be increased. Therefore, by calling the native virtual article synthesis page and by synthesizing the virtual articles and displaying their special effects, the virtual article synthesis method can shorten page response and loading time, simplify the interaction flow, display the synthesized special effects at the target positions without affecting the playing effect, and improve the special effect display.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of some embodiments of a virtual article synthesis method according to the present disclosure;
FIG. 2 is a schematic structural view of some embodiments of a virtual article synthesizing apparatus according to the present disclosure;
FIG. 3 is a schematic structural diagram of an electronic device suitable for implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a," "an," and "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of virtual article synthesis methods according to the present disclosure. The virtual article synthesizing method comprises the following steps:
Step 101, in response to detecting a selection operation acting on a real-time video playing head page, a target playing control set corresponding to the selected real-time video playing page is obtained.
In some embodiments, the executing body of the virtual article synthesis method may, in response to detecting a selection operation acting on the real-time video playing head page, obtain the target play control set corresponding to the selected real-time video playing page. The real-time video playing head page has a real-time video playing page set and may be a page for displaying the real-time video playing page set. The selection operation may be, but is not limited to, at least one of: click, slide, hover. The real-time video playing page may be a real-time video playing page determined by a real-time video playing identifier. For example, the real-time video playing page may be a live page. The real-time video playing identifier may be a unique identifier characterizing a real-time video playing page. For example, the real-time video playing identifier may be the name of the real-time video playing. A target play control in the target play control set may be a control, developed based on a native language, for performing page jumps in the real-time video playing page. The native language may be a development language that runs directly on the native operating system. The target play control may be, but is not limited to, at least one of: chart control, text control, navigation bar control.
Step 102, in response to detecting a selection operation acting on any one of the target play controls in the target play control set, generating a native page request, and analyzing the native page request to generate a native virtual article head page.
In some embodiments, the executing body may generate a native page request in response to detecting a selection operation acting on any one target play control in the target play control set, and parse the native page request to generate a native virtual article head page. The native page request may be a request for calling a native virtual article head page stored in local memory. The native virtual article head page may be a page written in a development language defined by a development platform, and is displayed as a pop-up layer. The native virtual article head page covers the real-time video playing page at a preset coverage rate, and its page level is higher than that of the real-time video playing page. The preset coverage rate may characterize the proportion of the real-time video playing page covered by the native virtual article head page. For example, the preset coverage rate may be 0.75. The native virtual article head page can be closed by clicking or sliding. The click may be a detected click on the portion of the real-time video playing page that is not covered by the native virtual article head page. The slide may be a detected downward slide whose distance exceeds half the height of the real-time video playing page. The native virtual article head page comprises a virtual article type information set. The virtual article type information in the virtual article type information set may be information of a virtual article type, where a virtual article type may be a type into which virtual articles are divided. The native virtual article head page may be a page for displaying the virtual article type information set, and the virtual article type information set is displayed in carousel form. The virtual article type information in the virtual article type information set may include, but is not limited to, at least one of: type audio information, a text introduction set of the virtual article type, a virtual article type name, and a virtual article type icon. The native virtual article head page may also be reached in several ways: first, by jumping through a native icon link to the native virtual article head page on the real-time video playing page; secondly, by jumping from the real-time video playing page to the virtual article display head page, and then from the virtual article display head page to the native virtual article head page, where the virtual article display head page may be a display page of the virtual article information owned by the target user; thirdly, by jumping, through the real-time video player, to the virtual gift display page of the real-time video player, and then to the native virtual article head page by clicking the selection control of a virtual article not yet received by the real-time video player.
As an example, the execution body may first generate, by selecting any one of the target playing controls, the native page request of the real-time video playing page to call the native virtual article head page. Then, the native page request is parsed by a call parser, and a target call function corresponding to the native page request is determined. And finally, calling the home page of the native virtual article through the target calling function, so as to obtain the home page of the native virtual article.
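To make the pop-up layer behavior described above concrete, the following is a minimal illustrative sketch (not the patented implementation); the class name, field names, and the example page height are assumptions introduced here, while the 0.75 coverage rate, the tap-on-uncovered-area dismissal, and the slide-past-half-height dismissal follow the description above.

```python
# Illustrative sketch: dismissal logic for a native virtual-article head page
# shown as a pop-up layer over the real-time video playing page.
from dataclasses import dataclass

PRESET_COVERAGE = 0.75  # pop-up covers 75% of the real-time video playing page


@dataclass
class PopupLayer:
    page_height: float            # height of the real-time video playing page
    coverage: float = PRESET_COVERAGE

    @property
    def top_edge(self) -> float:
        # The pop-up occupies the bottom `coverage` portion of the playing page,
        # so everything above this y-coordinate remains uncovered.
        return self.page_height * (1.0 - self.coverage)

    def should_close_on_tap(self, tap_y: float) -> bool:
        # Close when the tap lands on the portion not covered by the pop-up.
        return tap_y < self.top_edge

    def should_close_on_slide(self, slide_down_distance: float) -> bool:
        # Close when a downward slide exceeds half the playing page height.
        return slide_down_distance > self.page_height / 2.0


popup = PopupLayer(page_height=2000.0)
print(popup.should_close_on_tap(tap_y=300.0))                    # True: tap above the pop-up
print(popup.should_close_on_slide(slide_down_distance=1100.0))   # True: slide > half height
```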
In some optional implementations of some embodiments, the executing body may generate a native page request in response to detecting a selection operation acting on any one of the target playing controls in the target playing control set, and parse the native page request to generate a native virtual article home page, and may include the steps of:
the first step, the selected real-time video playing page is subjected to block processing, and a video block page set is obtained. The above-mentioned blocking processing may be blocking the selected real-time video playing page by using the page tag of the real-time video playing page. For example, the page tag may include at least one of: text labels, image labels, hyperlink labels.
And secondly, packaging the video block page set and the native page request into an instruction request.
And thirdly, sending the instruction request to a native request processor through preset message request middleware. The preset message request middleware may be predetermined middleware for encapsulating a communication channel between a web page and a native page and for distributing instructions. The web page may be a page developed in HTML (HyperText Markup Language). The native request processor may be a processor for processing native requests.
And fourthly, controlling the original request processor to analyze the instruction request to obtain an analyzed instruction request.
And fifthly, calling a native block page set according to the parsed instruction request and the video block page set. The native block page set may be a block page set obtained by blocking the native virtual article head page by page elements.
As an example, the execution body may first determine a set of calling functions corresponding to the parsed instruction request. Wherein, the calling function in the calling function set comprises a calling identifier. And then, calling the native blocking page set through the calling identification set included in the calling function set.
And sixthly, carrying out matching processing on the video block page set and the original block page set to obtain a matched block page set. Wherein, the matching block pages in the matching block page set include: video chunking pages and native chunking pages with the same page tag. The matching may be matching the video block page and the native block page by a page tag.
Seventh, each matching block page in the matching block page set is identified to find matching block pages whose video and native contents differ, and these are taken as target matching block pages to obtain a target matching block page set. The identification processing may include: character recognition and image recognition.
And eighth step, replacing the video block page set according to the target matching block page set to obtain a primary virtual article head page.
As an example, the execution body may first replace each video block page in the target matching block page set with its corresponding native block page to obtain a replacement block page set. Secondly, the video block pages whose content is identical to that of their matched native block pages are spliced with the replacement block page set to obtain the native virtual article head page.
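The following is a hedged sketch of the encapsulation, matching, and replacement steps above under assumed data structures (the BlockPage class and function names are hypothetical; the middleware dispatch and native call are omitted): block pages are matched by page tag, matches whose contents differ are replaced with their native counterparts, and unchanged video block pages are reused.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class BlockPage:
    tag: str       # page tag used for matching (e.g. "text", "image", "hyperlink")
    content: str   # rendered content of the block


def build_instruction_request(video_blocks: List[BlockPage], native_page_request: dict) -> dict:
    # Second step: encapsulate the video block page set and the native page request.
    return {"video_blocks": video_blocks, "native_page_request": native_page_request}


def assemble_native_head_page(instruction: dict, native_blocks: List[BlockPage]) -> List[BlockPage]:
    # Sixth step: match video blocks and native blocks that share the same page tag.
    native_by_tag: Dict[str, BlockPage] = {b.tag: b for b in native_blocks}
    assembled: List[BlockPage] = []
    for video_block in instruction["video_blocks"]:
        native_block = native_by_tag.get(video_block.tag)
        if native_block is None:
            # No native counterpart: keep the video block unchanged.
            assembled.append(video_block)
        elif native_block.content != video_block.content:
            # Seventh/eighth steps: contents differ, so this is a target matching
            # block page and the native block replaces the video block.
            assembled.append(native_block)
        else:
            # Contents are identical: reuse the already-loaded video block.
            assembled.append(video_block)
    return assembled


video = [BlockPage("text", "old intro"), BlockPage("image", "banner.png")]
native = [BlockPage("text", "gift type carousel"), BlockPage("image", "banner.png")]
request = build_instruction_request(video, {"target": "native_virtual_article_head_page"})
print([b.content for b in assemble_native_head_page(request, native)])
# ['gift type carousel', 'banner.png']
```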
In some optional implementations of some embodiments, the executing body may perform a blocking process on the selected real-time video playing page to obtain a video blocking page set, and may include the following steps:
The first step, page tag element identification is carried out on the selected real-time video playing page to obtain a page tag set. A page tag in the page tag set may be a page code tag corresponding to the real-time video playing page. The page code tag may be an element by which the page is blocked. Page tag element identification may be performed by identifying keywords in the page code corresponding to the real-time video playing page.
And secondly, determining path information of each page tag in the page tag set and the starting tag of the real-time video playing page to obtain a tag path information set. The start tag may be a header tag in the page code. The tag path information in the tag path information set may represent information of a nesting relationship of tags in the page code. The tag path information may be information of a path from an element tag to a header tag.
And thirdly, generating a page tag structure tree from the tag path information set. The page tag structure tree may be a DOM (Document Object Model) tree composed of page tags and representing the page layout of the real-time video playing page.
And fourthly, determining the noise weight of the visual block corresponding to each page tag node in the page tag structure tree to obtain a noise weight set (an illustrative sketch of this computation follows the ninth step below). The noise weight may characterize the proportion of noise data contained in the visual block. The noise data may be advertisement information and navigation bar menus. The visual block may be the page area corresponding to a page tag node. Each noise weight in the noise weight set may be obtained as follows: first, the number of links included in the real-time video playing page and the number of links included in the visual block are determined as the total link number and the visual link number. Secondly, the amount of text included in the real-time video playing page and the amount of text included in the visual block are determined as the total text amount and the visual text amount. The path distance corresponding to the page tag structure tree and the path distance corresponding to the visual block are determined as the total path distance and the visual path distance. Then, the ratio of the visual path distance to the total path distance is determined, and the logarithm of this ratio is taken with the number of page tag nodes as the base to obtain a first weight value. Then, the difference between a preset threshold value and the ratio of the total link number to the visual link number is determined as a second weight value, where the preset threshold may be 1. Then, the ratio of the total text amount to the visual text amount is determined as a third weight value. Finally, the product of the first weight value, the second weight value, and the third weight value is determined as the noise weight.
And fifthly, noise weights greater than or equal to a preset noise weight threshold are screened from the noise weight set to obtain a target noise weight set. The preset noise weight threshold may be a threshold for determining whether a visual block is a noise visual block. For example, the preset noise weight threshold may be 0.7.
And sixthly, removing the page tag node set corresponding to the target noise weight set from the page tag structure tree to obtain a denoised page tag structure tree.
And seventhly, determining the similarity of each denoised page tag node in the denoised page tag structure tree, and obtaining a similarity value set. The similarity may be a similarity determined by paths of the respective denoised page tag nodes.
And eighth step, clustering the denoised page label nodes according to the similarity value set to obtain clustering clusters. The clusters in the cluster set may be clusters formed by a denoising page label node set with a similarity difference smaller than a preset similarity difference threshold. The predetermined similarity difference threshold may be 0.6.
As an example, the execution subject may first determine the pairwise differences between the similarity values in the similarity value set as a target similarity difference set. Then, the denoised page tag nodes whose target similarity differences are smaller than or equal to the preset similarity difference threshold are grouped into the same cluster, thereby obtaining the clusters.
And ninth, performing blocking processing on the selected real-time video playing page through the clusters to obtain a video block page set. In practice, as a first step, the executing body may execute the following determining steps for each cluster: first, determining whether denoised page tag nodes that are sibling nodes exist in the cluster. Then, in response to determining that sibling denoised page tag nodes exist in the cluster, fusing the at least one sibling denoised page tag node to obtain a fused node set. Finally, determining the fused node set and the unfused node set as a node set to be segmented, where an unfused node in the unfused node set may be a denoised page tag node without a sibling node. As a second step, the page set corresponding to the obtained plurality of node sets to be segmented is determined as the video block page set.
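As referenced in the fourth step above, the following is an illustrative sketch of the noise weight computation; the function and parameter names are assumptions, the formula follows the fourth step as written, and the 0.7 screening threshold is the example value from the fifth step.

```python
import math


def noise_weight(num_tag_nodes: int,
                 total_path_distance: float, visual_path_distance: float,
                 total_links: int, visual_links: int,
                 total_text: int, visual_text: int,
                 preset_threshold: float = 1.0) -> float:
    # First weight: logarithm, with the number of page tag nodes as the base,
    # of the ratio of the visual path distance to the total path distance.
    w1 = math.log(visual_path_distance / total_path_distance, num_tag_nodes)
    # Second weight: preset threshold minus the ratio of total links to visual links.
    w2 = preset_threshold - (total_links / visual_links)
    # Third weight: ratio of the total text amount to the visual text amount.
    w3 = total_text / visual_text
    return w1 * w2 * w3


PRESET_NOISE_WEIGHT_THRESHOLD = 0.7  # example value from the fifth step

w = noise_weight(num_tag_nodes=50,
                 total_path_distance=40.0, visual_path_distance=4.0,
                 total_links=200, visual_links=40,
                 total_text=5000, visual_text=300)
print(w, w >= PRESET_NOISE_WEIGHT_THRESHOLD)  # visual block flagged as noise if True
```

In this sketch a weight at or above the threshold marks the visual block as a noise visual block, whose page tag node is then removed from the page tag structure tree as in the sixth step.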
The first to ninth steps and related content thereof are taken as an invention point of the embodiments of the present disclosure, which solves the second technical problem mentioned in the background art, namely that the constant part in the page interaction process is updated and stored for multiple times due to the fact that the whole virtual article synthesized page is obtained and rendered, so that the waste of storage resources and communication resources is caused, the front-end page is in a white screen state for a long time, the page interaction effect is affected, and the user experience is low. Factors that lead to a lower user experience tend to be as follows: because the whole virtual article synthesized page is obtained and rendered, the unchanged part in the page interaction process is updated and stored for multiple times, so that the waste of storage resources and communication resources is caused, the front-end page is in a white screen state for a long time, the page interaction effect is affected, and the user experience is lower. If the above factors are solved, the effect of improving the user experience can be achieved. To achieve this effect, the present disclosure first performs page tag recognition on a real-time video playback page to obtain a page tag set. Through the page tag set, the page tag structure tree is constructed, the accuracy of constructing the page tag structure tree can be improved, the structure of the real-time video playing page can be comprehensively known, and the structural difference between the pages is shielded. Secondly, determining the page tag nodes in the page tag set as the page tag nodes of the noise nodes through the noise weight, removing the noise nodes, determining the weight of the visual block from the page visual and content characteristics, and improving the accuracy of subsequent denoising due to more comprehensive consideration. Then, the similarity of the denoised page tag structure tree is determined, the denoised page tag structure tree is clustered through the similarity, and the blocking accuracy can be improved through the clustering of the similarity. And finally, the clustering clusters are used for carrying out blocking processing on the selected real-time video playing pages to obtain a video blocking page set, and the blocking processing can avoid repeated loading of the blocking pages with the same content, reduce the waste of communication resources and improve the loading speed. Therefore, repeated updating of redundant data can be reduced by constructing the page tag structure tree, so that waste of communication and storage resources is reduced, white screen appearing in page interaction is reduced, and user experience is improved.
In some optional implementations of some embodiments, the executing body may generate a native page request in response to detecting a selection operation acting on any one of the target playing controls in the target playing control set, and parse the native page request to generate a native virtual article home page, and may include the steps of:
in the first step, a target native page request is generated in response to detection of a selection operation of a first target playing control in the target playing control set, and the target native page request is parsed to generate a virtual article fragment page. The first target playing control may be a native link icon control of the virtual article fragment page. The target native page request may be a request to generate the virtual item fragment page. The virtual item fragment page may be a page for displaying virtual item fragments owned by the target user. The virtual article fragments can represent different types of fragments corresponding to different virtual article type information owned by a target user. The virtual article fragment may include: the different kinds of fragment icons and the corresponding numbers of the different kinds of fragment icons.
And secondly, responding to the detection of the selection operation of any virtual article fragment in the virtual article fragment page, generating a native page request, and analyzing the native page request to generate a native virtual article head page.
In some optional implementations of some embodiments, the executing body may generate a native page request in response to detecting a selection operation acting on any one of the target playing controls in the target playing control set, and parse the native page request to generate a native virtual article home page, and may include the steps of:
and in the first step, in response to detecting the selection operation of the second target playing control in the target playing control set, displaying the real-time video player information of the selected real-time video playing page. The real-time video player information comprises real-time video player identity information and virtual article graphic control. The real-time video player information can represent information of the real-time video player. The real-time video player may be a person who starts real-time video playing. For example, the real-time video player identity information may be a network nickname of the real-time video player. The virtual item graphical control may be an icon link control with a jump link.
And secondly, responding to the detection of the selection operation acting on the virtual article graphic control, and displaying a target virtual article page corresponding to the real-time video player identity information. Wherein, the target virtual article page has a virtual article selection control set. The target virtual article page may be a page for displaying a virtual article information set corresponding to the real-time video player. The target virtual article page comprises a virtual article information set which is received by a real-time video player and a virtual article information set which is not received by the real-time video player. The virtual item selection control in the virtual item selection control set may be a control for selecting, by the target user, any virtual item not received in the virtual item information set not received by the real-time video player.
And thirdly, responding to detection of selection operation acting on any virtual article selection control in the virtual article selection control set, generating a native page request, analyzing the native page request and generating a native virtual article head page.
Step 103, responding to the selection operation of any virtual article type information acting on the home page of the original virtual article, and acquiring a virtual article information set corresponding to the selected virtual article type information.
In some embodiments, the executing body may obtain the virtual article information set corresponding to the selected virtual article type information in response to detecting a selection operation acting on any virtual article type information on the native virtual article head page. The virtual article information in the virtual article information set may be information of a virtual article (for example, a gift for viewing) under certain virtual article type information. The virtual article information may include, but is not limited to, at least one of: a virtual article identifier, a virtual article image. The virtual article identifier may be a unique identifier of the virtual article; for example, it may be the number of the virtual article. Virtual articles include defined virtual articles and generic virtual articles; a defined virtual article may be, for example, a holiday virtual article defined according to a holiday.
Step 104, displaying a virtual article synthesis page corresponding to the selected virtual article information set.
In some embodiments, the executing body may display a virtual article synthesis page corresponding to the selected virtual article information set. And the virtual article synthesis page is provided with a corresponding virtual article synthesis control set. The virtual article synthesizing page may be a page for displaying a virtual article information set corresponding to the selected virtual article type information. The virtual article synthesis page comprises: virtual article icons, value information possession quantity, value fragment possession quantity and virtual article composition controls corresponding to each virtual article in the virtual article information set. The virtual article synthesizing control can be used for selecting a control of a virtual article needing synthesizing. The value information possession quantity may characterize the value of the virtual item. For example, the value information possession quantity may be a quantity of red beans characterizing the value of the virtual item. The value chip possession number may characterize the number of category chips corresponding to the category of virtual item required to compose the virtual item. The kinds of fragments corresponding to different virtual article kinds are distinguished by different colors.
Step 105, in response to detecting a selection operation acting on the virtual article synthesis control set, determining the selected at least one virtual article synthesis control.
In some embodiments, the executing entity may determine the selected at least one virtual article composition control in response to detecting a selection operation on the set of virtual article composition controls.
And 106, acquiring the value information possession quantity corresponding to the target user and the value fragment possession quantity corresponding to the virtual article type information as the target value information possession quantity and the target value fragment possession quantity.
In some embodiments, the executing entity may acquire the value information possession number corresponding to the target user and the value fragment possession number corresponding to the virtual article type information as the target value information possession number and the target value fragment possession number. The target user may be a user who synthesizes virtual article information. The target value information possession number may be the number of value-characterizing red beans owned by the target user. The target value fragment possession number may be the number of type fragments, corresponding to each piece of virtual article type information in the virtual article type information set, owned by the target user.
And step 107, in response to detecting that the number of the target value information owners is greater than or equal to the number of the value information owners corresponding to the at least one virtual article compositing control and the number of the target value fragments owners is greater than or equal to the number of the value fragments owners corresponding to the at least one virtual article compositing control, compositing the at least one virtual article corresponding to the at least one virtual article compositing control to obtain at least one piece of virtual article information.
In some embodiments, the executing body may synthesize the at least one virtual item corresponding to the at least one virtual item synthesis control to obtain the at least one virtual item information in response to detecting that the target value information possession number is greater than or equal to the value information possession number corresponding to the at least one virtual item synthesis control and the target value fragment possession number is greater than or equal to the value fragment possession number corresponding to the at least one virtual item synthesis control. The composition may be to pop up a composition popup, where the composition popup has a determination control and a cancellation control. And when the selection operation on the determination control is detected, obtaining at least one piece of virtual article information.
In some optional implementations of some embodiments, the executing body may synthesize the at least one virtual item corresponding to the at least one virtual item synthesis control to obtain at least one virtual item information in response to detecting that the target value information possession number is greater than or equal to the value information possession number corresponding to the at least one virtual item synthesis control, and the target value fragment possession number is greater than or equal to the value fragment possession number corresponding to the at least one virtual item synthesis control, where the method may include:
the first step, for each virtual article composition control in the at least one virtual article composition control, performing the following determining steps:
and determining the product of the value fragment possession number corresponding to the virtual article synthesis control and a preset value fragment threshold value as the synthesized value fragment possession number of the virtual article information corresponding to the virtual article synthesis control. The preset value chip threshold may be a preset value of percentage of the number of the value chips. The composite value shard possession quantity may represent a composite value shard possession quantity commission for the composite virtual good.
And a second substep, determining the sum of the value fragment possession number corresponding to the virtual article synthesis control and the synthesized value fragment possession number as the target value fragment possession number of the virtual article synthesis control. The target value fragment possession number may be the total value fragment possession number required to synthesize a virtual article.
And a third substep, determining the value information possession quantity corresponding to the virtual article synthesis control as the target value information possession quantity.
And secondly, determining the product of the obtained at least one target value fragment possession quantity and the quantity of the virtual article synthesis controls included in the at least one virtual article synthesis control as the value fragment possession quantity corresponding to the at least one virtual article synthesis control.
And thirdly, determining the product of the obtained at least one target value information possession quantity and the quantity of the virtual article synthesis controls included in the at least one virtual article synthesis control as the value information possession quantity corresponding to the at least one virtual article synthesis control.
And fourthly, synthesizing at least one virtual article corresponding to the at least one virtual article synthesis control to obtain at least one virtual article information in response to detecting that the number of the target value information possession is greater than or equal to the number of value information possession corresponding to the at least one virtual article synthesis control and the number of the target value fragments possession is greater than or equal to the number of value fragments possession corresponding to the at least one virtual article synthesis control.
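As referenced in the first sub-step above, the following is an illustrative sketch of the fee arithmetic; the 10% fragment fee, the function name, and the example quantities are assumptions, and the scaling by the number of selected controls follows the second and third steps as written.

```python
PRESET_VALUE_FRAGMENT_THRESHOLD = 0.1  # assumed example: 10% fragment fee per synthesis


def can_synthesize(required_value: int, required_fragments: int,
                   num_selected_controls: int,
                   user_value: int, user_fragments: int) -> bool:
    # First sub-step: fragment fee for synthesizing one virtual article.
    fee = required_fragments * PRESET_VALUE_FRAGMENT_THRESHOLD
    # Second sub-step: total fragments needed for one synthesis control.
    target_fragments = required_fragments + fee
    # Third sub-step: value needed for one synthesis control.
    target_value = required_value
    # Second/third steps: scale by the number of selected synthesis controls.
    total_fragments_needed = target_fragments * num_selected_controls
    total_value_needed = target_value * num_selected_controls
    # Fourth step: synthesis is allowed only when the user's holdings suffice.
    return user_value >= total_value_needed and user_fragments >= total_fragments_needed


print(can_synthesize(required_value=100, required_fragments=9,
                     num_selected_controls=2,
                     user_value=250, user_fragments=20))  # True: 250 >= 200 and 20 >= 19.8
```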
And step 108, performing feature recognition on the virtual article synthesis page to obtain synthesis position information corresponding to the at least one piece of virtual article information, and performing background segmentation on the virtual article synthesis page to obtain a virtual article synthesis foreground page and a virtual article synthesis background page.
In some embodiments, the executing body may perform feature recognition on the virtual article synthesis page to obtain synthesis position information corresponding to the at least one piece of virtual article information, and perform background segmentation on the virtual article synthesis page to obtain a virtual article synthesis foreground page and a virtual article synthesis background page. The synthesis position information may be information used to determine where the synthesized special effect of the virtual article information is displayed. The virtual article synthesis foreground page may be the page, within the virtual article synthesis page, that contains the target article. The virtual article synthesis background page may be the page obtained by removing the virtual article synthesis foreground page from the virtual article synthesis page.
As an example, the execution subject may first perform feature recognition on the image corresponding to the virtual article synthesis page by using a deep learning model to obtain the synthesis position information corresponding to the at least one virtual article. The deep learning model may be a CNN (Convolutional Neural Network) model. Then, background segmentation is performed on the image corresponding to the virtual article synthesis page by using a watershed algorithm to obtain the virtual article synthesis foreground page and the virtual article synthesis background page.
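The example above names a CNN for position recognition and a watershed algorithm for background segmentation; the following is a minimal sketch of only the watershed segmentation step using OpenCV, where the file paths and the marker-seeding heuristic are assumptions rather than the patented procedure.

```python
import cv2
import numpy as np

img = cv2.imread("synthesis_page.png")  # placeholder path: screenshot of the synthesis page
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

kernel = np.ones((3, 3), np.uint8)
opening = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)
sure_bg = cv2.dilate(opening, kernel, iterations=3)                # certain background
dist = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)         # certain foreground
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg, sure_fg)

_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1            # background seed becomes label 1 instead of 0
markers[unknown == 255] = 0      # unknown region is left for the watershed to decide
markers = cv2.watershed(img, markers)

# Pixels labelled 1 belong to the background page; higher labels to foreground regions.
foreground_page = np.where(markers[..., None] > 1, img, np.zeros_like(img))
background_page = np.where(markers[..., None] == 1, img, np.zeros_like(img))
cv2.imwrite("foreground_page.png", foreground_page)
cv2.imwrite("background_page.png", background_page)
```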
Step 109, combining the virtual article synthesis foreground page, the virtual article synthesis background page, and each piece of virtual article information in the at least one piece of virtual article information to obtain at least one synthesized special effect, and displaying each synthesized special effect in the at least one synthesized special effect according to the synthesis position information.
In some embodiments, the execution body may perform combination processing on the virtual article synthesis foreground page, the virtual article synthesis background page, and each piece of virtual article information in the at least one piece of virtual article information to obtain at least one synthesized special effect, and display each synthesized special effect in the at least one synthesized special effect according to the synthesis position information. A synthesized special effect in the at least one synthesized special effect may be a dynamic image showing the synthesis process of the virtual article information. For example, the dynamic image may be a GIF animation. The synthesized special effect may be a page special effect obtained by combining the virtual article synthesis special effect page corresponding to the virtual article information, the virtual article synthesis background page, and the virtual article synthesis foreground page. The combination may be performed in order of page priority from high to low.
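The following is an illustrative sketch of the combination processing in page-priority order; the layer representation (an RGB image plus an alpha mask per page) and the priority values are assumptions, and each frame of the special effect animation would be composed this way and anchored at the synthesis position.

```python
from typing import List, Tuple
import numpy as np


def combine_layers(layers: List[Tuple[int, np.ndarray, np.ndarray]]) -> np.ndarray:
    """Each layer is (priority, RGB image HxWx3, alpha mask HxW in [0, 1])."""
    # Draw from the lowest priority upward so higher-priority layers end up on top.
    ordered = sorted(layers, key=lambda layer: layer[0])
    canvas = np.zeros_like(ordered[0][1], dtype=np.float32)
    for _, rgb, alpha in ordered:
        canvas = alpha[..., None] * rgb + (1.0 - alpha[..., None]) * canvas
    return canvas.astype(np.uint8)


h, w = 120, 200
background_page = (np.full((h, w, 3), 40, np.uint8), np.ones((h, w), np.float32))
effect_frame = (np.full((h, w, 3), 200, np.uint8), np.zeros((h, w), np.float32))
effect_frame[1][30:90, 60:140] = 1.0          # the synthesis position region
foreground_page = (np.full((h, w, 3), 90, np.uint8), np.zeros((h, w), np.float32))
foreground_page[1][:, :20] = 1.0              # e.g. a foreground panel kept on top

frame = combine_layers([(1, *background_page), (2, *effect_frame), (3, *foreground_page)])
print(frame.shape)  # one frame of the synthesized special effect animation
```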
Optionally, after step 109, the execution body may further execute the following steps:
and the first step, in response to detecting a target decomposition control acting on the home page of the original virtual article, displaying a target virtual article decomposition page. The target virtual article decomposition page is provided with a corresponding virtual article decomposition control set and a virtual article information set to be decomposed. The target decomposition control may be a text control that jumps from the original virtual article home page to the virtual article decomposition page. The target virtual article resolution page may be a page for displaying a virtual article information set to be resolved, which belongs to different virtual article category information and is owned by a target user. The virtual article information to be decomposed may include, but is not limited to, at least one of the following: the virtual item icon, the number, the virtual name and the value information possession number to be decomposed.
And a second step of determining at least one selected virtual article decomposition control in response to detecting a selection operation acting on the set of virtual article decomposition controls.
And thirdly, determining virtual article type information of each piece of virtual article information to be decomposed in at least one piece of virtual article information to be decomposed corresponding to the at least one virtual article decomposition control in response to determining that the value information possession quantity corresponding to the at least one virtual article decomposition control is greater than or equal to a preset value information possession threshold, so as to obtain a virtual article type information set to be decomposed. The preset value information possession threshold may be the sum of the value information possession quantities of the at least one virtual article to be decomposed. For example, the preset value information possession threshold may be 120. The determination may be a determination of the matching relationship between the virtual article information and the virtual article type information. The virtual article type information set to be decomposed may be a subset of the virtual article type information set.
Fourth, classifying the at least one piece of virtual article information to be decomposed to generate a virtual article information group to be decomposed corresponding to each piece of virtual article type information in the virtual article type information set to be decomposed, and obtaining a virtual article information group set to be decomposed.
And fifthly, decomposing each virtual article information group to be decomposed in the virtual article information group set to be decomposed to obtain a class value fragment set. Wherein a class value fragment in the class value fragment set includes: a category fragment icon and a category fragment number corresponding to the virtual article type information. The class value fragment set and the virtual article type information set to be decomposed have a one-to-one correspondence.
As an example, the execution body may execute the following steps for each virtual article information group to be decomposed in the virtual article information group set to be decomposed. First, the virtual article information group to be decomposed whose value information possession quantity group is greater than or equal to a first preset value information threshold is determined as a first virtual article information group to be decomposed, together with a first value information possession quantity group. The first preset value information threshold may be a threshold on the value information possession quantity of a virtual article information group to be decomposed. For example, the first preset value information threshold may be 12. Secondly, the ratio of the first value information possession quantity group to a second preset value information threshold is determined as the category fragment quantity of the first virtual article information group to be decomposed. The second preset value information threshold may be the ratio of the value information possession quantity to the value fragment possession quantity. For example, the second preset value information threshold may be 100:9. Then, the virtual article information group to be decomposed whose value information possession quantity is smaller than the first preset value information threshold is determined as a second virtual article information group to be decomposed, together with a second value information possession quantity group. Then, the category fragment quantity of the second virtual article information group to be decomposed is determined to be 0. Finally, the category fragment quantity and the corresponding category fragment icon are determined as the class value fragment. The arithmetic is sketched below.
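The fragment arithmetic above can be summarized in a short sketch; the thresholds (12 and 100:9) are the example values from the text, while the data layout and category names are illustrative assumptions.

```python
FIRST_THRESHOLD = 12                    # minimum value-information possession that yields fragments
RATIO_VALUE, RATIO_FRAGMENTS = 100, 9   # second threshold: 100 units of value per 9 fragments

def category_fragment_count(value_possession: int) -> int:
    """Number of category fragments produced by decomposing one virtual article information group."""
    if value_possession < FIRST_THRESHOLD:
        return 0
    return value_possession * RATIO_FRAGMENTS // RATIO_VALUE

# Hypothetical value-information possession per virtual article type.
groups = {"rose": 130, "rocket": 8, "crown": 400}
fragments = {name: category_fragment_count(v) for name, v in groups.items()}
print(fragments)  # {'rose': 11, 'rocket': 0, 'crown': 36}
```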
And sixthly, performing feature recognition on the target virtual article decomposition page to obtain decomposition position information corresponding to the class value fragment set, and performing background segmentation on the target virtual article decomposition page to obtain a virtual article decomposition foreground page and a virtual article decomposition background page. The decomposition position information may be position information of the virtual article information to be decomposed for displaying the decomposition special effect. The virtual item decomposition foreground page may be a page including the target item in the virtual item decomposition page. The virtual article decomposition background page may be a page obtained by removing the virtual article decomposition foreground page from the virtual article decomposition page.
Seventh, combining the virtual article decomposition foreground page, the virtual article decomposition background page and each class value fragment in the class value fragment set to obtain a decomposition special effect set, and displaying each decomposition special effect in the decomposition special effect set according to the decomposition position information. A decomposition special effect in the decomposition special effect set may represent a dynamic image showing the virtual article information decomposition process. The decomposition special effect may be a page special effect obtained by combining the virtual article decomposition foreground page, the virtual article decomposition background page, and a decomposition special effect page corresponding to each class value fragment in the class value fragment set.
Optionally, after decomposing each virtual article information group to be decomposed in the virtual article information group set to be decomposed to obtain the class value fragment set, the execution body may further execute the following steps:
first, a category virtual article decomposition page corresponding to the selected virtual article category information is displayed. The category virtual article decomposing page is provided with a category virtual article decomposing control set and a category virtual article information set to be decomposed corresponding to the selected virtual article category information. The category virtual item resolution control in the category virtual item resolution control set may be a control for resolving virtual item information. The virtual article information of the type to be resolved in the virtual article information set of the type to be resolved may be virtual article information to be resolved corresponding to the selected virtual article type information.
And a second step of determining at least one selected category virtual article decomposition control in response to detecting a selection operation acting on the category virtual article decomposition control set.
And thirdly, in response to determining that the value information possession quantity corresponding to the at least one category virtual article decomposition control is greater than or equal to the preset value information possession threshold, decomposing at least one piece of category virtual article information to be decomposed corresponding to the at least one category virtual article decomposition control to obtain the class value fragment corresponding to the selected virtual article type information as a target class value fragment. Wherein the target class value fragment includes: the category fragment icon and category fragment number corresponding to the selected virtual article type information.
Optionally, after performing combination processing on the category virtual article decomposition foreground page, the category virtual article decomposition background page and the target class value fragment to obtain a category decomposition special effect, and displaying the category decomposition special effect according to the category decomposition position information, the execution body may further execute the following steps:
the first step is to acquire user behavior data of the virtual article synthesizing page, the virtual article decomposing page and the virtual article decomposing page. The user behavior data may be operation behavior data of the target user on a page. The acquisition may be an acquisition by a buried point free technique.
And secondly, tracking and predicting the user behavior data to obtain a predictive monitoring node set. Each predictive monitoring node in the predictive monitoring node set may be a node that characterizes a functional method corresponding to a user operation behavior. For example, a predictive monitoring node may be at least one of the following: a callback function of a click event, a show callback function of a list element, or a page life-cycle function.
As an example, the execution body may use a tracking model to track the user behavior path of the user behavior data, so as to obtain a user behavior path set. The tracking model may be a model for determining a user behavior path based on machine learning. For example, the tracking model may be a CNN model. Then, a set of monitoring nodes corresponding to each of the user behavior paths in the set of user behavior paths is determined. And finally, binding each monitoring node in the monitoring node set with the declarative embedded point code to obtain a predicted monitoring node set. The declarative embedded point code may be a code for monitoring and collecting data of each monitoring node. It should be noted that the declarative embedded point code can decouple the embedded point code and the page interaction business logic code, and can extract more comprehensive user behavior data on the premise of ensuring the page interaction quality.
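The idea of binding a declarative tracking (embedded-point) snippet to a predicted monitoring node, without mixing collection code into the page business logic, can be illustrated with a small decorator sketch; the event sink, node name and callback below are assumptions for illustration, not the disclosed implementation.

```python
import functools
import json
import time

COLLECTED_EVENTS = []  # stand-in for the real event sink / reporting channel

def tracked(node_name: str):
    """Declaratively attach data collection to a monitoring node without editing its body."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            COLLECTED_EVENTS.append({"node": node_name, "timestamp": time.time()})
            return func(*args, **kwargs)
        return wrapper
    return decorator

@tracked("synthesis_button_click")       # a predicted monitoring node: a click-event callback
def on_synthesis_click(item_id: str):
    return f"synthesize {item_id}"

on_synthesis_click("virtual_rose")
print(json.dumps(COLLECTED_EVENTS, indent=2))
```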
And thirdly, acquiring log data and user target behavior data acquired by the predictive monitoring node set. The log data may be operation data about the user on each page within a predetermined period.
And step four, extracting keywords from the log data to obtain the log data corresponding to the predictive monitoring nodes as target log data, and determining the similarity between the target log data and the user target behavior data as the accuracy.
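One plausible reading of this accuracy check, as a sketch only: treat the keywords extracted from the target log data and from the user target behavior data as sets and use their Jaccard similarity as the accuracy. The similarity measure and the keyword values are assumptions, since the text does not fix them.

```python
def jaccard_similarity(a: set, b: set) -> float:
    """Similarity between two keyword sets, used here as the 'accuracy' of the monitoring nodes."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

log_keywords = {"click", "synthesis_page", "gift_panel", "scroll"}      # from target log data (made up)
behavior_keywords = {"click", "synthesis_page", "gift_panel", "share"}  # from user target behavior data
accuracy = jaccard_similarity(log_keywords, behavior_keywords)
print(round(accuracy, 2))  # compared against the preset accuracy threshold described below
```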
And fifthly, in response to determining that the accuracy is greater than or equal to a preset accuracy threshold, clustering the user target behavior data to obtain a user category set. The preset accuracy threshold may be a threshold for judging the quality of the user behavior data collected by the predictive monitoring nodes. For example, the preset accuracy threshold may be 0.85. A user category in the user category set may be a user category obtained by classifying the user behavior data. For example, the user category set may include at least one of the following: consumer users, interactive users, and silent users.
As an example, the execution body may first perform dimension reduction processing on the user target behavior data by using a principal component analysis method to obtain dimension-reduced user behavior data. Wherein the dimension-reduced user behavior data includes: virtual article synthesis behavior features, virtual article decomposition behavior features and page interaction behavior features. Secondly, the dimension-reduced user behavior data is clustered by using a clustering algorithm to obtain the user category set. Wherein the clustering algorithm may be at least one of the following: a mini-batch optimized K-means algorithm or a hierarchical clustering algorithm.
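A minimal sketch of this dimension-reduction and clustering step, assuming scikit-learn and NumPy are available; the feature matrix, the number of principal components and the number of clusters are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
behavior_matrix = rng.random((500, 12))   # 500 users x 12 raw behavior features (made up)

# Reduce to e.g. synthesis / decomposition / page-interaction axes, then cluster in mini batches.
reduced = PCA(n_components=3).fit_transform(behavior_matrix)
labels = MiniBatchKMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)

# labels[i] is the user category index (e.g. consumer / interactive / silent) for user i.
print(np.bincount(labels))
```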
Sixth, for each user category in the user category set, the following fusion processing step is performed:
and a first sub-step of acquiring user comment data. The user comment data may be comment data of a page by a user. The user comment data may include: text data and image data.
And a second sub-step, carrying out user emotion analysis processing on the image data through an image emotion analysis model of the user emotion analysis model to obtain an image emotion feature vector. The image emotion feature vector may be a vector representing emotion tendencies of the user through the image. The user emotion analysis model may further include: text emotion analysis model and automatic fusion model. The image emotion analysis model may be a model for performing emotion analysis on image data.
As an example, the execution body may first extract visual features of the image by using a Swin Transformer network model to obtain a plurality of visual feature vectors. Next, color features of the binarized image are extracted by using a residual model to obtain a plurality of color feature vectors. The binarized image may be an image obtained by performing binarization processing on the image data. Then, iterative feature information fusion is performed on the plurality of visual feature vectors and the corresponding color feature vectors through a gated information fusion network and an attention mechanism model to obtain local image emotion feature vectors. Finally, the global image emotion feature vector and the local image emotion feature vectors are subjected to weighted fusion through a GU (gate unit) to obtain the image emotion feature vector. The global image emotion feature vector may be the vector located at the terminal position of the plurality of visual feature vectors. The GU may first determine a maximum value set, a minimum value set, an average value set, and a standard deviation set of the global image emotion feature vector and the local image emotion feature vectors, respectively, by using a pooling layer. Secondly, the maximum value set, the minimum value set, the average value set and the standard deviation set are spliced to obtain a spliced feature vector. Then, the spliced feature vector is sequentially input into a fully connected layer, a Tanh layer, a fully connected layer and a Sigmoid layer to obtain a first weight factor and a second weight factor. Finally, the sum of the product of the first weight factor and the global image emotion feature vector and the product of the second weight factor and the local image emotion feature vector is determined as the image emotion feature vector.
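The gate-unit fusion can be sketched in PyTorch as follows: four pooled statistics of each of the global and local vectors are concatenated, passed through a fully connected layer, Tanh, another fully connected layer and Sigmoid to produce two weight factors, and the weighted sum is the image emotion feature vector. The layer sizes and hidden dimension are assumptions; this is not the patented model.

```python
import torch
import torch.nn as nn

class GateUnit(nn.Module):
    """Weighted fusion of a global and a local image emotion feature vector."""
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.fc1 = nn.Linear(8, hidden)   # 4 statistics per vector x 2 vectors
        self.fc2 = nn.Linear(hidden, 2)   # -> first and second weight factors

    @staticmethod
    def _stats(v: torch.Tensor) -> torch.Tensor:
        # Max, min, mean and standard deviation pooled over the feature dimension.
        return torch.stack([v.max(dim=1).values, v.min(dim=1).values,
                            v.mean(dim=1), v.std(dim=1)], dim=1)

    def forward(self, global_vec: torch.Tensor, local_vec: torch.Tensor) -> torch.Tensor:
        gate_in = torch.cat([self._stats(global_vec), self._stats(local_vec)], dim=1)  # (batch, 8)
        weights = torch.sigmoid(self.fc2(torch.tanh(self.fc1(gate_in))))               # (batch, 2)
        w1, w2 = weights[:, :1], weights[:, 1:]
        return w1 * global_vec + w2 * local_vec

gate = GateUnit()
fused = gate(torch.randn(4, 256), torch.randn(4, 256))  # hypothetical 256-d emotion vectors
print(fused.shape)  # torch.Size([4, 256])
```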
And a third sub-step, carrying out user emotion analysis processing on the text data through a text emotion analysis model to obtain text emotion feature vectors. The text emotion feature vector may be a vector representing emotion tendencies of the user through text. The text emotion analysis model may be a model for emotion analysis of text data.
As an example, the execution body may first encode words in the text data by using an ERNIE (Enhanced Representation through kNowledge IntEgration) pre-trained model to obtain a word vector set. Wherein the word vectors in the word vector set are 768-dimensional word vectors. Secondly, the word vector set is input into a bidirectional long short-term memory network to obtain a context-aware text vector set. Thirdly, different weights are assigned to the text vector set through a semantic attention mechanism model to obtain emotion semantic information. Then, local feature extraction is performed on the text data through a convolutional neural network model with a plurality of filters to obtain a plurality of local text feature vectors, and a maximum pooling operation is performed on the plurality of local text feature vectors to obtain the local text feature vector. Finally, the emotion semantic information and the local text feature vector are spliced to obtain the text emotion feature vector.
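A simplified PyTorch sketch of this text branch follows: 768-dimensional contextual word vectors (random stand-ins for ERNIE outputs) go through a BiLSTM with a semantic attention layer, a parallel convolutional branch with several filter widths is max-pooled, and the two results are spliced. All layer sizes and filter widths are assumptions rather than the disclosed configuration.

```python
import torch
import torch.nn as nn

class TextEmotionEncoder(nn.Module):
    """BiLSTM + attention branch spliced with a max-pooled multi-width CNN branch."""
    def __init__(self, emb_dim=768, hidden=128, n_filters=64, widths=(2, 3, 4)):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.convs = nn.ModuleList(nn.Conv1d(emb_dim, n_filters, w) for w in widths)

    def forward(self, word_vectors: torch.Tensor) -> torch.Tensor:  # (batch, seq_len, 768)
        ctx, _ = self.bilstm(word_vectors)                          # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(ctx), dim=1)              # semantic attention weights
        semantic = (weights * ctx).sum(dim=1)                       # emotion semantic information

        x = word_vectors.transpose(1, 2)                            # (batch, 768, seq_len)
        local = [torch.relu(conv(x)).max(dim=-1).values for conv in self.convs]
        local = torch.cat(local, dim=1)                             # max-pooled local text features
        return torch.cat([semantic, local], dim=1)                  # spliced text emotion feature vector

encoder = TextEmotionEncoder()
features = encoder(torch.randn(2, 20, 768))  # 2 comments, 20 tokens each (random stand-ins)
print(features.shape)                        # torch.Size([2, 448])
```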
And a fourth sub-step, fusing the image emotion feature vector and the text emotion feature vector through an automatic fusion model to obtain a user emotion keyword set. A user emotion keyword in the user emotion keyword set may represent an emotion keyword of the user on the page. For example, the user emotion keyword may include at least one of the following: a keyword representing positive emotion, a keyword representing neutral emotion, or a keyword representing negative emotion. The automatic fusion model may be a model that fuses the image emotion feature vector and the text emotion feature vector.
As an example, the execution body may first splice the image emotion feature vector and the text emotion feature vector to obtain a spliced vector. Then, based on the spliced vector, the following determination steps are executed. First, the spliced vector is input into a first fully connected layer to obtain an emotion classification confidence corresponding to the spliced vector. Secondly, the emotion classification confidence is input into a second fully connected layer to obtain a reconstructed spliced vector. Wherein the function corresponding to the second fully connected layer is an inverse function of the function corresponding to the first fully connected layer. Thirdly, the loss value between the spliced vector and the reconstructed spliced vector is determined through a mean square error function. Fourthly, in response to determining that the loss value is smaller than a preset loss threshold, the emotion keyword set corresponding to the spliced vector is determined as the user emotion keyword set. Wherein the preset loss threshold may be 0.8. Finally, in response to determining that the loss value is greater than or equal to the preset loss threshold, the parameters of the first fully connected layer are adjusted to obtain an adjusted first fully connected layer, the adjusted first fully connected layer is determined as the first fully connected layer, and the determination steps are executed again.
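As a rough PyTorch sketch of this determination loop: one fully connected layer maps the spliced vector to an emotion classification confidence, a second layer maps the confidence back to a reconstructed spliced vector, and the mean squared error between the two decides whether to accept the result or adjust the first layer and retry. The dimensions, the optimizer, the iteration cap and the way the 0.8 threshold is applied are assumptions, not the disclosed training procedure.

```python
import torch
import torch.nn as nn

dim, n_classes, loss_threshold = 704, 3, 0.8   # 704 = 256-d image + 448-d text vector (assumed)
fc1 = nn.Linear(dim, n_classes)   # spliced vector -> emotion classification confidence
fc2 = nn.Linear(n_classes, dim)   # approximate inverse: confidence -> reconstructed spliced vector
optimizer = torch.optim.Adam(fc1.parameters(), lr=1e-3)   # only fc1 is adjusted, as described
mse = nn.MSELoss()

spliced = torch.randn(8, dim)     # spliced image + text emotion feature vectors (random stand-ins)
for _ in range(200):              # re-adjust fc1 until the reconstruction loss is acceptable
    confidence = torch.softmax(fc1(spliced), dim=1)
    reconstructed = fc2(confidence)
    loss = mse(reconstructed, spliced)
    if loss.item() < loss_threshold:
        break                     # accept: map confidences to positive / neutral / negative keywords
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```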
And seventhly, adjusting the virtual article synthesis page, the target virtual article decomposition page and the category virtual article decomposition page respectively according to the obtained user emotion keyword set of each user category.
The first to seventh steps and their related content serve as an invention point of the embodiments of the present disclosure, and solve the third technical problem mentioned in the background art: because the interactive operation data in the page is collected in full, a large amount of useless data is collected and stored, which wastes storage resources; analyzing the data then requires a large amount of computing resources, which wastes computing resources and lowers analysis accuracy, so that user preferences cannot be known accurately, page update iteration is slow, and the user experience is reduced. The factors that lead to slower page update iteration and reduced user experience are often as follows: because the interactive operation data in the page is collected in full, a large amount of useless data is collected and stored, wasting storage resources; analyzing the data requires a large amount of computing resources, wasting computing resources and lowering analysis accuracy, so that user preferences cannot be known accurately, page update iteration is slow, and the user experience is low. If these factors are addressed, the effect of improving the user experience can be achieved. To achieve this, the present disclosure first obtains relatively comprehensive user behavior data through a trackerless (point-free) technique. Secondly, tracking and prediction are performed on the comprehensive user behavior data to obtain predictive monitoring nodes with accurate prediction, which can reduce the collection of useless user behavior data and thereby reduce the waste of storage resources. Thirdly, the accuracy of the user data collected by the predictive monitoring nodes is determined through the log data, which improves the accuracy of the collected data and thus the accuracy of the subsequent analysis of the user behavior data, so as to characterize the target user precisely. Then, clustering processing is performed on the user behavior data, which facilitates page adjustment for users of different categories, makes the pages more personalized, and increases user stickiness. Next, comment data of each user category is obtained; visual and color features of different levels are extracted through the image emotion analysis model and their contributions to emotion analysis are organically fused to accurately locate emotion-distinguishing regions, so that more accurate, complete and multi-level image emotion information can be extracted. The problem of polysemy can be solved through the text emotion analysis model, so that the accuracy of text feature extraction can be improved. The information redundancy problem brought by constructing a multi-modal joint representation through a deterministic splicing operation can be alleviated through the automatic fusion model. Finally, the pages are adjusted through the obtained keyword sets, so that the pages are more personalized, user needs are better met, and the user experience is improved.
With further reference to fig. 2, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of a virtual article synthesizing apparatus, which correspond to those method embodiments shown in fig. 1, and which may be applied in particular in various electronic devices.
As shown in fig. 2, a virtual article synthesizing apparatus 200 includes: a first acquisition unit 201, a generation unit 202, a second acquisition unit 203, a display unit 204, a determination unit 205, a third acquisition unit 206, a synthesis unit 207, a feature recognition unit 208, and a combination processing unit 209. Wherein the first acquisition unit 201 is configured to: in response to detecting a selection operation acting on a real-time video playing head page, acquire a target playing control set corresponding to the selected real-time video playing page, wherein the real-time video playing head page has a real-time video playing page set. The generation unit 202 is configured to: generate a native page request in response to detecting a selection operation acting on any target playing control in the target playing control set, and parse the native page request to generate a native virtual article head page, wherein the native virtual article head page includes a virtual article type information set, the virtual article type information set is displayed in a carousel form, and the native virtual article head page is displayed in a pop-up layer form. The second acquisition unit 203 is configured to: in response to detecting a selection operation acting on any virtual article type information on the native virtual article head page, acquire a virtual article information set corresponding to the selected virtual article type information. The display unit 204 is configured to: display a virtual article synthesis page corresponding to the selected virtual article information set, wherein the virtual article synthesis page has a corresponding virtual article synthesis control set. The determination unit 205 is configured to: in response to detecting a selection operation acting on the virtual article synthesis control set, determine at least one selected virtual article synthesis control. The third acquisition unit 206 is configured to: acquire the value information possession quantity corresponding to the target user and the value fragment possession quantity corresponding to the virtual article type information as the target value information possession quantity and the target value fragment possession quantity. The synthesis unit 207 is configured to: in response to detecting that the target value information possession quantity is greater than or equal to the value information possession quantity corresponding to the at least one virtual article synthesis control and that the target value fragment possession quantity is greater than or equal to the value fragment possession quantity corresponding to the at least one virtual article synthesis control, synthesize the at least one virtual article corresponding to the at least one virtual article synthesis control to obtain at least one piece of virtual article information. The feature recognition unit 208 is configured to: perform feature recognition on the virtual article synthesis page to obtain synthesis position information corresponding to the at least one piece of virtual article information, and perform background segmentation on the virtual article synthesis page to obtain a virtual article synthesis foreground page and a virtual article synthesis background page.
The combination processing unit 209 is configured to: and combining the virtual article synthesis foreground page, the virtual article synthesis background page and each piece of virtual article information in the at least one piece of virtual article information to obtain at least one synthesis special effect, and displaying each synthesis special effect in the at least one synthesis special effect according to the synthesis position information.
It will be appreciated that the elements described in virtual article synthesizing apparatus 200 correspond to the various steps in the method described with reference to fig. 1. Thus, the operations, features and advantages described above with respect to the method are equally applicable to the virtual article synthesizing apparatus 200 and the units contained therein, and are not described herein.
Referring now to fig. 3, a schematic diagram of an electronic device (e.g., electronic device) 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 3 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various suitable actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data required for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 308 including, for example, magnetic tape, hard disk, etc.; and communication means 309. The communication means 309 may allow the electronic device 300 to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 shows an electronic device 300 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 3 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications device 309, or from storage device 308, or from ROM 302. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing means 301.
It should be noted that, in some embodiments of the present disclosure, the computer readable medium may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (Hyper Text Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: responding to the detection of the selection operation acting on the real-time video playing head page, and acquiring a target playing control set corresponding to the selected real-time video playing page; generating a native page request in response to detection of a selection operation acting on any one of the target play controls in the target play control set, and analyzing the native page request to generate a native virtual article head page; responding to the selection operation of any virtual article type information acting on the original virtual article head page, and acquiring a virtual article information set corresponding to the selected virtual article type information; displaying a virtual article synthesis page corresponding to the selected virtual article information set; in response to detecting a selection operation acting on the set of virtual article synthesis controls, determining at least one virtual article synthesis control selected; acquiring the value information possession quantity corresponding to the target user and the value fragment possession quantity corresponding to the virtual article type information as the target value information possession quantity and the target value fragment possession quantity; responding to the detection that the number of the target value information possession is greater than or equal to the number of the value information possession corresponding to the at least one virtual article synthesis control, and the number of the target value fragment possession is greater than or equal to the number of the value fragment possession corresponding to the at least one virtual article synthesis control, synthesizing the at least one virtual article corresponding to the at least one virtual article synthesis control, and obtaining at least one piece of virtual article information; performing feature recognition on the virtual article synthesis page to obtain synthesis position information corresponding to the at least one piece of virtual article information, and performing background segmentation on the virtual article synthesis page to obtain a virtual article synthesis foreground page and a virtual article synthesis background page; and combining the virtual article synthesis foreground page, the virtual article synthesis background page and each piece of virtual article information in the at least one piece of virtual article information to obtain at least one synthesis special effect, and displaying each synthesis special effect in the at least one synthesis special effect according to the synthesis position information.
Computer program code for carrying out operations for some embodiments of the present disclosure may be written in one or more programming languages, including an object-oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor includes a first acquisition unit, a generation unit, a second acquisition unit, a display unit, a determination unit, a third acquisition unit, a synthesis unit, a feature recognition unit, and a combination processing unit. The names of these units do not limit the unit itself in some cases, for example, the first obtaining unit may also be described as "a unit for obtaining, in response to detecting a selection operation acting on a real-time video playback head page, a target playback control set corresponding to the selected real-time video playback page".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, and also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by substituting the above features with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (9)

1. A virtual article synthesis method, comprising:
responding to the detection of a selection operation acting on a real-time video playing head page, and obtaining a target playing control set corresponding to the selected real-time video playing page, wherein the real-time video playing head page has a real-time video playing page set;
generating a native page request in response to detection of a selection operation acting on any target playing control in the target playing control set, and performing block processing on the selected real-time video playing page to obtain a video block page set;
Packaging the video block page set and the native page request into an instruction request;
requesting middleware through a preset message, and sending the instruction request to a native request processor;
controlling the native request processor to parse the instruction request to obtain a parsed instruction request;
calling a native block page set according to the parsed instruction request and the video block page set;
matching the video block page set with the native block page set to obtain a matching block page set;
identifying each matching block page in the matching block page set to generate matching block pages with different matching block page contents as target matching block pages to obtain a target matching block page set;
and replacing the video block page set according to the target matching block page set to obtain a primary virtual article head page, wherein the primary virtual article head page comprises: a virtual article type information set, wherein the virtual article type information set is displayed in a carousel mode, and the primary virtual article head page is displayed in a pop-up layer mode;
Responding to the selection operation of any virtual article type information acting on the original virtual article head page, and acquiring a virtual article information set corresponding to the selected virtual article type information;
displaying a virtual article synthesis page corresponding to the selected virtual article information set, wherein the virtual article synthesis page has a corresponding virtual article synthesis control set;
in response to detecting a selection operation acting on the set of virtual article synthesis controls, determining at least one virtual article synthesis control selected;
acquiring the value information possession quantity corresponding to the target user and the value fragment possession quantity corresponding to the virtual article type information as the target value information possession quantity and the target value fragment possession quantity;
responding to the detection that the number of the target value information possession is greater than or equal to the number of the value information possession corresponding to the at least one virtual article synthesis control, and the number of the target value fragments possession is greater than or equal to the number of the value fragments possession corresponding to the at least one virtual article synthesis control, synthesizing the at least one virtual article corresponding to the at least one virtual article synthesis control, and obtaining at least one piece of virtual article information;
Performing feature recognition on the virtual article synthesis page to obtain synthesis position information corresponding to the at least one piece of virtual article information, and performing background segmentation on the virtual article synthesis page to obtain a virtual article synthesis foreground page and a virtual article synthesis background page;
and combining the virtual article synthesis foreground page, the virtual article synthesis background page and each piece of virtual article information in the at least one piece of virtual article information to obtain at least one synthesis special effect, and displaying each synthesis special effect in the at least one synthesis special effect according to the synthesis position information.
2. The method of claim 1, wherein the method further comprises:
in response to detecting a target decomposition control acting on the home page of the original virtual article, displaying a target virtual article decomposition page, wherein the target virtual article decomposition page has a corresponding virtual article decomposition control set and a virtual article information set to be decomposed;
in response to detecting a selection operation acting on the virtual article decomposition control set, determining at least one selected virtual article decomposition control;
in response to determining that the value information possession quantity corresponding to the at least one virtual article decomposition control is greater than or equal to a preset value information possession threshold, determining virtual article type information of each piece of virtual article information to be decomposed in at least one piece of virtual article information to be decomposed corresponding to the at least one virtual article decomposition control, and obtaining a virtual article type information set to be decomposed;
Classifying the at least one piece of virtual article information to be decomposed to generate a virtual article information group to be decomposed corresponding to each piece of virtual article type information in the virtual article type information set to be decomposed, so as to obtain a virtual article information group set to be decomposed;
decomposing each virtual article information group to be decomposed in the virtual article information group set to be decomposed to obtain a class value fragment set, wherein a class value fragment in the class value fragment set comprises: a category fragment icon and a category fragment number corresponding to the virtual article type information;
performing feature recognition on the target virtual article decomposition page to obtain decomposition position information corresponding to the class value fragment set, and performing background segmentation on the target virtual article decomposition page to obtain a virtual article decomposition foreground page and a virtual article decomposition background page;
and carrying out combination treatment on the virtual article decomposition foreground page, the virtual article decomposition background page and each type value fragment in the type value fragment set to obtain a decomposition special effect set, and displaying each decomposition special effect in the decomposition special effect set according to the decomposition position information.
3. The method of claim 2, wherein after said decomposing each virtual item information set of the set of virtual item information sets to be decomposed to obtain a set of category value fragments, the method further comprises:
displaying a category virtual article decomposition page corresponding to the selected virtual article type information, wherein the category virtual article decomposition page comprises a category virtual article decomposition control set and a category virtual article information set to be decomposed corresponding to the selected virtual article type information;
in response to detecting a selection operation acting on the category virtual article decomposition control set, determining at least one selected category virtual article decomposition control;
and in response to determining that the value information possession quantity corresponding to the at least one category virtual article decomposition control is greater than or equal to the preset value information possession threshold, decomposing at least one piece of category virtual article information to be decomposed corresponding to the at least one category virtual article decomposition control to obtain a class value fragment corresponding to the selected virtual article type information as a target class value fragment.
4. The method of claim 1, wherein the generating a native page request and parsing the native page request to generate a native virtual article head page in response to detecting a selection operation on any one of the set of target play controls comprises:
Generating a target native page request in response to detection of a selection operation of a first target playing control acting on the target playing control set, and analyzing the target native page request to generate a virtual article fragment page, wherein the first target playing control is a link icon control of the virtual article fragment page;
and generating a native page request in response to detecting a selection operation acting on any virtual article fragment in the virtual article fragment page, and analyzing the native page request to generate a native virtual article head page.
5. The method of claim 1, wherein the generating a native page request and parsing the native page request to generate a native virtual article head page in response to detecting a selection operation on any one of the set of target play controls comprises:
in response to detecting a selection operation of a second target playing control acting on the target playing control set, displaying real-time video player information of the selected real-time video playing page, wherein the real-time video player information comprises real-time video player identity information and virtual article graphic control;
Responding to the detection of the selection operation acting on the virtual article graphic control, and displaying a target virtual article page corresponding to the real-time video player identity information, wherein the target virtual article page has a virtual article selection control set;
and generating a native page request in response to detecting a selection operation acting on any one virtual article selection control in the virtual article selection control set, and analyzing the native page request to generate a native virtual article head page.
6. The method of claim 1, wherein the synthesizing the at least one virtual item corresponding to the at least one virtual item synthesis control to obtain at least one virtual item information in response to detecting that the target value information possession number is greater than or equal to the value information possession number corresponding to the at least one virtual item synthesis control, the target value fragment possession number is greater than or equal to the value fragment possession number corresponding to the at least one virtual item synthesis control, comprises:
for each virtual article synthesis control of the at least one virtual article synthesis control, performing the determining step of:
The product of the value fragment possession quantity corresponding to the virtual article synthesis control and a preset value fragment threshold is determined to be the synthesized value fragment possession quantity of the virtual article information corresponding to the virtual article synthesis control;
determining the sum of the value fragment possession quantity corresponding to the virtual article synthesis control and the synthesized value fragment possession quantity as the target value fragment possession quantity of the virtual article synthesis control;
determining the value information possession quantity corresponding to the virtual article synthesis control as a target value information possession quantity;
determining the obtained product of the number of the obtained at least one target value fragment and the number of virtual article synthesis controls included in the at least one virtual article synthesis control as the value fragment number corresponding to the at least one virtual article synthesis control;
determining the obtained product of the number of the at least one target value information possession and the number of the virtual article synthesis controls included in the at least one virtual article synthesis control as the value information possession corresponding to the at least one virtual article synthesis control;
and in response to detecting that the number of the target value information owners is greater than or equal to the number of the value information owners corresponding to the at least one virtual article compositing control and the number of the target value fragments owners is greater than or equal to the number of the value fragments owners corresponding to the at least one virtual article compositing control, compositing the at least one virtual article corresponding to the at least one virtual article compositing control to obtain at least one piece of virtual article information.
7. A virtual article synthesizing apparatus comprising:
the first acquisition unit is configured to respond to detection of a selection operation acting on a real-time video playing head page, and acquire a target playing control set corresponding to the selected real-time video playing page, wherein the real-time video playing head page has a real-time video playing page set;
the generation unit is configured to generate a native page request in response to detecting a selection operation acting on any target playing control in the target playing control set, and perform block processing on the selected real-time video playing page to obtain a video block page set; package the video block page set and the native page request into an instruction request; request middleware through a preset message, and send the instruction request to a native request processor; control the native request processor to parse the instruction request to obtain a parsed instruction request; call a native block page set according to the parsed instruction request and the video block page set; match the video block page set with the native block page set to obtain a matching block page set; identify each matching block page in the matching block page set to generate matching block pages with different matching block page contents as target matching block pages, so as to obtain a target matching block page set; and replace the video block page set according to the target matching block page set to obtain a primary virtual article head page, wherein the primary virtual article head page comprises: a virtual article type information set, wherein the virtual article type information set is displayed in a carousel mode, and the primary virtual article head page is displayed in a pop-up layer mode;
A second acquisition unit configured to acquire a virtual article information set corresponding to the selected virtual article type information in response to detection of a selection operation of any one of the virtual article type information acting on the original virtual article head page;
the display unit is configured to display a virtual article synthesis page corresponding to the selected virtual article information set, wherein the virtual article synthesis page has a corresponding virtual article synthesis control set;
a determination unit configured to determine the selected at least one virtual article composition control in response to detecting a selection operation acting on the set of virtual article composition controls;
a third acquisition unit configured to acquire a value information possession quantity corresponding to a target user, a value fragment possession quantity corresponding to the virtual article type information, as a target value information possession quantity and a target value fragment possession quantity;
the synthesizing unit is configured to synthesize at least one virtual article corresponding to the at least one virtual article synthesizing control to obtain at least one piece of virtual article information in response to detecting that the number of the target value information owners is greater than or equal to the number of value information owners corresponding to the at least one virtual article synthesizing control and the number of the target value fragments owners is greater than or equal to the number of value fragments owners corresponding to the at least one virtual article synthesizing control;
The feature recognition unit is configured to perform feature recognition on the virtual article synthesis page to obtain synthesis position information corresponding to the at least one piece of virtual article information, and perform background segmentation on the virtual article synthesis page to obtain a virtual article synthesis foreground page and a virtual article synthesis background page;
and the combination processing unit is configured to perform combination processing on the virtual article synthesis foreground page, the virtual article synthesis background page and each piece of virtual article information in the at least one piece of virtual article information to obtain at least one synthesis special effect, and display each synthesis special effect in the at least one synthesis special effect according to the synthesis position information.
8. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, causes the one or more processors to implement the method of any of claims 1-6.
9. A computer readable medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-6.
CN202311076432.6A 2023-08-25 2023-08-25 Virtual article synthesis method, apparatus, electronic device, and computer-readable medium Active CN116781965B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311076432.6A CN116781965B (en) 2023-08-25 2023-08-25 Virtual article synthesis method, apparatus, electronic device, and computer-readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311076432.6A CN116781965B (en) 2023-08-25 2023-08-25 Virtual article synthesis method, apparatus, electronic device, and computer-readable medium

Publications (2)

Publication Number Publication Date
CN116781965A CN116781965A (en) 2023-09-19
CN116781965B true CN116781965B (en) 2023-11-24

Family

ID=87988187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311076432.6A Active CN116781965B (en) 2023-08-25 2023-08-25 Virtual article synthesis method, apparatus, electronic device, and computer-readable medium

Country Status (1)

Country Link
CN (1) CN116781965B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112346634A (en) * 2020-11-23 2021-02-09 腾讯科技(深圳)有限公司 Virtual article issuing method and device
CN112423022A (en) * 2020-11-20 2021-02-26 北京字节跳动网络技术有限公司 Video generation and display method, device, equipment and medium
CN114090862A (en) * 2020-08-24 2022-02-25 腾讯科技(深圳)有限公司 Information processing method and device and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111277908B (en) * 2020-01-16 2021-04-06 北京达佳互联信息技术有限公司 Data processing method, device, server, live broadcast system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant