CN116301964A - Page updating method, device and system - Google Patents

Page updating method, device and system

Info

Publication number
CN116301964A
CN116301964A
Authority
CN
China
Prior art keywords
page
data
scene
user
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310035766.2A
Other languages
Chinese (zh)
Inventor
孙梓琳
程冲
黄定磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hema China Co Ltd
Original Assignee
Hema China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hema China Co Ltd filed Critical Hema China Co Ltd
Priority to CN202310035766.2A priority Critical patent/CN116301964A/en
Publication of CN116301964A publication Critical patent/CN116301964A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/60Software deployment
    • G06F8/65Updates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/958Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • G06F16/986Document structures and storage, e.g. HTML extensions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Mining & Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of this specification provide a page updating method, apparatus, and system. The page updating method is applied to a user terminal and includes the following steps: sending a loading request to a server in response to an operation instruction submitted by a user, and receiving scene data returned by the server for the loading request; generating an interaction page from scene page data contained in the scene data, and displaying the interaction page to the user; when a target signal associated with the interaction page is acquired, determining the video data corresponding to the target signal within the scene data; and updating the interaction page based on the video data, and playing the interactive video associated with the target signal through the updated interaction page.

Description

Page updating method, device and system
Technical Field
Embodiments of the invention relate to the technical field of human-computer interaction, and in particular to a page updating method, device, and system.
Background
With the development of internet technology, more and more services have moved online, and shopping apps effectively improve the convenience of users' lives by offering an online shopping mode. When a user shops with a shopping app, an image or a text description of the purchased item is usually displayed so that the user can conveniently learn about it, improving the sense of realism of the purchase. In the prior art, such display content is mostly fixed: only images and/or text are shown, or a promotional video for the commodity is played. This fails to achieve any interaction between the user and the purchased item, resulting in a poor purchasing experience, so an effective solution to the above problems is needed.
Disclosure of Invention
In view of this, the present embodiments provide a page updating method. One or more embodiments of this specification further relate to a page updating apparatus, two page updating systems, a computing device, a computer-readable storage medium, and a computer program, which address the technical drawbacks of the prior art.
According to a first aspect of embodiments of the present disclosure, there is provided a page update method, applied to a user terminal, including:
responding to an operation instruction submitted by a user, sending a loading request to a server, and receiving scene data returned by the server for the loading request;
generating an interaction page according to scene page data in the scene data and displaying the interaction page to the user;
under the condition that a target signal associated with the interactive page is acquired, determining video data corresponding to the target signal in the scene data;
and updating the interactive page based on the video data, and playing the interactive video associated with the target signal through the updated interactive page.
According to a second aspect of embodiments of the present disclosure, there is provided a page updating apparatus, applied to a user terminal, including:
The receiving data module is configured to respond to an operation instruction submitted by a user, send a loading request to a server, and receive scene data returned by the server for the loading request;
the generation page module is configured to generate an interaction page according to scene page data in the scene data and display the interaction page to the user;
the determining data module is configured to determine video data corresponding to a target signal in the scene data under the condition that the target signal associated with the interaction page is acquired;
and the video playing module is configured to update the interactive page based on the video data and play the interactive video associated with the target signal through the updated interactive page.
According to a third aspect of embodiments of the present disclosure, there is provided a page update method, applied to a user terminal, including:
responding to an operation instruction submitted by a user, sending a loading request to a server, and receiving commodity display data returned by the server for the loading request;
generating a commodity interaction page according to the commodity page data in the commodity display data and displaying the commodity interaction page to the user;
Under the condition that an interaction signal related to the commodity interaction page is acquired, determining commodity video data corresponding to the interaction signal in the commodity display data;
and updating the commodity interaction page based on the commodity video data, and playing the commodity interaction video associated with the interaction signal through the updated commodity interaction page.
According to a fourth aspect of embodiments of the present disclosure, there is provided a page updating apparatus, applied to a user terminal, including:
the receiving data module is configured to respond to an operation instruction submitted by a user, send a loading request to a server, and receive commodity display data returned by the server for the loading request;
the generation page module is configured to generate a commodity interaction page according to commodity page data in the commodity display data and display the commodity interaction page to the user;
the acquisition signal module is configured to determine commodity video data corresponding to the interaction signal in the commodity display data under the condition that the interaction signal related to the commodity interaction page is acquired;
and the updating page module is configured to update the commodity interaction page based on the commodity video data and play the commodity interaction video associated with the interaction signal through the updated commodity interaction page.
According to a fifth aspect of embodiments of the present disclosure, there is provided a page update system, including a user terminal and a server, the system including:
the user terminal is used for responding to an operation instruction submitted by a user and sending a loading request to the server;
the server is used for reading the scene data according to the loading request and sending the scene data to the user terminal;
the user terminal is further used for generating an interaction page according to scene page data in the scene data and displaying the interaction page to the user; under the condition that a target signal associated with the interactive page is acquired, determining video data corresponding to the target signal in the scene data; and updating the interactive page based on the video data, and playing the interactive video associated with the target signal through the updated interactive page.
According to a sixth aspect of embodiments of the present specification, there is provided another page update system, comprising:
a user terminal and a server;
the server is used for storing scene data, the user terminal is used for executing page update executable instructions, and the page update executable instructions realize the steps of the page update method when being executed by the user terminal.
According to a seventh aspect of embodiments of the present specification, there is provided a computing device comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions that, when executed, implement the steps of any of the page update methods described above.
According to an eighth aspect of embodiments of the present specification, there is provided a computer readable storage medium storing computer executable instructions which, when executed by a processor, implement the steps of the page update method described above.
According to a ninth aspect of embodiments of the present specification, there is provided a computer program, wherein the computer program, when executed in a computer, causes the computer to perform the steps of the above-described page update method.
To improve user experience, the page updating method provided by this embodiment sends a loading request to the server after receiving an operation instruction submitted by a user, and receives the scene data the server feeds back for that request. Scene page data can then be extracted from the scene data and used to generate an interaction page that is displayed to the user. When a target signal from the user for the interaction page is acquired, the video data corresponding to that signal is determined within the scene data and used to update the interaction page, so that the interactive video corresponding to the target signal is played on the updated page. By pairing the target application with this human-computer interaction mechanism, the user's sense of participation, and thus the overall user experience, is improved.
Drawings
FIG. 1 is a schematic diagram of a page update method according to an embodiment of the present disclosure;
FIG. 2a is a flow chart of a page update method provided in one embodiment of the present disclosure;
FIG. 2b is a schematic diagram of a page in a page update method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an interactive page in a page update method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of interactive video in a page update method according to an embodiment of the present disclosure;
FIG. 5 is a process flow diagram of a page update method according to one embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a page update apparatus according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a page update system according to one embodiment of the present disclosure;
FIG. 8 is a flow chart of another page update method provided by one embodiment of the present disclosure;
FIG. 9 is a schematic diagram of another page updating apparatus according to an embodiment of the present disclosure;
FIG. 10 is a block diagram of a computing device provided in one embodiment of the present description.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of this specification. However, this specification can be implemented in many forms other than those described herein, and those skilled in the art can make similar generalizations without departing from its spirit; this specification is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of this specification to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of this specification, a first may also be referred to as a second, and similarly a second may be referred to as a first. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to a determination".
In the present specification, a page updating method is provided, and the present specification relates to a page updating apparatus, a computing device, two types of page updating systems, a computer-readable storage medium, and a computer program, one by one, in the following embodiments.
Referring to the schematic diagram shown in FIG. 1, to improve user experience, the page updating method provided by this embodiment sends a loading request to a server after receiving an operation instruction submitted by a user, and receives the scene data the server feeds back for that request. Scene page data can then be extracted from the scene data and used to generate an interaction page that is displayed to the user. When a target signal from the user for the interaction page is acquired, the video data corresponding to that signal is determined within the scene data and used to update the interaction page, so that the interactive video corresponding to the target signal is played on the updated page; this human-computer interaction mechanism improves the user's sense of participation and overall experience when using the target application.
It should be noted that the user information (including but not limited to user equipment information, personal information, etc.) and data (including but not limited to data for analysis, stored data, and displayed data) involved in this application are information and data authorized by the user or fully authorized by all parties concerned; the collection, use, and processing of such data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation entries are provided for users to choose to authorize or refuse.
Fig. 2a shows a flowchart of a page update method according to an embodiment of the present disclosure, where the method is applied to a user terminal, and specifically includes the following steps.
Step S202, a loading request is sent to a server in response to an operation instruction submitted by a user, and scene data returned by the server for the loading request is received.
Specifically, the user terminal refers to a terminal device held by the user, including but not limited to a mobile phone, a computer, or an intelligent wearable device. The user terminal is equipped with sensors capable of collecting signals; there may be several of them, so that different types of signals can be collected, such as motion signals, pressure signals, and sound signals. These signals are later used to select different interactive videos and so realize interaction with the user. Correspondingly, the server refers to the side that provides the related services of the target application and is responsible for feeding scene data back to the user terminal.
Further, the operation instruction is an instruction submitted by the user while using the target application installed on the user terminal; it triggers entry into the interaction page, at which point a loading request must be sent to the server according to the instruction. The loading request is a request sent to the server for scene data; it can carry information about the content to be displayed on the interaction page, so that the server can read the corresponding scene data according to the request and feed it back to the user terminal. The scene data refers to the data used to generate the interaction page and to display the corresponding interactive videos on it.
It should be noted that the operation instruction submitted by the user may target a set control in the target application, and the loading request generated from the instruction may be built from content customized by the user or from content preset for each period in the target application; this embodiment imposes no limitation. In the customized case, the scene data fed back by the server is the corresponding user-customized content; in the per-period case, it is the preset content.
For example, if a user of a shopping app selects a frying pan as the cooking tool and steak as the cooking object, a loading request carrying this information can be sent to the app's server, which feeds back the scene data associated with the pan and the steak for subsequent generation of the interaction page and the corresponding interactive videos. If instead the user invokes a preset per-period feature of the shopping app, the loading request is sent directly to the server, which feeds back scene data according to the display content preset for that day; the scene data might then correspond to a soup pot as the cooking tool and egg soup as the cooking object, again used to generate the interaction page and the corresponding interactive videos.
On this basis, after receiving the operation instruction submitted by the user, the user terminal responds by sending the loading request for the interaction mechanism to the server, and the server feeds back the scene data corresponding to that request. Having received the scene data, the user terminal can then display the interaction page and, once the target signal is acquired, display the interactive video corresponding to that signal, thereby improving the user experience.
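The client-side flow just described can be sketched as follows. This is a hypothetical illustration only: the function names, dictionary keys, and signal names (`fetch_scene_data`, `shake_signal`, etc.) are assumptions for the example and are not specified by the patent.

```python
# Hypothetical sketch of the client-side flow: loading request -> scene data
# -> interaction page -> target signal -> updated page playing a video.
# All names and data shapes here are illustrative assumptions.

def fetch_scene_data(load_request):
    """Stand-in for the server call: returns scene data for a loading request."""
    return {
        "scene_page_data": {"title": load_request["content"]},
        "video_data": {  # one interactive video per possible target signal
            "shake_signal": "frying_video.mp4",
            "press_signal": "plating_video.mp4",
        },
    }

def generate_interaction_page(scene_page_data):
    """Build the interaction page from the scene page data; no video yet."""
    return {"title": scene_page_data["title"], "video": None}

def update_page_on_signal(page, scene_data, target_signal):
    """Determine the video data matching the signal and update the page."""
    video = scene_data["video_data"].get(target_signal)
    if video is not None:
        page = dict(page, video=video)  # the updated page plays this video
    return page

# Flow: operation instruction -> loading request -> scene data -> page -> signal
scene_data = fetch_scene_data({"content": "pan + steak"})
page = generate_interaction_page(scene_data["scene_page_data"])
page = update_page_on_signal(page, scene_data, "shake_signal")
```

A real implementation would render the page in the app's UI layer and receive target signals from the terminal's sensors; the sketch only traces the data flow between the four steps.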
Before this, note that the scene data fed back by the server is used to generate the interaction page and the corresponding interactive video and so to better serve the user's purchasing needs. To ensure that the fed-back scene data matches those needs, the server can therefore proceed as follows before feeding it back:
step 1, receiving a behavior request submitted by a commodity page displayed by a user through a user terminal.
Specifically, the commodity page refers to the page currently displayed by the user terminal; it contains a commodity list through which the user can search for or purchase commodities and view their detailed information. The behavior request is the request triggered when the user purchases or searches for a commodity through the commodity page. Like the loading request, it can trigger the processing that determines the scene data, so that once the server has determined the scene data it can be fed back to the user terminal, making the user's purchasing operations more convenient.
On this basis, after receiving the behavior request, and in order to provide the user with a better commodity purchasing service, the server responds to the request by determining the subsequent commodity recommendation scene, so that scene data can be determined along the scene dimension and interaction with the user realized at the user terminal.
For example, when a user shops with the H shopping application, clicking into the application and searching for a commodity displays the commodity list page shown in FIG. 2b, on which the commodities associated with the search content are listed. If the user then submits an add-to-cart instruction for commodity B in the list, a behavior request can be sent to the server according to that instruction, so that the server can compute the associated scenes from the user's add-to-cart action on commodity B and recommend their content, thereby recommending other commodities that more effectively meet the user's purchasing needs.
Step 2: determine the scene information associated with the behavior request and read the historical behavior information corresponding to the user.
Specifically, after the behavior request is received, and in order to improve the accuracy with which the recommendation scene is determined so that the user can be reached precisely, the scene information associated with the behavior request is determined first, and the historical behavior information corresponding to the user is read at the same time. The scene information and the user's historical behavior information can then be combined for the subsequent determination of the commodity recommendation scene, ensuring that the determined scene meets the user's shopping needs along multiple dimensions.
The scene information refers to the description information of the scenes associated with the behavior request currently triggered by the user; several scenes possibly associated with the request can be identified through it, and the subsequent determination of the commodity recommendation scene proceeds on this basis. The scene information may be an empty set, that is, the behavior request may not be associated with any scene. It may also include description information about upcoming holidays, for example that holiday F1 is one day away, for later use in determining the commodity recommendation scene. Correspondingly, the historical behavior information refers to the description information of the user's behavior in the shopping application before the current moment, including but not limited to browsing, purchasing, consumption, and follow behavior; this embodiment imposes no limitation.
Further, when determining the scene information associated with a behavior request, consider that different user operations trigger different behavior requests and that different behavior requests correspond to different scenes. The behavior request therefore needs to be parsed, so that the scene information of the associated scenes can be determined from the parsing result for convenient later use.
Specifically: parse the behavior request to obtain the commodity attribute information of the corresponding target commodity; determine at least one recommended scene associated with the target commodity according to the commodity attribute information; and compose the scene information associated with the behavior request from the scene sub-information of each recommended scene.
Specifically, the target commodity specifically refers to a commodity selected or searched by a user when submitting a behavior request through a commodity page in a user terminal; the commodity attribute information specifically refers to type information, usage information and the like of the target commodity, and can be used for specifying a scene associated with the commodity. The recommended scene specifically refers to a scene to which the target commodity is associated, such as a holiday scene, a usage scene, and the like. The scene sub-information is the description information corresponding to each recommended scene.
On this basis, after the behavior request is obtained, it can be parsed to determine the target commodity, whose commodity attribute information is then read in order to determine at least one recommended scene associated with it. The scenes possibly associated with the user's current shopping behavior can thus be determined initially, after which the scene sub-information of each recommended scene is combined into the scene information associated with the behavior request, ready to be combined with the historical behavior information to determine the commodity recommendation scene.
Following the above example, after the behavior request is obtained it is parsed, and it is determined that the commodity the user has just added to the cart is B. The attribute information of commodity B can then be read, and from it the holiday scenes related to the commodity are determined, namely the F1, F2, and F3 holidays; the scene sub-information of each holiday scene is then combined into the scene information for later use in determining the commodity recommendation scene.
In practical applications, the recommended scenes determined from the commodity attribute information need not be of the same type and may include both holiday scenes and usage scenes, ensuring the richness of the recommended scenes and facilitating the subsequent selection of commodity recommendation scenes that fit the user more closely.
In summary, determining the recommended scenes in combination with the commodity attribute information further enriches the candidate scenes, and the commodity recommendation scene can then be determined from multiple angles on that basis, ensuring that the determined scene fits the user more closely.
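Composing the scene information from a parsed behavior request can be sketched as below. The attribute-to-scene table and all scene names are invented for the example; the patent does not specify how this association is stored or queried.

```python
# Illustrative sketch: parse the behavior request, map the target commodity's
# attributes to recommended scenes, and merge the scene sub-information.
# ATTRIBUTE_TO_SCENES and its contents are assumptions for the example.

ATTRIBUTE_TO_SCENES = {
    "steak": ["F1 holiday", "home-cooking usage"],
    "soup pot": ["winter usage"],
}

def parse_behavior_request(request):
    """Parse the request to obtain the target commodity's attribute information."""
    return request["commodity_attributes"]

def compose_scene_info(request):
    """Determine the recommended scenes per attribute and merge their sub-info."""
    scene_info = []
    for attribute in parse_behavior_request(request):
        for scene in ATTRIBUTE_TO_SCENES.get(attribute, []):
            if scene not in scene_info:  # keep each recommended scene once
                scene_info.append(scene)
    return scene_info  # may be empty: the request may match no scene at all

scene_info = compose_scene_info({"commodity_attributes": ["steak"]})
```

The empty-list return path mirrors the text's note that the scene information may be an empty set when the behavior request is not associated with any scene.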
Further, when determining the historical behavior information, consider that the user's historical behavior before the current moment may be massive; reading and using all of it could consume considerable computing resources. To save computing resources while preserving the accuracy of the commodity recommendation scene, only the historical behavior information within an information reading period is selected for use. In this embodiment, the specific implementation is as follows:
Determining a user identifier corresponding to a user and an information reading period corresponding to a behavior request; reading initial historical behavior information corresponding to a user according to a user identifier, wherein the initial historical behavior information comprises commodity purchase information and commodity browsing information; and filtering the initial historical behavior information based on the information reading period to obtain the historical behavior information.
Specifically, the user identifier specifically refers to a unique identifier corresponding to the user, and the information reading period specifically refers to a period for intercepting historical behavior information, and it is to be noted that the information reading period needs to be closer to the current moment for receiving the behavior request, so that it is ensured that the recent shopping requirement of the user can be met when determining the commodity recommendation scene. The initial historical behavior information specifically refers to global historical behavior information of a user using a shopping application, and the global historical behavior information comprises commodity purchase information and commodity browsing information, wherein the commodity purchase information is behavior information corresponding to a commodity purchased by the user, and the commodity browsing information is behavior information corresponding to a commodity browsed by the user.
On this basis, in order to save computing resources and ensure the accuracy of the commodity recommendation scene, the user identifier corresponding to the user and the information reading period corresponding to the behavior request are determined first. The initial historical behavior information corresponding to the user, i.e., the global historical behavior information containing all of the user's commodity purchase and browsing information, is then read and filtered based on the information reading period, yielding the historical behavior information corresponding to the user's recent purchasing behavior for the subsequent determination of the commodity recommendation scene in combination with the scene information.
The length of the information reading period may be set as required, for example 1 day, 2 days, or 1 week; this embodiment is not limited in this respect.
Following the above example, it is determined that the user identifier corresponding to the user is id_1 and the information reading period corresponding to the behavior request is T1-T2. All the historical behavior information corresponding to the user may then be read, including all of the user's commodity purchase behavior information and commodity browsing behavior information from the day of registration to the present. This information is then screened according to the information reading period T1-T2, and the historical behavior information falling within T1-T2 is selected for use in the subsequent determination of the commodity recommendation scene.
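The filtering step above can be sketched as follows. This is a minimal illustration only; the record fields, item names, and dates are assumptions for the example, not part of the embodiment.

```python
from datetime import datetime

# Hypothetical sketch: filter the user's global behavior history by the
# information reading period [t1, t2]. Record fields are illustrative.
def filter_history(records, t1, t2):
    """Keep only the behavior records whose timestamp falls within [t1, t2]."""
    return [r for r in records if t1 <= r["timestamp"] <= t2]

global_history = [
    {"type": "purchase", "item": "steak", "timestamp": datetime(2023, 1, 5)},
    {"type": "browse",   "item": "pan",   "timestamp": datetime(2022, 6, 1)},
]
# Reading period T1-T2, here taken as one recent week
recent = filter_history(global_history,
                        datetime(2023, 1, 1), datetime(2023, 1, 8))
# only the steak purchase falls inside the reading period
```

In practice the period bounds would be derived from the time the behavior request is received, as described above.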
In summary, by determining the historical behavior information through both the user identifier and the information reading period, the determined historical behavior information is guaranteed to be associated with the time at which the behavior request was submitted. The subsequent determination of the commodity recommendation scene can therefore be completed in combination with the user's recent behavior, so that the commodity recommendation scene fits the user better and the reach rate is improved.
In addition, it should be noted that the historical behavior information corresponding to the user may further include the operations performed by the user in the shopping application before searching for the target commodity or adding it to the shopping cart, so that the commodity recommendation scene is determined using recent historical behavior information, which yields higher accuracy.
And step 3, determining commodity recommendation scenes of the associated users according to the scene information and the historical behavior information.
Specifically, after the scene information and the historical behavior information are determined, and in order to ensure that the scene returned and recommended to the user is close to the user's recent behavior, the commodity recommendation scene may be determined according to the scene information and the historical behavior information. The commodity recommendation scene refers to a scene that can effectively reach the user's purchasing requirement at the current moment; the commodities associated with that scene are also the commodities associated with the user's purchasing requirement, so that after the user triggers the commodity recommendation scene, the recommendation can be completed in combination with the commodities in that specific scene.
In practical application, determining the commodity recommendation scene by combining the scene information and the historical behavior information may be realized by a recommendation algorithm deployed on the server side. The recommendation algorithm is a machine learning model: the scene information and the historical behavior information are input into the recommendation model for processing, and the commodity recommendation scene output by the model is obtained. In addition, in order to reduce the computing pressure on the server, this may instead be realized by an algorithm model built into the user terminal, that is, the model is built into the shopping application on the user terminal and the computation is completed using the terminal's own computing power to obtain the commodity recommendation scene. It should be noted that in this case the information required for the process must first be sent by the server to the user terminal before the commodity recommendation scene is determined. Whether deployed on the server side or built into the user terminal, the algorithm is a sufficiently trained recommendation model.
In implementation, considering that the target commodity associated with the behavior request may not be associated with any recommendation scene, the determination of the commodity recommendation scene can also be completed directly from the historical behavior information and the attribute information of the target commodity. For example, where the attribute information of the target commodity includes usage information, the F2 holiday scene may be determined according to the historical behavior information and the usage information, so that the scene data corresponding to the F2 holiday scene is fed back to the user terminal.
Further, in order to ensure the accuracy of the determined commodity recommendation scene and to meet the user's current shopping requirement, the determination can be completed in combination with the association degree between each scene and the user. In this embodiment, the specific implementation manner is as follows:
determining at least one first recommended scene associated with the user according to the scene information; and calculating the association degree between each first recommended scene and the user according to the historical behavior information, and selecting a commodity recommended scene from at least one first recommended scene according to the association degree.
Specifically, the first recommended scene refers to a scene associated with the user that is determined according to the scene information; such a scene is associated to a certain extent with the user's current shopping behavior. Correspondingly, the association degree refers to the degree of association between the user and each first recommended scene: the higher the association degree, the closer the recommended scene is to the user's current shopping behavior; the lower the association degree, the less relevant the recommended scene is to the user's current shopping behavior.
Based on this, in the case where at least one first recommended scene associated with the user is determined according to the scene information, and in order to determine from the plurality of first recommended scenes a commodity recommendation scene sufficiently strongly associated with the user, the association degree between each first recommended scene and the user may be calculated according to the historical behavior information. The association degrees are then ranked, and the first recommended scene with the highest association degree is selected as the commodity recommendation scene according to the ranking result.
Following the above example, after the F1 holiday scene, the F2 holiday scene and the F3 holiday scene are determined, the association degree between each holiday scene and the user is calculated according to the user's historical behavior information, that is, the association degree between each holiday scene and the user's recent shopping behavior. According to the calculation result, the association degree between the F1 holiday scene and the user is S1, between the F2 holiday scene and the user is S2, and between the F3 holiday scene and the user is S3, with S2 > S1 > S3. The F2 holiday scene, having the highest association degree, may then be selected as the commodity recommendation scene associated with the user, after which the scene data is read and fed back to the user terminal.
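The selection rule, pick the candidate scene whose association degree is highest, can be sketched as below. The numeric scores are placeholders; in the embodiment they would be computed from the historical behavior information.

```python
# Hypothetical association degrees S1, S2, S3 between each first recommended
# scene and the user, with S2 > S1 > S3 as in the example.
association = {"F1": 0.5, "F2": 0.8, "F3": 0.2}

def select_commodity_scene(association):
    """Return the candidate scene with the highest association degree."""
    return max(association, key=association.get)

best = select_commodity_scene(association)
```

Sorting the full dictionary instead of taking the maximum would also yield the ranking described above, at slightly higher cost.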
In summary, determining the commodity recommendation scene by calculating association degrees allows the commodity recommendation scene to be determined accurately from a plurality of recommended scenes, which improves the probability of reaching the user, achieves the purpose of assisting the user in shopping, and improves the user's shopping experience.
In addition, since the scenes associated with the scene information may not cover all the scenes related to the user's current shopping needs, determining the commodity recommendation scene based on those scenes alone may leave some recommended scenes undiscovered. The recommended scenes can therefore be supplemented in combination with the historical behavior information and then screened after the supplementation, thereby ensuring higher screening precision.
Determining at least one second recommended scene associated with the user according to the historical behavior information; integrating at least one first recommended scene and at least one second recommended scene to obtain a scene to be recommended; according to the historical behavior information, calculating the association degree between each scene to be recommended and the user; and selecting a commodity recommendation scene from the scenes to be recommended according to the association degree.
Specifically, the second recommended scene specifically refers to a recommended scene determined based on the historical behavior information, and the second recommended scene is different from the first recommended scene; correspondingly, the scene to be recommended specifically refers to a recommended scene combined based on the first recommended scene and the second recommended scene.
Based on the above, and in order to ensure that the association degree calculation is performed over recommended scenes associated with the user, at least one second recommended scene associated with the user can be determined from the historical behavior information. The first recommended scenes and the second recommended scenes are then combined to obtain the scenes to be recommended, and the commodity recommendation scene is obtained by calculating the association degree between each scene to be recommended and the user.
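The combination step can be sketched as a simple order-preserving union of the two candidate lists; the scene names are placeholders. Deduplication matters because a scene may appear both in the first recommended scenes (from scene information) and in the second (from historical behavior).

```python
def merge_candidate_scenes(first_scenes, second_scenes):
    """Union of first and second recommended scenes, order-preserving,
    with duplicates removed."""
    seen, merged = set(), []
    for scene in first_scenes + second_scenes:
        if scene not in seen:
            seen.add(scene)
            merged.append(scene)
    return merged

# "F2" is found by both routes and should appear only once
candidates = merge_candidate_scenes(["F1", "F2"], ["F2", "F4"])
```

The association degrees would then be computed over `candidates` exactly as in the earlier ranking step.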
It should be noted that, in this embodiment, the calculation of the association degree between the scene to be recommended and the user may refer to the same or corresponding descriptions in the above embodiment, and this embodiment is not repeated here.
In conclusion, supplementing the recommended scenes in combination with the historical behavior information, and determining the commodity recommendation scene on that supplemented basis, ensures a richer set of candidates, so that a more accurate commodity recommendation scene can be screened out and the user's shopping needs can be met better.
And 4, reading scene data of the commodity recommendation scene, and sending the scene data to the user terminal, wherein the scene data is used for generating an interactive page and playing an interactive video in the user terminal.
Specifically, after the commodity recommendation scene is determined, and in order to effectively reach the user, recommend the related commodities from the scene dimension, realize interaction with the user on that basis, and raise the user's demand to purchase commodities, the scene data of the commodity recommendation scene can first be read and then sent to the user terminal, so that the client can display the interaction page and play the interaction video according to the user's operations.
Further, in order that the commodity recommendation scene can be applied in different contexts, the server side may configure multiple sets of scene data for each scene, in which case the scene data to be used must be determined in combination with information of other dimensions. In this embodiment, the specific implementation manner is as follows:
reading at least two initial scene data corresponding to a commodity recommendation scene; determining scene data in at least two initial scene data according to behavior reference data corresponding to a user; and sending the scene data to the user terminal.
Specifically, the initial scene data specifically refers to a plurality of sets of scene data configured by the server side for the commodity recommendation scene in advance. Accordingly, the behavior reference data specifically refers to data capable of screening scene data among a plurality of sets of initial scene data, including, but not limited to, time data, geographic data, etc. of the user.
Based on the above, when the commodity recommendation scene is associated with multiple sets of initial scene data, and in order to recommend scene data that meets the user's shopping requirement, the behavior reference data corresponding to the user can be determined first, the scene data associated with the behavior reference data is then determined among the at least two sets of initial scene data, and that scene data is sent to the user terminal.
Following the above example, the F2 holiday scene is associated with two sets of scene data, X1 and X2. In order to select the scene data with the higher association degree with the user for displaying the interaction page on the user terminal, parameters such as the user's geographic position and the time can first be read and then used to select between the two sets of scene data. According to the selection result, scene data X1 is determined; X1 comprises the data of a steak placed in a pan, a video of the steak swaying in the pan, and a video of cooking steam drifting, to be subsequently sent to the user terminal for generating the interaction page and the interaction video.
In practical application, when scene data is selected in combination with the behavior reference data, the selection is actually performed along the geographic dimension, the time dimension, the behavior dimension, and/or the like. For example, users in area A customarily eat commodity Y1 during festival F2, and scene data X1 is configured for that habit; users in area B customarily eat commodity Y2 during festival F2, and scene data X2 is configured for that habit. When the user's geographic position is associated with area A, the corresponding scene data X1 is selected and sent to the user terminal. In practical application, the selection of scene data may be completed according to actual requirements, and this embodiment is not limited in any way.
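A minimal sketch of the geographic-dimension selection described above. The mapping, region labels, and fallback choice are assumptions made for illustration.

```python
# Hypothetical server-side configuration: which set of scene data each
# region receives for festival F2 (area A -> X1, area B -> X2).
SCENE_DATA_FOR_F2 = {"A": "X1", "B": "X2"}

def select_scene_data(region, fallback="X1"):
    """Pick the scene data set matching the user's region.
    The fallback for unknown regions is an assumption of this sketch."""
    return SCENE_DATA_FOR_F2.get(region, fallback)

chosen = select_scene_data("A")  # user located in area A
```

A real selection would combine several dimensions (geography, time, behavior), for instance by scoring each candidate set rather than a single table lookup.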
In summary, when the user purchases commodities, the page to which the scene has been added enhances the user's awareness of the commodity recommendation scene, achieving the purpose of assisting the user in convenient purchasing so that the user can buy the commodities they need more easily. On this basis, when the user interacts with the commodity purchasing system, the user's enthusiasm for interaction can be further raised, thereby improving the user's commodity purchasing experience.
Before this, the scene data request is triggered by an operation instruction submitted through a specified function in the target application. In this embodiment, the specific implementation manner is as follows:
receiving an operation instruction submitted by the user through a target application; responding to the operation instruction and sending the loading request to the server; and receiving the scene data returned by the server side aiming at the loading request, wherein the scene data comprises the scene page data and at least one piece of initial video data.
Specifically, the target application is an application installed on the user terminal that provides shopping services to the user. Correspondingly, the scene page data refers to the data used to generate the interaction page, which includes the data corresponding to the content to be displayed in the interaction page, such as commodity image data, video data and audio data, together with related commodity information. Correspondingly, the initial video data refers to videos that can be displayed in the interaction page based on the target signal; different types of signals correspond to different videos, and the video content is related to the content initially displayed for interaction. For example, if the initially displayed content is a steak placed in a pan, the video content may be cooking steam drifting or the steak turning, as determined by the target signal.
In addition, when the server feeds back scene data for the loading request, it may feed back only the scene page data at first, feed back the initial video data after the target signal is acquired, and select, from that video data, the video data corresponding to the video to be displayed according to the signal type of the target signal.
Based on this, after an operation instruction submitted by the user through the target application is received, the user needs to enter the interaction page, so a loading request for the corresponding function can be sent to the server in response to the operation instruction. The scene data returned by the server for the loading request is then received, comprising the scene page data and at least one piece of initial video data, to be used for subsequently generating the interaction page displayed to the user while also supporting interaction according to the user's interactive operations.
For example, when the user enters the interaction page in the shopping APP, a loading request is sent to the server according to the operation instruction submitted by the user. The interaction page is to display information related to steak in the current display cycle, so the received scene page data contains the data of a steak placed in a pan, a video of the steak swaying in the pan, and a video of cooking steam drifting; this is sent to the mobile phone held by the user so that the steak can be displayed on the phone.
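The request/response exchange above can be sketched server-side as follows. The store layout, field names, and file names are assumptions for illustration, not the embodiment's actual data format.

```python
# Hypothetical server-side scene store, keyed by commodity recommendation
# scene; field names are assumptions of this sketch.
SCENE_STORE = {
    "F2": {
        "page": {"static_scene_graph": "steak_in_pan.png",
                 "scene_elements": ["brand A steak", "brand B pan"]},
        "videos": ["steak_swaying.mp4", "steam_drifting.mp4"],
    }
}

def handle_load_request(request):
    """Return scene page data plus the initial video data for the scene."""
    scene = SCENE_STORE[request["scene_id"]]
    return {"scene_page_data": scene["page"],
            "initial_video_data": scene["videos"]}

response = handle_load_request({"scene_id": "F2"})
```

As noted above, a variant could return only `scene_page_data` first and defer the video data until a target signal arrives, to reduce the initial payload.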
In summary, by requesting from the server the scene data comprising the scene page data and the initial video data, the subsequent generation of the interaction page is facilitated, a video can be selected for interaction in response to the user's target signal, and the user's experience of the target application is improved.
In addition, in order to meet the requirement for user-defined interaction, a loading request can be generated according to the target objects selected by the user and sent to the server, and the data fed back by the server for this user-defined request is received.
Responding to an operation instruction submitted by the user through an object selection page, and determining a first target object and a second target object; generating a loading request based on the first target object and the second target object, and sending the loading request to the server; receiving the scene data returned by the server side aiming at the loading request; wherein the scene data includes a static object graph for presentation in the interaction page and including the first target object and the second target object.
Specifically, the object selection page refers to a page for selecting the objects to be displayed in the interaction page; when the interaction page is updated, the selected objects can interact according to the target signal corresponding to the user. The first target object and the second target object are the selected objects, and the two are associated with each other. For example, in a fresh-food shopping scene, the first target object may be a cooking tool, such as a pan or a red wine glass, and the second target object may be a cooking object, such as steak or red wine; in a clothing shopping scene, the first target object may be a virtual model and the second target object may be clothing. Correspondingly, the static object graph refers to a picture, containing the first target object and the second target object, that is displayed in the interaction page and used for interaction according to the target signal during the interaction stage, the interaction content being determined by the video data in the scene data. For example, in the fresh-food scene the static object graph may be an image of a steak placed in a pan, while in the clothing shopping scene it may be an image of the model wearing the clothing. The static object graphs of different scenes may be configured according to actual requirements, which this embodiment does not limit, and they are subsequently used for interaction with the user in combination with the interaction page.
Based on this, when an operation instruction submitted by the user through the object selection page is received, the first target object and the second target object selected by the user may be determined in response to that instruction. A loading request is then generated from the first target object and the second target object and sent to the server. After receiving the loading request, the server determines the first target object and the second target object the user selected before the jump to the interaction page, and then reads the scene data corresponding to those objects for feedback. The fed-back scene data includes the static object graph, which contains the first target object and the second target object and is displayed in the interaction page, so that an interaction page matching the user-defined requirement can subsequently be displayed.
For example, when the user selects the pan and the steak in the object selection page of the shopping APP, a loading request can be sent to the server according to the pan and the steak, and the scene data fed back by the server for that request is received. The scene data includes a static object graph recording the steak placed in the pan, together with a video of the steak swaying in the pan and a video of cooking steam drifting, so that the interaction page can subsequently be generated and displayed.
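A sketch of the round trip for the user-defined case: the client packages the two selected objects into a loading request, and the server resolves the matching static object graph. The request shape and the keyed store are assumptions of this sketch.

```python
def build_load_request(first_object, second_object):
    """Client side: package the user's two selected objects into a
    loading request (field names are assumptions)."""
    return {"objects": [first_object, second_object]}

def lookup_static_object_graph(request, graph_store):
    """Server side: look up the static object graph configured for the
    selected object pair."""
    return graph_store[tuple(request["objects"])]

# Hypothetical store mapping object pairs to pre-rendered static graphs
store = {("pan", "steak"): "steak_in_pan.png"}

req = build_load_request("pan", "steak")
graph = lookup_static_object_graph(req, store)
```

A production server would likely fall back to composing a graph dynamically when no pre-rendered image exists for a pair.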
In summary, by providing the user with a user-defined object selection page, the user can select the objects to interact with before jumping to the interaction page, so that user needs in different scenes are met and the user experience is further improved.
And step S204, generating an interaction page according to scene page data in the scene data and displaying the interaction page to the user.
Specifically, after the scene data fed back by the server is received, the scene page data can be extracted from it, so that an interaction page capable of reacting to the user's interactions is created from the scene page data and displayed to the user; the interaction page displayed at this point is the page before any interaction has occurred.
The scene page data specifically refers to data for generating an interaction page, which includes data corresponding to related content to be displayed in the interaction page, such as commodity image data, video data, audio data, and the like, and related commodity information.
Further, in the process of generating the interaction page, and in order to enable subsequent interaction, the generation may be realized by combining a static scene graph with scene elements. In this embodiment, the specific implementation manner is as follows:
Extracting the scene page data from the scene data; analyzing the scene page data to obtain static scene graph data and scene element data; and generating the interaction page containing a static scene graph and an object selection area based on the static scene graph data and the scene element data, wherein the object selection area is used for displaying an object to be selected.
Specifically, the static scene graph data specifically refers to image data for generating a static scene graph, wherein the static scene graph specifically refers to a picture for displaying in an interactive page, and the picture contains an object to be displayed and is a static graph. Correspondingly, the scene element data specifically refers to data for generating an object selection area and objects to be selected in the object selection area, wherein the object selection area is used for displaying the objects to be selected, and the objects to be selected specifically refer to commodities to be purchased by a user and the like.
Based on the above, after the user terminal receives the scene data, the scene page data may first be extracted from it for generating the interaction page used to interact with the user. In order to improve the interaction experience, the scene page data is first parsed to obtain the static scene graph data and the scene element data: the static scene graph data is used to generate the static scene graph displaying the objects that interact with the user in the interaction page, and the scene element data is used to generate the object selection area containing the objects to be selected, making it convenient for the user to select among them. An interaction page containing the static scene graph and the object selection area can then be generated from the static scene graph data and the scene element data and displayed to the user.
Following the above example, after the scene data is obtained, it is parsed to obtain the static scene graph data and the scene element data. A static scene graph of a steak in a pan can then be generated from the static scene graph data, and at the same time an object selection area containing the commodities to be selected, such as a brand-A steak and a brand-B pan, is generated from the scene element data. An interaction page is then generated from the static scene graph and the object selection area containing the commodities to be selected, as shown in (a) of fig. 3.
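The parse-then-assemble flow above can be sketched as two small steps; all key names are assumptions made for illustration.

```python
def parse_scene_page_data(scene_data):
    """Split the scene page data into static scene graph data and
    scene element data (field names are assumptions)."""
    page = scene_data["scene_page_data"]
    return page["static_scene_graph"], page["scene_elements"]

def build_interaction_page(static_graph, scene_elements):
    """Assemble the interaction page from the two parsed parts."""
    return {"static_scene_graph": static_graph,
            "object_selection_area": scene_elements}

scene_data = {"scene_page_data": {
    "static_scene_graph": "steak_in_pan.png",
    "scene_elements": ["brand A steak", "brand B pan"]}}

graph, elements = parse_scene_page_data(scene_data)
page = build_interaction_page(graph, elements)
```

Keeping the parse and assembly separate mirrors the two roles described above: the static graph drives later signal-based interaction, while the element data drives object selection.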
In summary, generating an interaction page containing the static scene graph and the object selection area from the static scene graph data and the scene element data makes it convenient for the user to interact on this basis and to select the objects corresponding to the current scene, which further improves the convenience of purchasing and strengthens the user's sense of realism while shopping.
Step S206, determining video data corresponding to the target signal in the scene data when the target signal associated with the interactive page is acquired.
Specifically, once the interaction page has been displayed, the user can interact with the content shown in it. When the target signal associated with the interaction page is acquired through the user terminal, this indicates that the user is interacting with the content of the displayed interaction page. In order to ensure that the interaction result matches the user's interaction requirement, the video data corresponding to the target signal can be determined from the scene data after the target signal is acquired, and used to play the corresponding interaction video in the interaction page.
The target signal specifically refers to a signal acquired by a user at a signal acquisition device configured by a user terminal, and the signal includes, but is not limited to, a motion signal, a pressure signal, a sound signal, and the like. Correspondingly, the video data specifically refers to the encoded data before video decoding corresponding to the target signal, and the video content generated by the video data is related to the target signal and the object content displayed in the interactive page.
For example, an image of a steak placed in a pan is displayed in the interaction page. When the user shakes the mobile phone, the acquired signal is a motion signal, so the video data of the steak swaying in the pan is selected from the scene data, to play that video when the interaction page is subsequently updated.
For example, when the user blows into the receiver, the acquired signal is a sound signal, so the video data of cooking steam drifting from the steak in the pan is selected from the scene data, to play that video when the interaction page is subsequently updated.
For example, when the user blows to the receiver, the collected signal is a sound signal, and the video data of the dandelion seeds blown off are selected from the scene data for playing the video of the dandelion seeds blown off when the interactive page is updated later.
For example, when the user presses the screen, the acquired signal is a pressure signal, and the pressed video data of the mattress is selected in the scene data for playing the pressed video of the mattress when the interactive page is updated.
In practical applications, the video content may be set by the server according to the requirements, which is not limited in this embodiment.
Further, when the signal is collected, the signal collection is actually completed through a signal collection device configured in the user terminal, and in this embodiment, the specific implementation manner is as follows:
determining the signal type of a target signal under the condition that the target signal related to the interactive page is acquired through signal acquisition equipment configured by the user terminal; the video data is determined in the scene data according to the signal type.
Specifically, the signal acquisition device specifically refers to a sensor configured by the user terminal, and can be used for acquiring a motion signal, a sound signal, a pressure signal, or the like. Accordingly, the signal type specifically refers to a type corresponding to the target signal, including but not limited to a sound type, a motion type, or a pressure type.
Based on the above, when the target signal associated with the interaction page is acquired through the signal acquisition device configured on the user terminal, it means that after the interaction page was displayed, the user interacted with an object shown in it; in order to respond to this interaction requirement, the target signal is collected by the signal acquisition device. Considering that the objects displayed in the page correspond to multiple kinds of interactive content (for example, the steak corresponds to both swaying and steam drifting), the signal type of the target signal must be determined, and the video data is determined in the scene data according to that signal type, so that the interaction video corresponding to the signal type can subsequently be played.
Following the above example, after the interaction page shown in (a) of fig. 3 is displayed, when the mobile phone detects a motion signal from the user, it indicates that the user is shaking the phone and the steak should correspondingly sway in the pan, so the video data corresponding to the steak swaying can be selected from the scene data according to the motion signal type. When a sound signal from the user is detected through the mobile phone, it indicates that the user is blowing into the receiver and the cooking steam should correspondingly drift in the air, so the video data corresponding to steam drifting can be selected from the scene data according to the sound signal type.
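The signal-type dispatch described above reduces to a lookup from signal type to video data. The mapping and file names below are placeholders taken from the running example, not the embodiment's actual configuration.

```python
# Hypothetical mapping from acquired signal type to the video data to play;
# entries follow the examples above (motion, sound, pressure).
VIDEO_BY_SIGNAL_TYPE = {
    "motion":   "steak_swaying.mp4",     # user shakes the phone
    "sound":    "steam_drifting.mp4",    # user blows into the receiver
    "pressure": "mattress_pressed.mp4",  # user presses the screen
}

def select_video_data(signal_type):
    """Return the video data for the signal type, or None if unsupported."""
    return VIDEO_BY_SIGNAL_TYPE.get(signal_type)

video = select_video_data("motion")
```

Returning `None` for an unrecognized type lets the caller simply leave the static scene graph in place instead of playing a wrong video.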
In summary, by selecting the video data in combination with the signal type, it can be ensured that the selected video data corresponds to the target signal, that is, to the user's interaction action, so that the played interactive video meets the user's interaction requirement and the user experience is improved.
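As a minimal sketch of the selection logic described above, the following TypeScript models how the type of a collected target signal might map to a piece of video data inside the loaded scene data. All type and function names (`SignalType`, `selectVideoData`, the sample URIs) are illustrative assumptions, not part of this disclosure.

```typescript
// Illustrative shape of the scene data delivered by the server (names assumed).
type SignalType = "motion" | "sound" | "pressure";

interface VideoData {
  signalType: SignalType; // the interaction this clip responds to
  uri: string;            // where the clip is stored
}

interface SceneData {
  videos: VideoData[];
}

// Pick the video data whose declared signal type matches the type of the
// collected target signal; undefined means no clip responds to this signal.
function selectVideoData(scene: SceneData, signalType: SignalType): VideoData | undefined {
  return scene.videos.find(v => v.signalType === signalType);
}

// Example scene: shaking the phone (motion) maps to the steak-shaking clip,
// blowing at the microphone (sound) maps to the steam-drifting clip.
const scene: SceneData = {
  videos: [
    { signalType: "motion", uri: "steak_shake.mp4" },
    { signalType: "sound", uri: "steam_drift.mp4" },
  ],
};
```

Keeping the mapping in the already-loaded scene data is what lets the terminal react to a signal without a further server round trip.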
In addition, considering that the user may change the interactive object in the interactive page before signal acquisition is performed, the interaction needs to be carried out in combination with the changed result. In this embodiment, the specific implementation is as follows:
receiving a click instruction submitted by the user for an interaction control in the interaction page; determining object data corresponding to the interaction control in response to the click instruction, updating the interaction page based on the object data, and obtaining and displaying a target interaction page containing a target object; when a target signal associated with the target interaction page is acquired, determining at least one piece of object video data in the scene data according to the target signal; and determining, among the at least one piece of object video data, the object video data associated with the target object as the video data.
Specifically, the interaction control is a control that provides object selection for the user, and different controls correspond to different objects. Correspondingly, the object data refers to the data corresponding to the selected object and is used to update the content of the interaction page. The object video data refers to the video data corresponding to the target object; there may be several pieces of such video data, from which the video data to be used is selected according to the target signal and the target object.
Based on the above, in order to enable the user to interact with different objects, interaction controls corresponding to various objects can be provided in the interactive page. When a click instruction submitted by the user for an interaction control in the interactive page is received, it indicates that the user needs to change the interactive object; the object data corresponding to the interaction control can then be determined in response to the click instruction, the interactive page is updated based on the object data, and a target interaction page containing the target object is obtained and displayed. Afterwards, when the target signal associated with the target interaction page is acquired, at least one piece of object video data can be determined in the scene data according to the target signal, and the object video data associated with the target object is determined among them as the video data.
The process of determining the scene data may refer to the same or corresponding descriptions in the above embodiments, which are not repeated here.
Following the above example, after the interactive page shown in fig. 3 (a) is displayed, when the user selects the control corresponding to the egg through the interactive page, it indicates that the user needs to change the steak into an egg; the interactive page can then be updated according to the selected egg data, and the interactive page in which the egg is placed in the pan is displayed according to the update result, as shown in fig. 3 (b). After that, when a motion signal from the user is detected through the mobile phone, it indicates that the user is shaking the phone, so the egg should shake in the pan; several pieces of initial video data corresponding to the egg can therefore be selected from the scene data, and the video data of the egg shaking is selected from them according to the signal type of the motion signal.
In summary, by providing the user with a function for selecting objects on demand, the user can conveniently select a target object to interact with as required, so that the method can adapt to more interaction scenes and the user experience is improved.
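The two-stage selection just described (first by signal type, then by the currently displayed target object) can be sketched as follows; every name here is an illustrative assumption rather than the disclosed implementation.

```typescript
// Illustrative shape of one piece of object video data (names assumed).
interface ObjectVideoData {
  objectId: string;   // which object the clip belongs to (e.g. steak, egg)
  signalType: string; // which interaction it responds to
  uri: string;
}

// Narrow the scene's clips first by the target signal's type, then by the
// target object shown after the user clicked an interaction control.
function selectObjectVideo(
  videos: ObjectVideoData[],
  signalType: string,
  objectId: string,
): ObjectVideoData | undefined {
  return videos
    .filter(v => v.signalType === signalType)
    .find(v => v.objectId === objectId);
}

// Sample data for the steak-to-egg example above.
const objectVideos: ObjectVideoData[] = [
  { objectId: "steak", signalType: "motion", uri: "steak_shake.mp4" },
  { objectId: "egg",   signalType: "motion", uri: "egg_shake.mp4" },
];
```

After the user swaps the steak for the egg, the same motion signal now resolves to the egg clip because the second stage filters on the current target object.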
Step S208, updating the interactive page based on the video data, and playing the interactive video associated with the target signal through the updated interactive page.
Specifically, after the video data corresponding to the target signal is determined, the interactive page can be updated based on the video data, so that the interactive video associated with the target signal is played through the updated interactive page, that is, the interactive video corresponding to the interaction action is played to the user. For example, shaking the mobile phone displays a video of the steak shaking in the pan; shaking a wine glass displays a video of the red wine swirling in the glass; pressing the screen displays a video of a mattress being pressed.
Further, when the interactive video is played, the interactive video corresponding to the signal is in fact inserted into a designated area of the interactive page. In this embodiment, the specific implementation is as follows:
rendering the video data to obtain the interactive video associated with the target signal; determining a video playing area in the interactive page, and inserting the interactive video into the video playing area; and playing the interactive video through the interactive page according to the video insertion result.
Specifically, the video playing area refers to the area used for playing the interactive video. This area overlaps the position where the static scene graph is displayed in the interactive page, which ensures that no picture jump occurs when the interactive video is played.
Based on the above, after the video data is obtained, it can be rendered by the user terminal to obtain the interactive video associated with the target signal according to the rendering result; a video playing area is then determined in the interactive page, the interactive video is inserted into the video playing area, and the interactive video is played through the interactive page according to the video insertion result.
Following the above example, the interactive video corresponding to the sound signal is obtained by rendering the video data, and the interactive video is then inserted into the picture area of the interactive page that shows the steak placed in the pan, so that the interactive video can be played through the interactive page according to the insertion result; the effect presented during playback is the steam drifting away, one frame of which is shown in fig. 4 (a).
In summary, by inserting the interactive video into the video playing area, it can be ensured that there is no visible switching between the content displayed in the interactive page and the interactive video, providing the user with a more realistic interactive experience.
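The insertion step can be modelled as a pure page transformation: only the designated playing region changes content, which is what guarantees the rest of the page shows no visible switch. A sketch under assumed names:

```typescript
// Minimal model of an interactive page as a set of named regions (assumed).
interface PageRegion {
  id: string;
  content: string; // e.g. a static scene graph or a video URI
}

interface Page {
  regions: PageRegion[];
}

// Replace only the content of the video playing region; all other regions
// (and their positions) are untouched, so no picture jump can occur.
function insertVideo(page: Page, playingRegionId: string, videoUri: string): Page {
  return {
    regions: page.regions.map(r =>
      r.id === playingRegionId ? { ...r, content: videoUri } : r,
    ),
  };
}

const before: Page = {
  regions: [
    { id: "scene", content: "static_scene.png" },
    { id: "selector", content: "object_selection_area" },
  ],
};

const after = insertVideo(before, "scene", "steak_shake.mp4");
```

Returning a new page object rather than mutating in place also makes the update easy to discard if the clip fails to render.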
In addition, in order to increase the user's sense of participation in the interaction, a motion track can be created in combination with the signal, so that a virtual object moves along the motion track. In this embodiment, the specific implementation is as follows:
generating a virtual object according to the video data, and generating a motion track of the virtual object according to the target signal; inserting the virtual object into an interaction area of the interaction page as an update of the interaction page; and driving the virtual object to move in the interaction area along the motion track, and playing the interactive video through the updated interaction page.
Specifically, the virtual object refers to an object to be displayed in the interaction page, with which the user can interact. Correspondingly, the motion track refers to the movement path of the virtual object, and it is associated with the target signal.
Based on the above, a virtual object is first generated according to the video data, and the motion track of the virtual object is generated according to the target signal; the virtual object is then inserted into the interaction area of the interaction page as an update of the interaction page; the virtual object is then driven to move in the interaction area along the motion track, and the interactive video is played through the updated interaction page.
For example, the interactive page shows the steak placed in the pan. After the video data is determined, a steak model can be generated according to the video data, a motion track is determined according to the collected target signal, and the steak is then driven to move along the motion track, so that the video content of the steak model moving in the pan can be played through the interactive page, one frame of which is shown in fig. 4 (b).
In addition, to prevent the virtual object from moving into other areas and producing unreasonable display content, the movement range of the virtual object can be set as required when the virtual object is driven along the motion track.
In conclusion, driving the virtual object in combination with the motion track ensures that the movement of the virtual object closely follows the target signal, further improving the sense of realism.
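Driving the virtual object along a motion track while clamping it to the interaction area, as the movement-range note above suggests, might look like the sketch below; the names and the per-step displacement model are assumptions.

```typescript
interface Point { x: number; y: number; }

// Movement range of the virtual object inside the interaction area (assumed).
interface Bounds { minX: number; maxX: number; minY: number; maxY: number; }

const clamp = (v: number, lo: number, hi: number): number =>
  Math.min(Math.max(v, lo), hi);

// Advance the virtual object by the displacement derived from the target
// signal, clamped so the object never leaves the interaction area.
function stepAlongTrack(pos: Point, delta: Point, bounds: Bounds): Point {
  return {
    x: clamp(pos.x + delta.x, bounds.minX, bounds.maxX),
    y: clamp(pos.y + delta.y, bounds.minY, bounds.maxY),
  };
}

// Follow a whole track: each signal sample contributes one displacement.
function followTrack(start: Point, track: Point[], bounds: Bounds): Point {
  return track.reduce((p, d) => stepAlongTrack(p, d, bounds), start);
}
```

The clamp is what keeps the steak model inside the pan no matter how vigorously the phone is shaken.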
In addition, in order to improve the user's sense of participation, audio can be played in cooperation with the video. In this embodiment, the specific implementation is as follows:
reading audio data associated with the interactive video from the scene data; and generating interactive audio according to the audio data, and playing the interactive audio after aligning the interactive audio with the interactive video.
Specifically, the audio data refers to audio configured for the interactive video, such as the sound effect of frying a steak, the sound effect of frying an egg, or the sound effect of blowing. Based on the above, in order to improve the sense of realism, the audio data associated with the interactive video can be read from the scene data when the interactive video is played; interactive audio is then generated according to the audio data and played after being aligned with the interactive video.
Following the above example, the frying sound effect can be played simultaneously with the interactive video of the steak shaking, thereby improving the realism of the scene.
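The alignment of the interactive audio with the interactive video can be sketched as computing the audio start offset from the video's current playback position; the function name and the looping behaviour for short sound effects are assumptions, not the disclosed implementation.

```typescript
// Where the interactive audio must start so that it lines up with the
// interactive video's current playback position. Short sound effects
// (e.g. a sizzle loop) are assumed to repeat for as long as the video runs.
function alignedAudioOffsetMs(videoPositionMs: number, audioDurationMs: number): number {
  if (audioDurationMs <= 0) return 0; // guard against empty audio data
  return videoPositionMs % audioDurationMs;
}
```

A real player would seek the audio element to this offset before starting it, so that video and audio share one timeline.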
To improve the user experience, the page updating method provided in this embodiment sends a loading request to the server after receiving an operation instruction submitted by the user, and receives the scene data fed back by the server for the loading request; the scene page data can then be extracted from the scene data and used to generate the interactive page, which is displayed to the user. Afterwards, when the target signal of the user for the interactive page is acquired, the video data corresponding to the target signal can be determined in the scene data, and the interactive page is updated with the video data, so that the interactive video corresponding to the target signal is played through the updated interactive page. By cooperating with this human-computer interaction mechanism while the user uses the target application, the user experience can be improved and the user's sense of participation enhanced.
The following describes, with reference to fig. 5, an application of the page updating method provided in this specification in a fresh-food shopping scenario. Fig. 5 is a flowchart of the processing procedure of a page updating method according to an embodiment of this specification, which specifically includes the following steps.
Step S502, receiving an operation instruction submitted by a user through a target application.
Step S504, sending a loading request to the server in response to the operation instruction.
Step S506, receiving the scene data returned by the server for the loading request, where the scene data includes scene page data and at least one piece of initial video data.
Step S508, extracting the scene page data from the scene data.
Step S510, analyzing the scene page data to obtain static scene graph data and scene element data.
Step S512, generating, based on the static scene graph data and the scene element data, an interactive page containing the static scene graph and an object selection area, where the object selection area is used to display objects to be selected.
Step S514, determining the signal type of a target signal when the target signal associated with the interactive page is acquired through the signal acquisition device configured on the user terminal.
Step S516, determining video data in the scene data according to the signal type.
Step S518, rendering the video data to obtain the interactive video associated with the target signal.
Step S520, determining a video playing area in the interactive page, and inserting the interactive video into the video playing area.
Step S522, playing the interactive video through the interactive page according to the video insertion result.
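Steps S502 through S522 can be strung together as a single terminal-side pipeline. The sketch below stubs the server and collapses rendering and insertion into string handling, so every name and return shape is an assumption made only for illustration.

```typescript
interface Clip { signalType: string; uri: string; }
interface LoadedScene { scenePageData: string; clips: Clip[]; }

// Stub standing in for the server side of steps S504-S506.
function loadSceneData(_loadingRequest: string): LoadedScene {
  return {
    scenePageData: "static-scene:steak-in-pan",
    clips: [{ signalType: "motion", uri: "steak_shake.mp4" }],
  };
}

// Terminal-side pipeline: request (S504), receive (S506), build the page
// (S508-S512), match the collected signal (S514-S516), and "insert" the
// rendered clip into the page (S518-S522).
function handleOperation(instruction: string, collectedSignalType: string): string {
  const sceneData = loadSceneData(instruction);
  const page = sceneData.scenePageData;
  const clip = sceneData.clips.find(c => c.signalType === collectedSignalType);
  return clip ? `${page}+${clip.uri}` : page;
}
```

An unmatched signal simply leaves the interactive page showing the static scene, which mirrors the flowchart's behaviour when no video data corresponds to the signal type.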
To improve the user experience, the page updating method provided in this embodiment sends a loading request to the server after receiving an operation instruction submitted by the user, and receives the scene data fed back by the server for the loading request; the scene page data can then be extracted from the scene data and used to generate the interactive page, which is displayed to the user. Afterwards, when the target signal of the user for the interactive page is acquired, the video data corresponding to the target signal can be determined in the scene data, and the interactive page is updated with the video data, so that the interactive video corresponding to the target signal is played through the updated interactive page. By cooperating with this human-computer interaction mechanism while the user uses the target application, the user experience can be improved and the user's sense of participation enhanced.
Corresponding to the method embodiment, the present disclosure further provides an embodiment of a page updating device, and fig. 6 shows a schematic structural diagram of the page updating device provided in one embodiment of the present disclosure. As shown in fig. 6, the apparatus is applied to a user terminal, and includes:
the receiving data module 602 is configured to send a loading request to a server in response to an operation instruction submitted by a user, and receive scene data returned by the server for the loading request;
a generating page module 604, configured to generate an interaction page according to scene page data in the scene data and display the interaction page to the user;
a determining data module 606 configured to determine video data corresponding to a target signal in the scene data in case that the target signal associated with the interactive page is acquired;
and a video playing module 608, configured to update the interactive page based on the video data, and play the interactive video associated with the target signal through the updated interactive page.
In an alternative embodiment, the receiving data module 602 is further configured to:
receiving an operation instruction submitted by the user through a target application; sending the loading request to the server in response to the operation instruction; and receiving the scene data returned by the server for the loading request, wherein the scene data includes the scene page data and at least one piece of initial video data.
In an alternative embodiment, the generating page module 604 is further configured to:
extracting the scene page data from the scene data; analyzing the scene page data to obtain static scene graph data and scene element data; and generating the interaction page containing a static scene graph and an object selection area based on the static scene graph data and the scene element data, wherein the object selection area is used for displaying an object to be selected.
In an alternative embodiment, the apparatus further comprises:
a display module configured to receive a click instruction submitted by the user for an interaction control in the interaction page; determine object data corresponding to the interaction control in response to the click instruction, update the interaction page based on the object data, and obtain and display a target interaction page containing a target object;
accordingly, the determining data module 606 is further configured to:
when a target signal associated with the target interaction page is acquired, determine at least one piece of object video data in the scene data according to the target signal; and determine, among the at least one piece of object video data, the object video data associated with the target object as the video data.
In an alternative embodiment, the determining data module 606 is further configured to:
determining the signal type of a target signal when the target signal associated with the interactive page is acquired through a signal acquisition device configured on the user terminal; and determining the video data in the scene data according to the signal type.
In an alternative embodiment, the video playing module 608 is further configured to:
rendering the video data to obtain the interactive video associated with the target signal; determining a video playing area in the interactive page, and inserting the interactive video into the video playing area; and playing the interactive video through the interactive page according to the video insertion result.
In an alternative embodiment, the receiving data module 602 is further configured to:
determining a first target object and a second target object in response to an operation instruction submitted by the user through an object selection page; generating a loading request based on the first target object and the second target object, and sending the loading request to the server; and receiving the scene data returned by the server for the loading request, wherein the scene data includes a static object graph that is presented in the interaction page and contains the first target object and the second target object.
In an alternative embodiment, the video playing module 608 is further configured to:
generating a virtual object according to the video data, and generating a motion track of the virtual object according to the target signal; inserting the virtual object into an interaction area of the interaction page as an update of the interaction page; and driving the virtual object to move in the interaction area along the motion track, and playing the interactive video through the updated interaction page.
In an alternative embodiment, the apparatus further comprises:
an audio playing module configured to read the audio data associated with the interactive video from the scene data; and generate interactive audio according to the audio data, and play the interactive audio after aligning it with the interactive video.
In summary, to improve the user experience, after an operation instruction submitted by the user is received, a loading request can be sent to the server and the scene data fed back by the server for the loading request can be received; the scene page data can then be extracted from the scene data and used to generate the interactive page, which is displayed to the user. Afterwards, when the target signal of the user for the interactive page is acquired, the video data corresponding to the target signal can be determined in the scene data, and the interactive page is updated with the video data, so that the interactive video corresponding to the target signal is played through the updated interactive page. By cooperating with this human-computer interaction mechanism while the user uses the target application, the user experience can be improved and the user's sense of participation enhanced.
The above is an exemplary scheme of a page updating apparatus of the present embodiment. It should be noted that, the technical solution of the page updating apparatus and the technical solution of the page updating method belong to the same concept, and details of the technical solution of the page updating apparatus, which are not described in detail, can be referred to the description of the technical solution of the page updating method.
Corresponding to the method embodiment, the present disclosure further provides an embodiment of a page update system, and fig. 7 shows a schematic structural diagram of a page update system provided in one embodiment of the present disclosure. As shown in fig. 7, the page update system 700 includes a user terminal 710 and a server 720, and specifically includes:
the user terminal 710 is configured to send a loading request to a server in response to an operation instruction submitted by a user;
the server 720 is configured to read the scene data according to the loading request, and send the scene data to the user terminal;
the user terminal 710 is further configured to generate an interaction page according to scene page data in the scene data and display the interaction page to the user; under the condition that a target signal associated with the interactive page is acquired, determining video data corresponding to the target signal in the scene data; and updating the interactive page based on the video data, and playing the interactive video associated with the target signal through the updated interactive page.
In an alternative embodiment, the user terminal 710 is further configured to:
receiving an operation instruction submitted by the user through a target application; sending the loading request to the server in response to the operation instruction; and receiving the scene data returned by the server for the loading request, wherein the scene data includes the scene page data and at least one piece of initial video data.
In an alternative embodiment, the user terminal 710 is further configured to:
extracting the scene page data from the scene data; analyzing the scene page data to obtain static scene graph data and scene element data; and generating the interaction page containing a static scene graph and an object selection area based on the static scene graph data and the scene element data, wherein the object selection area is used for displaying an object to be selected.
In an alternative embodiment, the user terminal 710 is further configured to:
receiving a click instruction submitted by the user for an interaction control in the interaction page; determining object data corresponding to the interaction control in response to the click instruction, updating the interaction page based on the object data, and obtaining and displaying a target interaction page containing a target object; correspondingly, when the target signal associated with the interaction page is acquired, determining the video data corresponding to the target signal in the scene data includes: when a target signal associated with the target interaction page is acquired, determining at least one piece of object video data in the scene data according to the target signal; and determining, among the at least one piece of object video data, the object video data associated with the target object as the video data.
In an alternative embodiment, the user terminal 710 is further configured to:
determining the signal type of a target signal when the target signal associated with the interactive page is acquired through a signal acquisition device configured on the user terminal; and determining the video data in the scene data according to the signal type.
In an alternative embodiment, the user terminal 710 is further configured to:
rendering the video data to obtain the interactive video associated with the target signal; determining a video playing area in the interactive page, and inserting the interactive video into the video playing area; and playing the interactive video through the interactive page according to the video insertion result.
In an alternative embodiment, the user terminal 710 is further configured to:
determining a first target object and a second target object in response to an operation instruction submitted by the user through an object selection page; generating a loading request based on the first target object and the second target object, and sending the loading request to the server; and receiving the scene data returned by the server for the loading request, wherein the scene data includes a static object graph that is presented in the interaction page and contains the first target object and the second target object.
In an alternative embodiment, the user terminal 710 is further configured to:
generating a virtual object according to the video data, and generating a motion track of the virtual object according to the target signal; inserting the virtual object into an interaction area of the interaction page as an update of the interaction page; and driving the virtual object to move in the interaction area along the motion track, and playing the interactive video through the updated interaction page.
In an alternative embodiment, the user terminal 710 is further configured to:
reading audio data associated with the interactive video from the scene data; and generating interactive audio according to the audio data, and playing the interactive audio after aligning the interactive audio with the interactive video.
In an optional embodiment, the user terminal 710 is further configured to send a loading request to the server in response to an operation instruction submitted by a user through a commodity page;
the server 720 is further configured to receive the loading request, determine scene information associated with the loading request, and read historical behavior information corresponding to the user; determine a commodity recommendation scene associated with the user according to the scene information and the historical behavior information; and read the scene data of the commodity recommendation scene and send it to the user terminal, where the scene data is used to generate the interactive page and play the interactive video on the user terminal.
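The server-side choice of a commodity recommendation scene from the scene information and the user's historical behavior can be sketched as follows; the preference for the most recent behavior is purely an illustrative assumption, and all names are invented for the example.

```typescript
// Pick a commodity recommendation scene for the user. The scene information
// comes with the loading request; the history is the user's past behavior.
function recommendScene(sceneInfo: string, historicalBehavior: string[]): string {
  const mostRecent = historicalBehavior[historicalBehavior.length - 1];
  // With no history, fall back to the request's scene information alone.
  return mostRecent ? `${sceneInfo}:${mostRecent}` : sceneInfo;
}
```

The returned scene identifier would then be used to look up the scene data (static scene graph, clips, audio) sent back to the terminal.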
In summary, to improve the user experience, after an operation instruction submitted by the user is received, a loading request can be sent to the server and the scene data fed back by the server for the loading request can be received; the scene page data can then be extracted from the scene data and used to generate the interactive page, which is displayed to the user. Afterwards, when the target signal of the user for the interactive page is acquired, the video data corresponding to the target signal can be determined in the scene data, and the interactive page is updated with the video data, so that the interactive video corresponding to the target signal is played through the updated interactive page. By cooperating with this human-computer interaction mechanism while the user uses the target application, the user experience can be improved and the user's sense of participation enhanced.
The above is an exemplary scheme of a page update system of the present embodiment. It should be noted that, the technical solution of the page updating system and the technical solution of the page updating method belong to the same conception, and details of the technical solution of the page updating system, which are not described in detail, can be referred to the description of the technical solution of the page updating method.
The present embodiment also provides a page updating method, fig. 8 shows a flowchart of another page updating method provided in an embodiment of the present disclosure, and as shown in fig. 8, the method is applied to a user terminal, and includes:
Step S802, a loading request is sent to a server in response to an operation instruction submitted by a user, and commodity display data returned by the server for the loading request is received;
step S804, generating a commodity interaction page according to the commodity page data in the commodity display data and displaying the commodity interaction page to the user;
step S806, under the condition that the interactive signals related to the commodity interactive page are collected, determining commodity video data corresponding to the interactive signals in the commodity display data;
step S808, updating the commodity interaction page based on the commodity video data, and playing the commodity interaction video associated with the interaction signal through the updated commodity interaction page.
The commodities involved in this embodiment include, but are not limited to, food, foodstuffs, clothing, vehicles, ornaments, and the like. It should be noted that details not described in this embodiment are the same as or correspond to the descriptions in the above embodiments and are not repeated here; this embodiment is not limited thereto.
Corresponding to the above method embodiment, the present disclosure further provides another embodiment of the page updating apparatus, and fig. 9 shows a schematic structural diagram of another page updating apparatus provided in one embodiment of the present disclosure. As shown in fig. 9, the apparatus is applied to a user terminal, and includes:
The receiving data module 902 is configured to respond to an operation instruction submitted by a user, send a loading request to a server, and receive commodity display data returned by the server for the loading request;
the generation page module 904 is configured to generate a commodity interaction page according to commodity page data in the commodity display data and display the commodity interaction page to the user;
the acquisition signal module 906 is configured to determine commodity video data corresponding to the interaction signal in the commodity display data under the condition that the interaction signal associated with the commodity interaction page is acquired;
and an update page module 908 configured to update the commodity interaction page based on the commodity video data, and play the commodity interaction video associated with the interaction signal through the updated commodity interaction page.
The above is an exemplary scheme of a page updating apparatus of the present embodiment. It should be noted that, the technical solution of the page updating apparatus and the technical solution of the page updating method belong to the same concept, and details of the technical solution of the page updating apparatus, which are not described in detail, can be referred to the description of the technical solution of the page updating method.
Fig. 10 illustrates a block diagram of a computing device 1000 provided according to an embodiment of this specification. The components of the computing device 1000 include, but are not limited to, a memory 1010 and a processor 1020. The processor 1020 is coupled to the memory 1010 via a bus 1030, and a database 1050 is used to store data.
Computing device 1000 also includes an access device 1040 that enables computing device 1000 to communicate via one or more networks 1060. Examples of such networks include the public switched telephone network (PSTN), a local area network (LAN), a wide area network (WAN), a personal area network (PAN), or a combination of communication networks such as the Internet. The access device 1040 may include one or more of any type of network interface, wired or wireless, such as a network interface card (NIC), an IEEE 802.11 wireless local area network (WLAN) interface, a worldwide interoperability for microwave access (WiMAX) interface, an Ethernet interface, a universal serial bus (USB) interface, a cellular network interface, a Bluetooth interface, a near-field communication (NFC) interface, and so on.
In one embodiment of the present application, the above-described components of the computing device 1000, as well as other components not shown in Fig. 10, may also be connected to each other, for example by a bus. It should be understood that the block diagram of the computing device illustrated in Fig. 10 is for exemplary purposes only and is not intended to limit the scope of the present application; those skilled in the art may add or replace other components as desired.
The computing device 1000 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), a mobile phone (e.g., smart phone), a wearable computing device (e.g., smart watch, smart glasses, etc.), or another type of mobile device, or a stationary computing device such as a desktop computer or personal computer (PC). The computing device 1000 may also be a mobile or stationary server.
The processor 1020 is configured to execute computer-executable instructions that, when executed by the processor, perform the steps of the page updating method described above.
The foregoing is a schematic illustration of a computing device of this embodiment. It should be noted that the technical solution of the computing device belongs to the same concept as that of the page updating method; for details of the computing device not described here, refer to the description of the page updating method.
An embodiment of the present disclosure also provides a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of the page update method described above.
The above is an exemplary scheme of a computer-readable storage medium of this embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as that of the page updating method; for details of the storage medium not described here, refer to the description of the page updating method.
An embodiment of the present disclosure further provides a computer program, where the computer program, when executed in a computer, causes the computer to perform the steps of the above-described page update method.
The above is an exemplary scheme of a computer program of this embodiment. It should be noted that the technical solution of the computer program belongs to the same concept as that of the page updating method; for details of the computer program not described here, refer to the description of the page updating method.
An embodiment of the present disclosure further provides another page update system, including a user terminal and a server, wherein the server is configured to store scene data, and the user terminal is configured to execute page update executable instructions which, when executed by the user terminal, implement the steps of the page updating method described above.
The above is another exemplary scheme of the page update system of this embodiment. It should be noted that the technical solution of the page update system belongs to the same concept as that of the page updating method; for details of the page update system not described here, refer to the description of the page updating method.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be appropriately adjusted according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of action combinations; however, those skilled in the art should understand that the embodiments are not limited by the order of the actions described, since some steps may be performed in another order or simultaneously according to the embodiments of the present disclosure. Furthermore, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily required by every embodiment.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The preferred embodiments of the present specification disclosed above are merely intended to help clarify the present specification. Alternative embodiments are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in light of the teaching of the embodiments. The embodiments were chosen and described in order to best explain the principles of the embodiments and their practical application, thereby enabling others skilled in the art to understand and use the invention. This specification is to be limited only by the claims and their full scope and equivalents.

Claims (14)

1. A page updating method, applied to a user terminal, comprising:
responding to an operation instruction submitted by a user, sending a loading request to a server, and receiving scene data returned by the server for the loading request;
generating an interaction page according to scene page data in the scene data and displaying the interaction page to the user;
under the condition that a target signal associated with the interactive page is acquired, determining video data corresponding to the target signal in the scene data;
and updating the interactive page based on the video data, and playing the interactive video associated with the target signal through the updated interactive page.
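Independently of the claim language, the four steps of claim 1 can be sketched as a small data flow in TypeScript. All type and field names here (`SceneData`, `scenePage`, `videos`, and so on) are illustrative assumptions, not part of the claimed method.

```typescript
// Hypothetical shape of the scene data returned by the server for a load
// request: scene page data plus candidate video clips keyed by signal type.
interface SceneData {
  scenePage: { sceneGraph: string; elements: string[] };
  videos: Record<string, string>; // signal type -> video resource id (assumed layout)
}

interface InteractionPage {
  sceneGraph: string;
  elements: string[];
  playingVideo: string | null;
}

// Step 2 of the claim: generate an interaction page from the scene page data.
function generateInteractionPage(scene: SceneData): InteractionPage {
  return {
    sceneGraph: scene.scenePage.sceneGraph,
    elements: scene.scenePage.elements,
    playingVideo: null,
  };
}

// Steps 3-4: when a target signal is acquired, look up the corresponding
// video data in the already-loaded scene data and update the page with it.
function updatePageForSignal(page: InteractionPage, scene: SceneData, signal: string): InteractionPage {
  const video = scene.videos[signal];
  return video === undefined ? page : { ...page, playingVideo: video };
}

const scene: SceneData = {
  scenePage: { sceneGraph: "scene.png", elements: ["crab", "shrimp"] },
  videos: { shake: "crab-dance.mp4" },
};
const updated = updatePageForSignal(generateInteractionPage(scene), scene, "shake");
console.log(updated.playingVideo); // "crab-dance.mp4"
```

Because the scene data, including the initial video data of claim 2, arrives with the load request, the signal-to-video lookup in this sketch needs no further round trip to the server.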
2. The method of claim 1, wherein the sending a load request to a server in response to an operation instruction submitted by a user, and receiving scene data returned by the server for the load request, includes:
receiving an operation instruction submitted by the user through a target application;
responding to the operation instruction and sending the loading request to the server;
and receiving the scene data returned by the server side aiming at the loading request, wherein the scene data comprises the scene page data and at least one piece of initial video data.
3. The method of claim 1, wherein the generating an interaction page according to scene page data in the scene data and presenting the interaction page to the user comprises:
extracting the scene page data from the scene data;
analyzing the scene page data to obtain static scene graph data and scene element data;
and generating the interaction page containing a static scene graph and an object selection area based on the static scene graph data and the scene element data, wherein the object selection area is used for displaying an object to be selected.
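Claim 3's parsing step might look like the following sketch, assuming the scene page data arrives as a JSON payload; the key names (`staticSceneGraph`, `elements`) are invented for illustration and are not specified by the claim.

```typescript
// Assumed JSON layout of the scene page data.
interface ScenePagePayload {
  staticSceneGraph: string;                 // backdrop image for the page
  elements: { id: string; name: string }[]; // objects to be selected
}

// Parse the scene page data and build the two page regions named in claim 3:
// the static scene graph and the object selection area.
function buildInteractionPage(raw: string): { backdrop: string; selectionArea: string[] } {
  const payload = JSON.parse(raw) as ScenePagePayload;
  return {
    backdrop: payload.staticSceneGraph,
    selectionArea: payload.elements.map(e => e.name),
  };
}

const built = buildInteractionPage(JSON.stringify({
  staticSceneGraph: "pond.png",
  elements: [{ id: "1", name: "hairy crab" }, { id: "2", name: "lobster" }],
}));
console.log(built.selectionArea); // ["hairy crab", "lobster"]
```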
4. The method according to claim 1, wherein in the case that the target signal associated with the interactive page is acquired, before the step of determining the video data corresponding to the target signal in the scene data is performed, the method further comprises:
receiving a click command submitted by the user aiming at an interaction control in the interaction page;
determining object data corresponding to the interaction control in response to the click command, updating the interaction page based on the object data, and obtaining and displaying a target interaction page containing a target object;
correspondingly, under the condition that the target signal associated with the interaction page is acquired, determining video data corresponding to the target signal in the scene data comprises the following steps:
under the condition that a target signal associated with the target interaction page is acquired, determining at least one piece of object video data in the scene data according to the target signal; and
determining, in the at least one piece of object video data, the object video data associated with the target object as the video data.
5. The method of claim 1, wherein in the case that the target signal associated with the interactive page is acquired, determining video data corresponding to the target signal in the scene data includes:
determining the signal type of a target signal under the condition that the target signal related to the interactive page is acquired through signal acquisition equipment configured by the user terminal;
the video data is determined in the scene data according to the signal type.
6. The method of claim 1, wherein updating the interactive page based on the video data and playing the interactive video associated with the target signal through the updated interactive page comprises:
rendering the video data to obtain the interactive video associated with the target signal;
determining a video playing area in the interactive page, and inserting the interactive video into the video playing area;
and playing the interactive video through the interactive page according to the video insertion result.
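A sketch of claim 6's render-locate-insert-play sequence, modeling the page as plain data rather than a DOM so the flow stays visible; the region names and the placeholder "rendering" step are assumptions, not the patent's implementation.

```typescript
// The interaction page modeled as named regions; a real user terminal would
// use DOM nodes or native views instead of strings.
interface PageModel {
  regions: Record<string, string | null>; // region name -> inserted content
}

// Claim 6 in order: render the video data into an interactive video,
// determine the video play area, insert the video, then play it.
function insertAndPlay(page: PageModel, videoData: Uint8Array, signal: string): string {
  const interactiveVideo = `video:${signal}:${videoData.length}b`; // rendering placeholder
  if (!("video-play-area" in page.regions)) {
    throw new Error("interaction page has no video play area");
  }
  page.regions["video-play-area"] = interactiveVideo; // insertion into the play area
  return interactiveVideo;                            // played per the insertion result
}

const model: PageModel = { regions: { "video-play-area": null, "selection-area": null } };
console.log(insertAndPlay(model, new Uint8Array(16), "shake")); // "video:shake:16b"
```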
7. The method of claim 1, wherein the sending a load request to a server in response to an operation instruction submitted by a user, and receiving scene data returned by the server for the load request, includes:
responding to an operation instruction submitted by the user through an object selection page, and determining a first target object and a second target object;
generating a loading request based on the first target object and the second target object, and sending the loading request to the server;
receiving the scene data returned by the server side aiming at the loading request;
wherein the scene data includes a static object graph that is presented in the interaction page and includes the first target object and the second target object.
8. The method of claim 1, wherein updating the interactive page based on the video data and playing the interactive video associated with the target signal through the updated interactive page comprises:
generating a virtual object according to the video data, and generating a motion trail of the virtual object according to the target signal;
inserting the virtual object in an interaction area of the interaction page as an update of the interaction page; and
and driving the virtual object to move in the interaction area through the movement track, and playing the interaction video through the updated interaction page.
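Claim 8 derives a motion trail for the virtual object from the target signal. The mapping below (sample index to horizontal position, signal magnitude to vertical position) is purely an illustrative assumption; the claim does not specify any particular mapping.

```typescript
interface Point { x: number; y: number }

// Map a sequence of signal samples to a trail inside the interaction area:
// time drives the horizontal position, normalized magnitude the vertical one.
function motionTrailFromSignal(samples: number[], areaWidth: number, areaHeight: number): Point[] {
  const maxMag = Math.max(...samples.map(Math.abs), 1e-9); // avoid divide-by-zero
  return samples.map((s, i) => ({
    x: (i / Math.max(samples.length - 1, 1)) * areaWidth,
    y: ((s / maxMag) * 0.5 + 0.5) * areaHeight, // center of the area at sample 0
  }));
}

const trail = motionTrailFromSignal([0, 0.4, -0.4, 0.8], 320, 240);
console.log(trail.length); // 4
```

Driving the virtual object then amounts to stepping it through these points on each animation frame of the updated interaction page.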
9. The method of any one of claims 1-8, further comprising:
reading audio data associated with the interactive video from the scene data;
and generating interactive audio according to the audio data, and playing the interactive audio after aligning the interactive audio with the interactive video.
10. A page updating method, applied to a user terminal, comprising:
responding to an operation instruction submitted by a user, sending a loading request to a server, and receiving commodity display data returned by the server for the loading request;
generating a commodity interaction page according to the commodity page data in the commodity display data and displaying the commodity interaction page to the user;
under the condition that an interaction signal related to the commodity interaction page is acquired, determining commodity video data corresponding to the interaction signal in the commodity display data;
and updating the commodity interaction page based on the commodity video data, and playing the commodity interaction video associated with the interaction signal through the updated commodity interaction page.
11. A page update system, comprising a user terminal and a server, wherein:
the user terminal is used for responding to an operation instruction submitted by a user and sending a loading request to the server;
the server is used for reading the scene data according to the loading request and sending the scene data to the user terminal;
the user terminal is further used for generating an interaction page according to scene page data in the scene data and displaying the interaction page to the user; under the condition that a target signal associated with the interactive page is acquired, determining video data corresponding to the target signal in the scene data; and updating the interactive page based on the video data, and playing the interactive video associated with the target signal through the updated interactive page.
12. The system of claim 11, wherein the user terminal is further configured to send a loading request to the server in response to an operation instruction submitted by a user through a commodity page;
the server is further configured to receive the loading request, determine scenario information associated with the loading request, and read historical behavior information corresponding to the user; determining commodity recommendation scenes associated with the users according to the scene information and the historical behavior information; and reading scene data of the commodity recommendation scene and sending the scene data to the user terminal, wherein the scene data is used for generating the interactive page and playing the interactive video in the user terminal.
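Claim 12's server-side step of choosing a commodity recommendation scene from the scenario information and the user's historical behavior could be sketched as a simple scoring pass. The candidate scenes, the score weights, and the function name are all invented for illustration; the patent does not specify a scoring rule.

```typescript
// Score each candidate scene by how often the user's history mentions it,
// with a bonus when it matches the current scenario information.
function pickRecommendationScene(scenario: string, history: string[], candidates: string[]): string {
  let best = candidates[0];
  let bestScore = -Infinity;
  for (const c of candidates) {
    const score = history.filter(h => h === c).length + (scenario === c ? 2 : 0);
    if (score > bestScore) { best = c; bestScore = score; }
  }
  return best;
}

const picked = pickRecommendationScene(
  "hotpot-night",
  ["seafood-feast", "seafood-feast", "hotpot-night"],
  ["seafood-feast", "hotpot-night", "breakfast-combo"],
);
console.log(picked); // "hotpot-night" (1 history hit + scenario bonus beats 2 history hits)
```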
13. A computing device, comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions, and the processor is configured to execute the computer-executable instructions which, when executed by the processor, implement the steps of the method of any one of claims 1 to 10.
14. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 10.
CN202310035766.2A 2023-01-10 2023-01-10 Page updating method, device and system Pending CN116301964A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310035766.2A CN116301964A (en) 2023-01-10 2023-01-10 Page updating method, device and system


Publications (1)

Publication Number Publication Date
CN116301964A (en) 2023-06-23

Family

ID=86819335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310035766.2A Pending CN116301964A (en) 2023-01-10 2023-01-10 Page updating method, device and system

Country Status (1)

Country Link
CN (1) CN116301964A (en)

Similar Documents

Publication Publication Date Title
US11520824B2 (en) Method for displaying information, electronic device and system
CN102999630B (en) Based on posture mark to watch related content
JP5951759B2 (en) Extended live view
CN111314761B (en) Video stream associated information live broadcast interaction method and terminal equipment thereof
CN113420247A (en) Page display method and device, electronic equipment, storage medium and program product
US20190327357A1 (en) Information presentation method and device
CN103686344A (en) Enhanced video system and method
CN104113786A (en) Information acquisition method and device
US10255243B2 (en) Data processing method and data processing system
CN110390569B (en) Content promotion method, device and storage medium
CN103440260A (en) Method and equipment used for providing representation information
CN105933730A (en) Video association information recommendation method and device
CN112148977A (en) Dynamic network resource display and matching method, device, electronic equipment and medium
US11544921B1 (en) Augmented reality items based on scan
WO2022001600A1 (en) Information analysis method, apparatus, and device, and storage medium
CN105589835A (en) Selectable Styles for Text Messaging System Font Service Providers
CN113254135A (en) Interface processing method and device and electronic equipment
CN105611050A (en) Selectable text messaging styles for brand owners
CN114727143A (en) Multimedia resource display method and device
CN105611049A (en) Selectable styles for text messaging system publishers
US10600060B1 (en) Predictive analytics from visual data
CN114329179A (en) Object recommendation method and device
CN111199443A (en) Commodity information processing method, commodity information processing device and computer-readable storage medium
CN114816180A (en) Content browsing guiding method and device, electronic equipment and storage medium
CN108718423A (en) A kind of 2 D code information shared system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination