CN114860358A - Object processing method and device, electronic equipment and storage medium


Info

Publication number
CN114860358A
Authority
CN
China
Prior art keywords
scene
configuration data
displayed
target event
sequence
Prior art date
Legal status
Granted
Application number
CN202210344776.XA
Other languages
Chinese (zh)
Other versions
CN114860358B (en)
Inventor
蔡晓华
李伟鹏
杨小刚
胡方正
鞠达豪
杨凯丽
朱彤
孙弘法
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210344776.XA priority Critical patent/CN114860358B/en
Priority claimed from CN202210344776.XA external-priority patent/CN114860358B/en
Publication of CN114860358A publication Critical patent/CN114860358A/en
Application granted granted Critical
Publication of CN114860358B publication Critical patent/CN114860358B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/957 Browsing optimisation, e.g. caching or content distillation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to an object processing method, an object processing apparatus, an electronic device, and a storage medium. The method includes: acquiring configuration data of an object to be displayed, the configuration data comprising scene configuration data and view configuration data; determining a scene sequence having scene information based on the scene configuration data; rendering the views in each scene in the scene sequence based on the view configuration data to obtain a rendered scene sequence; adding the rendered scene sequence to a view container to obtain the object to be displayed; playing the object to be displayed in response to a playing instruction for the object to be displayed; and, during the playing of the object to be displayed, if a target event in the object to be displayed is monitored, executing the behavior corresponding to the target event based on the trigger corresponding to the target event. The method and the apparatus form a closed loop of the object processing scheme, so that different platforms can generate completely consistent objects to be displayed based on the configuration data, thereby unifying interface display and logic and reducing later maintenance costs.

Description

Object processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to an object processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of the mobile internet, users can browse information through applications and websites. When an application program or a website provides information to a user, many interaction logics are involved, including front-end links such as information display, user clicking, and playing and watching, as well as back-end functions such as issuing interaction logic. The application program or website therefore needs the logic capability to parse, process, and execute each link and scene related to the information.
In the prior art, developers generally choose different approaches on different platforms to solve the above information processing problem. For example, some platforms build the information solely on a third-party framework, while others combine a native framework with a third-party framework. As a result, the display effect of the constructed information may be inconsistent across platforms, so more time and effort must be invested later to maintain the information on each platform, which increases the maintenance cost.
Disclosure of Invention
The present disclosure provides an object processing method, an object processing apparatus, an electronic device, and a storage medium, and the technical solution of the present disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided an object processing method, including:
acquiring configuration data of an object to be displayed; the configuration data comprises scene configuration data and view configuration data;
determining a scene sequence having scene information based on the scene configuration data;
rendering views in each scene in the scene sequence based on the view configuration data to obtain a rendered scene sequence;
adding the rendered scene sequence to a view container to obtain an object to be displayed;
responding to a playing instruction of the object to be displayed, and playing the object to be displayed;
in the playing process of the object to be displayed, if a target event in the object to be displayed is monitored, executing a behavior corresponding to the target event based on a trigger corresponding to the target event.
In some possible embodiments, before the acquiring of the configuration data of the object to be displayed, the method further includes:
creating a scene controller;
determining a scene sequence with scene information based on the scene configuration data, rendering a view in each scene in the scene sequence based on the view configuration data, and obtaining a rendered scene sequence, including:
sending configuration data of an object to be displayed to a scene controller;
analyzing the configuration data of the object to be displayed by using a scene controller to obtain scene configuration data and view configuration data;
determining, with a scene controller, a sequence of scenes with scene information based on scene configuration data;
and rendering the view in each scene in the scene sequence based on the view configuration data by using the scene controller to obtain a rendered scene sequence.
In some possible embodiments, determining a sequence of scenes with scene information based on the scene configuration data includes:
determining feature information of scenes, display feature information of the scenes and feature information among the scenes based on scene configuration data;
and determining a scene sequence based on the feature information of the scene, the display feature information of the scene and the feature information among the scenes.
In some possible embodiments, rendering a view in each scene in the sequence of scenes based on the view configuration data to obtain a rendered sequence of scenes includes:
determining image-text parameters, control parameters and animation effect parameters in each scene in the scene sequence based on the view configuration data;
rendering the corresponding scene based on the image-text parameters, the control parameters and the animation effect parameters in each scene to obtain a rendered scene sequence.
In some possible embodiments, the method further comprises:
determining a target event on a scene in a sequence of scenes;
configuring a trigger of a target event;
and configuring, on the instruction relay, a behavior operation instruction corresponding to the trigger based on the trigger identifier of the trigger.
In some possible embodiments, in the playing process of the object to be displayed, if a target event in the object to be displayed is monitored, the executing the behavior corresponding to the target event based on the trigger corresponding to the target event includes:
monitoring events on scenes in a scene sequence through an event monitor in the playing process of an object to be displayed;
if the target event is monitored, sending a trigger corresponding to the target event through an event monitor;
determining a trigger identifier of the trigger through an instruction relay, and determining a behavior operation instruction based on the trigger identifier;
and executing the behaviors in the received behavior operation instruction through the behavior executor.
In some possible embodiments, if a target event is monitored, sending a trigger corresponding to the target event through an event listener includes:
if the first target event is monitored, sending an original trigger corresponding to the first target event through an event monitor;
or;
if the first target event is monitored and the condition parameters of the first target event meet the preset condition parameters, determining to generate a second target event, and sending a condition trigger corresponding to the second target event through an event monitor;
or;
and if the first target event is monitored and the time parameter of the first target event meets the preset time parameter, determining to generate a third target event, and sending a time trigger corresponding to the third target event through the event monitor.
In some possible embodiments, if the first target event is monitored and the time parameter of the first target event meets the preset time parameter, determining to generate a third target event, and sending a time trigger corresponding to the third target event through the event monitor, includes:
if the first target event is monitored, generating a delay timer;
when the timing time parameter on the delay timer meets a first preset time parameter, determining that a third target event is generated;
and sending a first time trigger corresponding to the third target event through the event listener.
In some possible embodiments, if the first target event is monitored and the time parameter of the first target event meets the preset time parameter, determining to generate a third target event, and sending a time trigger corresponding to the third target event through the event monitor, includes:
if the first target event is monitored, generating an interval timer;
generating a third target event and resetting the interval timer when the interval time parameter on the interval timer meets a second preset time parameter;
and sending a second time trigger corresponding to the third target event through the event listener.
According to a second aspect of the embodiments of the present disclosure, there is provided an object processing apparatus including:
the data acquisition module is configured to execute the acquisition of configuration data of an object to be displayed; the configuration data comprises scene configuration data and view configuration data;
a scene construction module configured to perform determining a sequence of scenes with scene information based on scene configuration data;
the view construction module is configured to render views in each scene in the scene sequence based on the view configuration data to obtain a rendered scene sequence;
the object generation module is configured to add the rendered scene sequence to the view container to obtain an object to be displayed;
the playing module is configured to play the object to be displayed in response to a playing instruction for the object to be displayed;
and the event execution module is configured to execute the action corresponding to the target event based on the trigger corresponding to the target event if the target event in the object to be displayed is monitored in the playing process of the object to be displayed.
In some possible embodiments, before the configuration data of the object to be displayed is acquired, the apparatus further includes:
a controller creation module configured to create a scene controller;
a scene building module configured to perform:
sending configuration data of an object to be displayed to a scene controller;
analyzing the configuration data of the object to be displayed by using a scene controller to obtain scene configuration data and view configuration data;
determining, with a scene controller, a sequence of scenes with scene information based on scene configuration data;
and the view construction module is configured to render the view in each scene in the scene sequence based on the view configuration data by using the scene controller, so as to obtain a rendered scene sequence.
In some possible embodiments, the scene construction module is configured to perform:
determining feature information of scenes, display feature information of the scenes and feature information among the scenes based on scene configuration data;
and determining a scene sequence based on the feature information of the scene, the display feature information of the scene and the feature information among the scenes.
In some possible embodiments, the view construction module is configured to perform:
determining image-text parameters, control parameters and animation effect parameters in each scene in the scene sequence based on the view configuration data;
rendering the corresponding scene based on the image-text parameters, the control parameters and the animation effect parameters in each scene to obtain a rendered scene sequence.
In some possible embodiments, the apparatus further comprises:
an event determination module configured to perform determining a target event on a scene in a sequence of scenes;
a trigger configuration module configured to configure a trigger of the target event;
and an instruction configuration module configured to configure, on the instruction relay, a behavior operation instruction corresponding to the trigger based on the trigger identifier of the trigger.
In some possible embodiments, the event execution module includes:
the monitoring module is configured to monitor the events on the scenes in the scene sequence through the event monitor in the playing process of the object to be displayed;
the sending module is configured to send a trigger corresponding to the target event through the event listener if the target event is monitored;
the instruction determining module is configured to determine, through the instruction relay, the trigger identifier of the trigger, and determine the behavior operation instruction based on the trigger identifier;
and the execution module is configured to execute the behaviors in the received behavior operation instruction through the behavior executor.
In some possible embodiments, the sending module is configured to perform:
if the first target event is monitored, sending an original trigger corresponding to the first target event through an event monitor;
or;
if the first target event is monitored and the condition parameters of the first target event meet the preset condition parameters, determining to generate a second target event, and sending a condition trigger corresponding to the second target event through an event monitor;
or;
and if the first target event is monitored and the time parameter of the first target event meets the preset time parameter, determining to generate a third target event, and sending a time trigger corresponding to the third target event through the event monitor.
In some possible embodiments, the sending module is configured to perform:
if the first target event is monitored, generating a delay timer;
when the timing time parameter on the delay timer meets a first preset time parameter, determining that a third target event is generated;
and sending a first time trigger corresponding to the third target event through the event listener.
In some possible embodiments, the sending module is configured to perform:
if the first target event is monitored, generating an interval timer;
generating a third target event and resetting the interval timer when the interval time parameter on the interval timer meets a second preset time parameter;
and sending a second time trigger corresponding to the third target event through the event listener.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method of any one of the first aspect as described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the method of any one of the first aspects of the embodiments of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program, the computer program being stored in a readable storage medium, from which at least one processor of a computer device reads and executes the computer program, causing the computer device to perform the method of any one of the first aspects of embodiments of the present disclosure.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
acquiring configuration data of an object to be displayed; the configuration data comprises scene configuration data and view configuration data; determining a scene sequence having scene information based on the scene configuration data; rendering views in each scene in the scene sequence based on the view configuration data to obtain a rendered scene sequence; adding the rendered scene sequence to a view container to obtain an object to be displayed; responding to a playing instruction of the object to be displayed, and playing the object to be displayed; in the playing process of the object to be displayed, if a target event in the object to be displayed is monitored, executing a behavior corresponding to the target event based on a trigger corresponding to the target event. The method and the device form a closed loop of an object processing (including object construction and display) scheme, so that different platforms can generate completely consistent objects to be displayed based on configuration data, interface display and logic unification is realized, and later maintenance cost is reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a schematic diagram illustrating an application environment in accordance with an illustrative embodiment;
FIG. 2 is a flow diagram illustrating a method of object processing in accordance with an exemplary embodiment;
FIG. 3 is a flowchart illustrating an implementation of a sequence of rendered scenes in accordance with an illustrative embodiment;
FIG. 4 is a flow diagram illustrating an implementation of playing an object to be presented according to an exemplary embodiment;
FIG. 5 is a flow diagram illustrating a first time trigger implementation in accordance with an exemplary embodiment;
FIG. 6 is a flow diagram illustrating a second time trigger implementation in accordance with an illustrative embodiment;
FIG. 7 is a block diagram illustrating an object processing apparatus in accordance with an illustrative embodiment;
FIG. 8 is a block diagram illustrating an electronic device for object processing in accordance with an illustrative embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) referred to in the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application environment of an object processing method according to an exemplary embodiment. As shown in fig. 1, the application environment may include a terminal 01, and an object browser 011, a scene controller 012 and an instruction relay 013 located in an application program of the terminal 01.
In some possible embodiments, the terminal 01 may include, but is not limited to, a smart phone, a desktop computer, a tablet computer, a notebook computer, a smart speaker, a digital assistant, an Augmented Reality (AR)/Virtual Reality (VR) device, a smart wearable device, and the like. The software running on the terminal may be an application program, an applet, or the like. Alternatively, the operating system running on the terminal may include, but is not limited to, an Android system, an iOS system, Linux, Windows, Unix, and the like.
The object browser 011, the scene controller 012 and the instruction relay 013 may belong to the same application function of the terminal 01, and the scene controller 012 and the instruction relay 013 may be created by the object browser 011.
In some possible embodiments, the object browser 011 obtains configuration data of an object to be presented; the configuration data comprises scene configuration data and view configuration data; determining a scene sequence having scene information based on the scene configuration data; rendering views in each scene in the scene sequence based on the view configuration data to obtain a rendered scene sequence; adding the rendered scene sequence to a view container to obtain an object to be displayed; responding to a playing instruction of the object to be displayed, and playing the object to be displayed; in the playing process of the object to be displayed, if a target event in the object to be displayed is monitored, executing a behavior corresponding to the target event based on a trigger corresponding to the target event.
In addition, it should be noted that fig. 1 shows only one application environment of the object processing method provided by the present disclosure, and in practical applications, other application environments may also be included.
Fig. 2 is a flowchart illustrating an object processing method according to an exemplary embodiment, where as shown in fig. 2, the object processing method may be applied to a server or a client, mainly to an application program in the client, and includes the following steps:
in step S201, configuration data of an object to be displayed is acquired; the configuration data includes scene configuration data and view configuration data.
In the embodiment of the present application, the object processing method from step S201 to step S211, and the embodiments extended from it, are developed based on the native functions of the terminal and applied to an application program of the terminal. Development based on the native functions of the terminal refers to developing an application program on a platform by using the development language, development class libraries, development tools, and the like provided by that platform.
The beneficial effects of native development are as follows:
First, based on the stability of native development, the application can make use of all functions of the terminal.
Second, because the application is developed natively, no additional virtual machine or similar runtime is needed to load and start its functions, so these functions run as fast as native functions, with high speed and high performance, which provides an excellent user experience.
Third, on the native framework, the terminal can support a large number of graphics and animations without stuttering and with fast response.
Fourth, functions established on the native framework have high compatibility within the application program of the terminal.
Fifth, the terminal can call the interfaces it provides relatively quickly, so the processing speed is superior.
In this embodiment, the object in the object processing method may be information displayed in an application program.
In order to enable devices of different systems, devices of the same system but different sizes, or different applications in the same device to obtain available fault-free objects to be presented based on their functions, the objects to be presented in the embodiment of the present application may be generated by an object browser based on configuration data.
Optionally, the object browser in the application may first obtain configuration data of the object to be displayed from another device, and generate the object to be displayed based on the configuration data.
In an alternative embodiment, the object browser may include an instruction relay and a behavior executor. In the embodiment of the application, before the object browser in the application program acquires the configuration data of the object to be displayed from other devices, the object browser needs to perform preparatory work: initializing the object browser, registering some general processors, routes and embedded points on the instruction relay (that is, registering some general functions on the instruction relay), and creating a view container in the information style in the object browser.
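For illustration, a minimal sketch of this preparatory work is given below, assuming hypothetical ObjectBrowser, InstructionRelay and ViewContainer classes; the names and APIs are assumptions for illustration, not the actual implementation of this disclosure.

class InstructionRelay {
    private val handlers = mutableMapOf<String, (String) -> Unit>()

    // Register a general function (e.g. routing or embedded-point reporting) under a key.
    fun register(key: String, handler: (String) -> Unit) {
        handlers[key] = handler
    }

    fun dispatch(key: String, payload: String) = handlers[key]?.invoke(payload)
}

class ViewContainer(val style: String)

class ObjectBrowser {
    val relay = InstructionRelay()
    lateinit var container: ViewContainer

    // Preparatory work performed before configuration data is requested from other devices.
    fun initialize() {
        relay.register("route") { target -> println("route to $target") }
        relay.register("embeddedPoint") { point -> println("report $point") }
        container = ViewContainer(style = "information")
    }
}

fun main() {
    ObjectBrowser().initialize()
}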
In step S203, a scene sequence having scene information is determined based on the scene configuration data.
In step S205, a view in each scene in the scene sequence is rendered based on the view configuration data, so as to obtain a rendered scene sequence.
In an alternative embodiment, the object browser may determine a scene sequence with scene information based on the scene configuration data, and render views in each scene in the scene sequence based on the view configuration data, so as to obtain a rendered scene sequence.
In another alternative embodiment, to avoid the object browser having to undertake too many tasks, the object browser may create a module to take over some of the work, reducing its burden and thereby increasing the processing speed of the overall solution. Alternatively, the object browser may create a scene controller while creating the view container of the information style.
Fig. 3 is a flowchart illustrating an implementation of a sequence of rendered scenes, according to an exemplary embodiment, as shown in fig. 3, including:
in step S301, configuration data of an object to be presented is sent to the scene controller.
In the embodiment of the application, the object browser may send the configuration data of the object to be displayed to the scene controller.
In step S302, the scene controller is used to analyze the configuration data of the object to be displayed, so as to obtain scene configuration data and view configuration data.
Optionally, after the scene controller receives the configuration data, the configuration data may be analyzed to obtain scene configuration data and view configuration data.
In step S303, a scene sequence having scene information is determined based on the scene configuration data with the scene controller.
Alternatively, the scene controller may determine a sequence of scenes with scene information based on the scene configuration data.
In an alternative embodiment, the scene controller may determine feature information of scenes, presentation feature information of scenes, and feature information between scenes based on the scene configuration data, and determine each scene based on the feature information of scenes, the presentation feature information of scenes, and the feature information between scenes, and then compose an ordered sequence of scenes from a plurality of scenes. The feature information of the scenes may include the number of the scenes and the display position of each scene on a preset page. The display characteristic information of the scene comprises the display front-back sequence of the scene, the display time of the scene and the display duration of each scene. The inter-scene feature information includes switching contents between scenes.
The display time of a scene refers to the condition under which the scene is triggered for display; some scenes may be triggered based on a control, and others may be triggered based on time. The switching content between scenes may be a switching animation between scenes or a switching special effect (such as a pop-up window, fly-in, fade, and the like).
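A minimal sketch of how such scene configuration data could be modeled and ordered into a scene sequence is given below; the field names (order, displayTrigger, durationMs, transition, and so on) are assumptions derived from the feature information described above, not the actual configuration format.

// Illustrative sketch only; field names are assumptions.
data class SceneConfig(
    val id: String,
    val position: String,        // display position on the preset page
    val order: Int,              // front-back display order
    val displayTrigger: String,  // condition that triggers display, e.g. "control" or "time"
    val durationMs: Long,        // display duration
    val transition: String       // switching content to the next scene, e.g. "fade"
)

// Determine an ordered scene sequence from the scene configuration data.
fun buildSceneSequence(configs: List<SceneConfig>): List<SceneConfig> =
    configs.sortedBy { it.order }

fun main() {
    val sequence = buildSceneSequence(
        listOf(
            SceneConfig("end", "fullscreen", 2, "time", 3000, "popup"),
            SceneConfig("main", "fullscreen", 1, "control", 5000, "fade")
        )
    )
    println(sequence.map { it.id })  // [main, end]
}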
In step S304, the scene controller is used to render the view in each scene in the scene sequence based on the view configuration data, so as to obtain a rendered scene sequence.
Optionally, the scene controller may render the view in each scene in the scene sequence based on the view configuration data, so as to obtain a rendered scene sequence.
In an alternative embodiment, the scene controller may determine, based on the view configuration data, an image-text parameter, a control parameter, and an animation effect parameter in each scene in the scene sequence, and render the corresponding scene based on the image-text parameter, the control parameter, and the animation effect parameter in each scene to obtain a rendered scene sequence.
The image-text parameters in each scene comprise image parameters, text parameters, character parameters and the like in each scene. The animation effect parameters may include animation parameters, expression parameters, background parameters, watermark parameters, and the like.
Wherein, the picture parameter may be an address of the picture. The text in the text parameter may be plain text, and the text parameter may be a parameter for describing the specific content of the text. The view may further include rich text, the rich text may include text parameters, watermark parameters, background parameters, symbol parameters, segmentation parameters, and the like, and the scene controller may generate the rich text in the view based on the parameters included in the rich text. In addition, the control parameters may include button parameters that describe the location of the button in the scene, the shape, color, etc. of the button.
Optionally, the view may also include an animation, the animation may correspond to animation parameters, and the animation parameters may be used to describe the transparency of the animation, the position in the scene, whether and how to rotate, the degree of zoom, and so on.
Thus, the scene controller renders the corresponding scene based on the picture parameter, the text parameter, the character parameter, the watermark parameter, the background parameter, the control parameter and the animation parameter in each scene to obtain the text, the rich text, the picture, the button or the animation in the scene, and further obtains a rendered scene sequence.
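The following sketch illustrates, under assumed parameter types (ImageTextParams, ControlParams, AnimationParams), how a scene view could be rendered from its configured parameters; it is an illustration of the idea rather than the actual rendering implementation.

// Illustrative sketch only; parameter types and field names are assumptions.
data class ImageTextParams(val pictureUrl: String?, val text: String?)
data class ControlParams(val buttonLabel: String, val x: Int, val y: Int, val color: String)
data class AnimationParams(val alpha: Float, val rotate: Boolean, val scale: Float)

data class RenderedView(val elements: List<String>)

// Render one scene's view from its configured image-text, control and animation parameters.
fun renderScene(imageText: ImageTextParams, control: ControlParams, animation: AnimationParams): RenderedView {
    val elements = mutableListOf<String>()
    imageText.pictureUrl?.let { elements.add("picture:$it") }
    imageText.text?.let { elements.add("text:$it") }
    elements.add("button:${control.buttonLabel}@(${control.x},${control.y})")
    elements.add("animation:alpha=${animation.alpha},scale=${animation.scale}")
    return RenderedView(elements)
}

fun main() {
    val view = renderScene(
        ImageTextParams("https://example.com/pic.png", "hello"),
        ControlParams("OK", 10, 20, "#FFFFFF"),
        AnimationParams(alpha = 0.8f, rotate = false, scale = 1.2f)
    )
    println(view.elements)
}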
In step S207, the rendered scene sequence is added to the view container to obtain the object to be displayed.
In an optional embodiment, before the rendered scene sequence is added to the view container to obtain the object to be displayed, the scene controller may register the events in the scenes on the instruction relay. In this way, when an event in a scene occurs, the processor for that event, determined from the events registered in advance on the instruction relay, can determine the behavior operation instruction corresponding to the event, so that the behavior executor can subsequently execute the behavior in the behavior operation instruction.
In an alternative embodiment, the object browser may, through the scene controller, determine a target event on a scene in the scene sequence and configure a trigger of the target event, and configure, on the instruction relay, a behavior operation instruction corresponding to the trigger based on the trigger identifier of the trigger.
Specifically, for a target event on a scene in the scene sequence, the object browser may determine the target event through the scene controller, encapsulate the target event, configure a trigger of the target event, and configure a behavior operation instruction corresponding to the trigger on the instruction relay based on the trigger identifier. Therefore, when the target event is monitored, the trigger identifier corresponding to the target event can be obtained, the corresponding behavior operation instruction is determined based on the trigger identifier, and the behavior corresponding to the behavior operation instruction is executed.
The trigger configuration is explained below by way of example. For example, if a certain behavior is to be performed after the playing of the object to be displayed is completed, the scene controller determines the completion of playing of the object to be displayed as the target event and configures a trigger of the target event for the moment the playing is completed. As another example, if a behavior is to be performed at the moment a button in the scene is clicked, the scene controller determines the click of the button as the target event and configures a trigger of the target event for the moment the button is clicked.
Of course, the above two examples are only possible embodiments in each scene of the object to be displayed, and do not limit other embodiments of the present application.
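A minimal sketch of configuring a trigger and registering the corresponding behavior operation instruction on the instruction relay, keyed by the trigger identifier, is shown below; the Trigger and BehaviorInstruction types and the configure/lookup methods are assumptions for illustration.

// Illustrative sketch only; types and method names are assumptions.
data class Trigger(val id: String, val targetEvent: String)
data class BehaviorInstruction(val behavior: String, val executionObject: String)

class InstructionRelay {
    private val instructions = mutableMapOf<String, BehaviorInstruction>()

    // Configure the behavior operation instruction corresponding to a trigger identifier.
    fun configure(triggerId: String, instruction: BehaviorInstruction) {
        instructions[triggerId] = instruction
    }

    fun lookup(triggerId: String): BehaviorInstruction? = instructions[triggerId]
}

fun main() {
    val relay = InstructionRelay()
    // e.g. when playback of the object to be displayed completes, pop up the end page.
    val trigger = Trigger(id = "t-play-complete", targetEvent = "playCompleted")
    relay.configure(trigger.id, BehaviorInstruction("popPage", "https://example.com/end-page"))
    println(relay.lookup("t-play-complete"))
}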
In step S209, in response to a play instruction of the object to be presented, the object to be presented is played.
Therefore, the object to be displayed is constructed, and when the playing instruction of the object to be displayed is received, the object to be displayed can be played.
In step S211, in the playing process of the object to be displayed, if the target event in the object to be displayed is monitored, the behavior corresponding to the target event is executed based on the trigger corresponding to the target event.
Fig. 4 is a flowchart illustrating an implementation of playing an object to be presented according to an exemplary embodiment, where as shown in fig. 4, the implementation includes:
in step S401, in the process of playing the object to be displayed, monitoring an event on a scene in the scene sequence by an event monitor;
in step S402, if the event listener has monitored the target event, go to step S403; otherwise, go to step S401;
in step S403, the event listener sends a trigger corresponding to the target event;
in step S404, the instruction relay determines a trigger identifier of the trigger;
in step S405, the instruction relay determines a behavior operation instruction based on the trigger identifier;
in step S406, the command relay sends a behavior operation command to the behavior executor;
in step S407, the behavior executor executes the behavior in the behavior operation instruction.
Specifically, the terminal may monitor events on the scenes in the scene sequence through the event listener, and if the event listener monitors a target event, the event listener may send the trigger corresponding to the target event to the instruction relay. If the event listener does not monitor the target event, it may continue to monitor other events on the scenes in the scene sequence. When the instruction relay receives the trigger sent by the event listener, the instruction relay can determine the trigger identifier of the trigger, determine the behavior operation instruction based on the trigger identifier, and send the behavior operation instruction to the behavior executor. When the behavior executor receives the behavior operation instruction, the terminal can parse the behavior operation instruction through the behavior executor to obtain an execution behavior and an execution object, and execute the execution object based on the execution behavior.
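The following sketch illustrates the flow of steps S401 to S407 with assumed EventListener, InstructionRelay and BehaviorExecutor classes; it is a simplified illustration of the dispatch chain, not the actual implementation.

// Illustrative sketch only; class and method names are assumptions.
data class Trigger(val id: String)
data class BehaviorInstruction(val behavior: String, val executionObject: String)

class BehaviorExecutor {
    fun execute(instruction: BehaviorInstruction) {
        // Parse the instruction into an execution behavior and an execution object, then execute.
        println("executing ${instruction.behavior} on ${instruction.executionObject}")
    }
}

class InstructionRelay(private val executor: BehaviorExecutor) {
    private val instructions = mapOf(
        "t-play-complete" to BehaviorInstruction("popPage", "end-page")
    )

    fun onTrigger(trigger: Trigger) {
        // Determine the trigger identifier, resolve the behavior operation instruction,
        // and forward it to the behavior executor (steps S404 to S407).
        instructions[trigger.id]?.let { executor.execute(it) }
    }
}

class EventListener(private val relay: InstructionRelay, private val targetEvent: String) {
    fun onEvent(event: String) {
        // If the target event is monitored, send the corresponding trigger (steps S402 to S403).
        if (event == targetEvent) relay.onTrigger(Trigger("t-play-complete"))
    }
}

fun main() {
    val listener = EventListener(InstructionRelay(BehaviorExecutor()), targetEvent = "playCompleted")
    listener.onEvent("playCompleted")
}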
In the embodiments of the present application, there are many kinds of triggers, and the different triggers are described below with reference to examples.
In an alternative embodiment, the trigger is an original trigger. When the terminal monitors the first target event through the event listener, an original trigger can be generated, and the original trigger corresponding to the first target event is sent through the event listener, wherein the original trigger can carry a trigger identifier.
For example, assume that the completion of playing the object to be displayed is the first target event. When the terminal monitors, through the event listener, that the playing of the object to be displayed on the scene is completed, the original trigger corresponding to the first target event can be generated and sent to the instruction relay. When the instruction relay receives the original trigger, it can parse the trigger identifier and determine the behavior operation instruction based on the trigger identifier, and the behavior operation instruction can include behavior description information. Then, the instruction relay may send the behavior operation instruction to the behavior executor. Correspondingly, the behavior executor may parse the behavior operation instruction to obtain the execution behavior and the execution object in the behavior description information. Assuming that the execution behavior is "pop up a certain page", the execution object may be the link address of the "end page". In this manner, the behavior executor may pop up the end page on the scene.
In another alternative embodiment, the trigger is a conditional trigger. When the terminal monitors a first target event through the event monitor and the condition parameter of the first target event meets the preset condition parameter, a second target event is determined to be generated, a condition trigger corresponding to the second target event is generated, and the condition trigger corresponding to the second target event is sent through the event monitor. The condition trigger may carry a trigger identifier.
For example, it is assumed that the object to be displayed is played completely as the first target event. After the terminal monitors that the playing of the object to be displayed on the scene is completed through the event listener, it may be determined whether the number of times the playing of the object to be displayed is completed satisfies the preset condition parameter, and if so (for example, the current playing completion number is equal to 1), the terminal determines that the second target event is completed (that is, the playing of the object to be displayed on the scene is completed, and the completion number is 1). And generating a condition trigger corresponding to the second target event, and sending the condition trigger corresponding to the second target event to the instruction transfer unit, wherein when the instruction transfer unit receives the condition trigger, the trigger identifier can be analyzed, and a behavior operation instruction is determined based on the trigger identifier, and the behavior operation instruction can include behavior description information. Then, the command relay may send the behavior operation command to the behavior executor. Correspondingly, the behavior executor may analyze the behavior operation instruction to obtain an execution behavior and an execution object in the behavior description information. It is assumed that the execution behavior is "reporting" and the execution object may be "buried point". Thus, the behavior executor can report the embedded point.
In the embodiment of the present application, when configuring the condition trigger, the execution condition of the condition trigger may be set to be equal to 1 time.
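A minimal sketch of such a condition trigger, assuming a hypothetical listener that counts play completions against a preset condition parameter of 1, is given below.

// Illustrative sketch only; names are assumptions.
class ConditionTriggerListener(
    private val presetCount: Int,
    private val onConditionTrigger: (String) -> Unit
) {
    private var completions = 0

    // Called when the first target event (playback completed) is monitored.
    fun onPlayCompleted() {
        completions += 1
        if (completions == presetCount) {
            // Condition parameter met: the second target event is generated,
            // so send the corresponding condition trigger.
            onConditionTrigger("t-condition-first-completion")
        }
    }
}

fun main() {
    val listener = ConditionTriggerListener(presetCount = 1) { triggerId ->
        println("condition trigger sent: $triggerId")  // e.g. report an embedded point
    }
    listener.onPlayCompleted()
}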
In another alternative embodiment, the trigger is a time trigger. When the terminal monitors a first target event through the event monitor and the time parameter of the first target event meets the preset time parameter, a third target event is determined to be generated, a time trigger corresponding to the third target event is generated, and the time trigger corresponding to the third target event is sent through the event monitor. Wherein, the time trigger can carry a trigger identifier.
In an alternative embodiment of the time trigger, the time trigger is a first time trigger. FIG. 5 is a flow diagram illustrating a first time trigger implementation according to an exemplary embodiment, as shown in FIG. 5, including:
in step S501, a delay timer is generated when a first target event is monitored.
In step S502, the timing time parameter on the delay timer satisfies the first preset time parameter, and it is determined that the third target event is generated.
In step S503, the first time trigger corresponding to the third target event is sent by the event listener.
In the embodiment of the application, when the terminal monitors a first target event through the event monitor, a delay timer may be generated, a timing time parameter on the delay timer meets a first preset time parameter, a third target event is determined to be generated, a first time trigger corresponding to the third target event is generated, and the first time trigger corresponding to the third target event is sent through the event monitor. The first time trigger may carry a trigger identifier.
For example, it is assumed that playing the object to be displayed in the scene is the first target event. When the terminal monitors that the object to be displayed is played in the scene through the event monitor, a delay timer can be generated and timed. And determining whether the timing time parameter on the delay timer meets a first preset time parameter, if so, determining that a third target event (namely playing the object to be displayed in the scene and the playing time meets 3 seconds) is generated by the terminal (for example, the timing time parameter is 3 seconds). And then, the terminal generates a first time trigger corresponding to the third target event and sends the first time trigger corresponding to the third target event to the instruction transfer device. When the instruction translator receives the first time trigger, the trigger identifier may be parsed, and the behavior operation instruction may be determined based on the trigger identifier, where the behavior operation instruction may include behavior description information. Then, the command relay may send the behavior operation command to the behavior executor. Correspondingly, the behavior executor may analyze the behavior operation instruction to obtain an execution behavior and an execution object in the behavior description information. It is assumed that the execution behavior is "play a certain animation scene", and the execution object may be a connection address of "a certain animation scene". In this way, the action executor can play the animation scene.
In this embodiment of the application, when configuring the first time trigger, the execution time of the first time trigger may be set to 3 seconds.
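A minimal sketch of the delay-timer behavior, assuming a hypothetical listener built on java.util.Timer with a first preset time parameter of 3 seconds, is given below.

import java.util.Timer
import kotlin.concurrent.schedule

// Illustrative sketch only; names are assumptions.
class DelayTriggerListener(
    private val delayMs: Long,
    private val onTimeTrigger: (String) -> Unit
) {
    private val timer = Timer()

    // Called when the first target event (playback started) is monitored: start the delay timer.
    fun onPlayStarted() {
        timer.schedule(delayMs) {
            // Timing time parameter meets the first preset time parameter:
            // the third target event is generated, so send the first time trigger.
            onTimeTrigger("t-delay-3s")
        }
    }

    fun stop() = timer.cancel()
}

fun main() {
    val listener = DelayTriggerListener(delayMs = 3000) { triggerId ->
        println("first time trigger sent: $triggerId")  // e.g. play an animation scene
    }
    listener.onPlayStarted()
    Thread.sleep(3500)  // keep the demo alive long enough to observe the trigger
    listener.stop()
}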
In an alternative embodiment of the time trigger, the time trigger is a second time trigger. FIG. 6 is a flow diagram illustrating a second time trigger implementation according to an example embodiment, as shown in FIG. 6, including:
in step S601, if the first target event is monitored, an interval timer is generated.
In step S602, whenever the interval time parameter on the interval timer satisfies the second preset time parameter, a third target event is generated and the interval timer is reset.
In step S603, a second time trigger corresponding to the third target event is sent by the event listener.
In the embodiment of the application, when the terminal monitors the first target event through the event listener, the interval timer can be generated, and whenever the interval time parameter on the interval timer meets the second preset time parameter, the third target event is generated and the interval timer is reset. A second time trigger corresponding to the third target event is then generated and sent through the event listener, wherein the second time trigger can carry a trigger identifier.
For example, it is assumed that playing the object to be displayed in the scene is the first target event. When the terminal monitors that the object to be displayed is played in the scene through the event monitor, an interval timer can be generated and timed. Every time the interval time parameter on the interval timer meets the second preset time parameter (5 seconds), the terminal determines to generate a third target event (namely, the object to be displayed is played in the scene, and the playing time meets 5 seconds), and the interval timer can be reset. And then, the terminal generates a second time trigger corresponding to the third target event and sends the second time trigger corresponding to the third target event to the instruction transfer device. When the instruction relay receives the second time trigger, the trigger identifier may be analyzed, and the behavior operation instruction may be determined based on the trigger identifier, where the behavior operation instruction may include behavior description information. Then, the command relay may send the behavior operation command to the behavior executor. Correspondingly, the behavior executor may analyze the behavior operation instruction to obtain an execution behavior and an execution object in the behavior description information. Assuming that the execution behavior is "play a certain special effect", the execution object may be a connection address of "special effect". In this way, the action executor can play the special effect (such as fireworks animation). Therefore, as long as the object to be displayed is played on the scene, the firework animation can be played every 5 seconds.
In this embodiment of the application, when configuring the second time trigger, the execution time of the second time trigger may be set to 5 seconds at intervals, and the execution times is set to infinity.
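A minimal sketch of the interval-timer behavior, assuming a hypothetical listener built on java.util.Timer with a second preset time parameter of 5 seconds, is given below.

import java.util.Timer
import kotlin.concurrent.scheduleAtFixedRate

// Illustrative sketch only; names are assumptions.
class IntervalTriggerListener(
    private val intervalMs: Long,
    private val onTimeTrigger: (String) -> Unit
) {
    private val timer = Timer()

    // Called when the first target event (playback started) is monitored: start the interval timer.
    fun onPlayStarted() {
        timer.scheduleAtFixedRate(intervalMs, intervalMs) {
            // Every time the interval time parameter meets the second preset time parameter,
            // the third target event is generated and the interval effectively restarts.
            onTimeTrigger("t-interval-5s")
        }
    }

    fun stop() = timer.cancel()  // e.g. when playback of the object to be displayed stops
}

fun main() {
    val listener = IntervalTriggerListener(intervalMs = 5000) { triggerId ->
        println("second time trigger sent: $triggerId")  // e.g. play a fireworks special effect
    }
    listener.onPlayStarted()
    Thread.sleep(11_000)  // observe two firings, then stop
    listener.stop()
}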
Optionally, in a specific embodiment, the party executing the behavior may be the scene controller; for example, when the end page pops up, the scene controller executes the behavior and controls the popping up of the end-page scene. Optionally, the executing party may also be another processor.
In the embodiment of the application, a trigger of an event can be configured on each interface of the application program, so that the application program is controlled through this mechanism. One event listener may listen for different events at the same time, or different event listeners may each listen for different events at the same time.
As mentioned above, when executing the behavior in the behavior operation instruction, the terminal may parse the behavior operation instruction through the behavior executor, determine the execution behavior and the execution object, and execute the execution object based on the execution behavior. In this way, on the basis of native development, the present application completes the front-end links related to information display, user clicking, playing and watching, information disappearance and the like, as well as the back-end functions related to statistical data reporting, user behavior response, interaction logic issuing and the like.
Optionally, when the terminal parses the behavior operation instruction, it may parse out the identifier of a sub-trigger. In this case, the parsing of the behavior operation instruction is treated as another target event: when it is monitored that the behavior operation instruction has been parsed, the sub-trigger corresponding to this event is sent to the instruction relay, and the instruction relay determines a child behavior operation instruction based on the identifier of the sub-trigger. The instruction relay may then send the child behavior operation instruction to the behavior executor. Correspondingly, the behavior executor may parse the child behavior operation instruction to obtain the execution behavior and the execution object in the behavior description information. In this way, the behavior executor may execute the execution object in the child behavior operation instruction based on the execution behavior.
In the above embodiment, only one trigger is nested. In the actual application process, a plurality of triggers can be nested; for the specific nesting manner, refer to the content in the previous paragraph, which is not repeated here. Therefore, the embodiment of the application can complete the interaction of complex scenes through combinations of triggers, and the applicability is stronger.
Alternatively, the first target event may be a single event, such as the completion of playing the object to be displayed.
Alternatively, the first target event may be a set of multiple events, such as the playing of the object to be displayed is completed and the click of the preset button is detected. In this way, when the terminal determines the first target event, the terminal can determine to monitor the first target event only when monitoring the first sub-target event and the second sub-target event.
Optionally, the first target event may be any one of a plurality of events, for example, the playing of the object to be displayed is completed or a preset button is detected to be clicked. In this way, when the terminal determines the first target event, it can determine that the first target event is monitored as long as it monitors either the first sub-target event or the second sub-target event.
Therefore, different events that lead to the same behavior can be associated with the same behavior operation instruction, which saves software resources.
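A minimal sketch of such a composite first target event, assuming a hypothetical matching mode (all sub-events must occur, or any one sub-event suffices), is given below.

// Illustrative sketch only; names and the matching-mode enum are assumptions.
enum class MatchMode { ALL, ANY }

class CompositeTargetEvent(
    private val subEvents: Set<String>,
    private val mode: MatchMode,
    private val onMatched: () -> Unit
) {
    private val seen = mutableSetOf<String>()

    fun onEvent(event: String) {
        if (event !in subEvents) return
        seen.add(event)
        val matched = when (mode) {
            MatchMode.ALL -> seen.containsAll(subEvents)  // all sub-target events monitored
            MatchMode.ANY -> seen.isNotEmpty()            // any one sub-target event monitored
        }
        if (matched) onMatched()
    }
}

fun main() {
    // Target event: playback completed AND the preset button clicked.
    val target = CompositeTargetEvent(setOf("playCompleted", "buttonClicked"), MatchMode.ALL) {
        println("first target event monitored")
    }
    target.onEvent("playCompleted")
    target.onEvent("buttonClicked")
}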
Optionally, if, in the playing process of the object to be displayed, an instruction for stopping playing of the object to be displayed is received, all subsequent triggers may be destroyed.
In summary, a closed loop of an object processing scheme is formed by the object browser, so that devices of different systems, devices of the same system but different sizes, or different application programs in the same device can generate completely consistent objects to be displayed based on configuration data, thereby realizing unification of interface display and logic, and reducing the later maintenance cost. In addition, the object browser is completely based on native development, is high in stability, does not need to depend on a third-party framework, is free of audit risk, and is simple and convenient.
Fig. 7 is a block diagram illustrating an object processing apparatus according to an example embodiment. Referring to fig. 7, the object processing apparatus is developed based on the native functions of a terminal and applied to an application program of the terminal, and includes a data acquisition module 701, a scene construction module 702, a view construction module 703, an object generation module 704, a playing module 705, and an event execution module 706:
a data obtaining module 701 configured to perform obtaining configuration data of an object to be displayed; the configuration data comprises scene configuration data and view configuration data;
a scene construction module 702 configured to perform determining a sequence of scenes with scene information based on scene configuration data;
a view construction module 703 configured to perform rendering on a view in each scene in the scene sequence based on the view configuration data, so as to obtain a rendered scene sequence;
an object generation module 704 configured to add the rendered scene sequence to the view container to obtain an object to be displayed;
the playing module 705 is configured to play the object to be displayed in response to a playing instruction for the object to be displayed;
the event executing module 706 is configured to execute, in the playing process of the object to be displayed, if the target event in the object to be displayed is monitored, the behavior corresponding to the target event based on the trigger corresponding to the target event.
In some possible embodiments, before the configuration data of the object to be displayed is acquired, the apparatus further includes:
a controller creation module configured to create a scene controller;
a scene building module configured to perform:
sending configuration data of an object to be displayed to a scene controller;
analyzing the configuration data of the object to be displayed by using a scene controller to obtain scene configuration data and view configuration data;
determining, with a scene controller, a sequence of scenes with scene information based on scene configuration data;
and the view construction module is configured to render the view in each scene in the scene sequence based on the view configuration data by using the scene controller, so as to obtain a rendered scene sequence.
In some possible embodiments, the scene construction module is configured to perform:
determining feature information of scenes, display feature information of the scenes and feature information among the scenes based on scene configuration data;
and determining a scene sequence based on the feature information of the scene, the display feature information of the scene and the feature information among the scenes.
In some possible embodiments, the view construction module is configured to perform:
determining image-text parameters, control parameters and animation effect parameters in each scene in the scene sequence based on the view configuration data;
rendering the corresponding scene based on the image-text parameters, the control parameters and the animation effect parameters in each scene to obtain a rendered scene sequence.
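As a hedged sketch of rendering a scene from its image-text, control and animation-effect parameters, one possibility is shown below; all parameter and class names are assumptions introduced for the example.

```kotlin
// Illustrative assumption of the per-scene view parameters; not taken from the disclosure.
data class ViewParams(
    val texts: List<String>,          // image-text parameters
    val controls: List<String>,       // control parameters
    val animationDurationMs: Long     // animation effect parameters
)

class RenderedScene(val id: String, val nodes: MutableList<String> = mutableListOf())

fun renderScene(sceneId: String, params: ViewParams): RenderedScene {
    val scene = RenderedScene(sceneId)
    params.texts.forEach { scene.nodes += "text:$it" }          // lay out image-text views
    params.controls.forEach { scene.nodes += "control:$it" }    // lay out controls
    scene.nodes += "animation:${params.animationDurationMs}ms"  // attach the animation effect
    return scene
}
```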
In some possible embodiments, the apparatus further comprises:
an event determination module configured to perform determining a target event on a scene in a sequence of scenes;
a trigger configuration module configured to configure a trigger for the target event;
and an instruction configuration module configured to configure, on an instruction transit device, a behavior operation instruction corresponding to the trigger based on a trigger identifier of the trigger.
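One way to picture the relationship between a trigger identifier and the behavior operation instruction held on the instruction transit device is sketched below; the names TriggerId, BehaviorInstruction and InstructionTransit, and the registration API, are hypothetical.

```kotlin
// Hypothetical sketch: mapping a trigger identifier to a behavior operation
// instruction on an instruction transit device. All names are illustrative.
typealias TriggerId = String
typealias BehaviorInstruction = () -> Unit

class InstructionTransit {
    private val instructions = mutableMapOf<TriggerId, BehaviorInstruction>()

    // Configure the behavior operation instruction corresponding to a trigger.
    fun configure(triggerId: TriggerId, instruction: BehaviorInstruction) {
        instructions[triggerId] = instruction
    }

    // Resolve a trigger identifier to its behavior operation instruction.
    fun resolve(triggerId: TriggerId): BehaviorInstruction? = instructions[triggerId]
}
```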
In some possible embodiments, the event execution module includes:
a monitoring module configured to monitor, through an event listener, events on the scenes in the scene sequence during playing of the object to be displayed;
a sending module configured to send, through the event listener, a trigger corresponding to the target event if the target event is monitored;
an instruction determination module configured to determine, through an instruction transit device, a trigger identifier of the trigger and determine a behavior operation instruction based on the trigger identifier;
and an execution module configured to execute, through a behavior executor, the behavior in the received behavior operation instruction.
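The listen-trigger-resolve-execute flow described by these four sub-modules could look roughly like the self-contained sketch below; the EventPipeline and BehaviorExecutor names, the map-based instruction table and the callback wiring are all assumptions rather than the disclosed implementation.

```kotlin
// Hypothetical end-to-end sketch of the listen -> trigger -> resolve -> execute flow.
class BehaviorExecutor {
    fun execute(instruction: () -> Unit) = instruction()
}

class EventPipeline(
    private val instructionTable: Map<String, () -> Unit>,  // trigger id -> behavior instruction
    private val executor: BehaviorExecutor = BehaviorExecutor()
) {
    // Event listener side: called when a target event is monitored during playback.
    fun onTargetEvent(triggerId: String) {
        // Instruction transit side: look up the behavior operation instruction by trigger id.
        val instruction = instructionTable[triggerId] ?: return
        // Behavior executor side: execute the behavior in the instruction.
        executor.execute(instruction)
    }
}

// Usage illustration:
// val pipeline = EventPipeline(mapOf("jump_trigger" to { println("open landing page") }))
// pipeline.onTargetEvent("jump_trigger")
```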
In some possible embodiments, the transmitting module is configured to perform:
if the first target event is monitored, sending an original trigger corresponding to the first target event through the event listener;
or;
if the first target event is monitored and the condition parameter of the first target event meets a preset condition parameter, determining to generate a second target event, and sending a condition trigger corresponding to the second target event through the event listener;
or;
and if the first target event is monitored and the time parameter of the first target event meets a preset time parameter, determining to generate a third target event, and sending a time trigger corresponding to the third target event through the event listener.
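The three trigger kinds distinguished above (original, condition, time) could be modeled as in the sketch below; the sealed class, its fields and the dispatch rule are assumptions made only to illustrate the distinction.

```kotlin
// Hypothetical sketch of the three trigger kinds; all names are illustrative.
sealed class Trigger {
    data class Original(val eventName: String) : Trigger()                              // first target event as-is
    data class Conditional(val eventName: String, val condition: String) : Trigger()    // condition parameter met
    data class Timed(val eventName: String, val elapsedMs: Long) : Trigger()            // time parameter met
}

fun dispatch(eventName: String, conditionMet: Boolean, elapsedMs: Long, thresholdMs: Long): Trigger =
    when {
        elapsedMs >= thresholdMs -> Trigger.Timed(eventName, elapsedMs)                 // third target event
        conditionMet             -> Trigger.Conditional(eventName, "preset condition")  // second target event
        else                     -> Trigger.Original(eventName)                         // first target event
    }
```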
In some possible embodiments, the transmitting module is configured to perform:
if the first target event is monitored, generating a delay timer;
if the timing parameter on the delay timer meets a first preset time parameter, determining to generate a third target event;
and sending a first time trigger corresponding to the third target event through the event listener.
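A minimal sketch of the delay-timer path, assuming a one-shot java.util.Timer and a callback that forwards the trigger to the event listener; the function and trigger names are hypothetical.

```kotlin
import java.util.Timer
import kotlin.concurrent.schedule

// Hypothetical sketch: when the first target event is monitored, a one-shot timer
// fires the first time trigger after the preset delay. Names are illustrative.
fun startDelayTrigger(
    firstPresetDelayMs: Long,
    sendTimeTrigger: (String) -> Unit   // e.g. forwards the trigger to the event listener
): Timer {
    val timer = Timer()
    timer.schedule(firstPresetDelayMs) {
        // The timing parameter now meets the first preset time parameter,
        // so the third target event is considered generated.
        sendTimeTrigger("first_time_trigger")
    }
    return timer   // keep a reference so it can be destroyed when playback stops
}
```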
In some possible embodiments, the transmitting module is configured to perform:
if the first target event is monitored, generating an interval timer;
if the interval time parameter on the interval timer meets a second preset time parameter, determining to generate a third target event and resetting the interval timer;
and sending a second time trigger corresponding to the third target event through the event listener.
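The interval-timer path could be sketched similarly with a repeating timer; here the automatic rescheduling of kotlin.concurrent.fixedRateTimer stands in for "resetting the interval timer", and the names are assumptions.

```kotlin
import java.util.Timer
import kotlin.concurrent.fixedRateTimer

// Hypothetical sketch: after the first target event is monitored, the second time
// trigger is sent each time the preset interval elapses. Names are illustrative.
fun startIntervalTrigger(
    secondPresetIntervalMs: Long,
    sendTimeTrigger: (String) -> Unit
): Timer = fixedRateTimer(initialDelay = secondPresetIntervalMs, period = secondPresetIntervalMs) {
    // Each elapsed interval is treated as a newly generated third target event.
    sendTimeTrigger("second_time_trigger")
}
// The returned Timer should be cancelled when playback of the object stops.
```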
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 8 is a block diagram illustrating an apparatus 2000 for object processing in accordance with an example embodiment. For example, the apparatus 2000 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, the apparatus 2000 may include one or more of the following components: a processing component 2002, a memory 2004, a power component 2006, a multimedia component 2008, an audio component 2010, an input/output (I/O) interface 2012, a sensor component 2014, and a communications component 2016.
The processing component 2002 generally controls the overall operation of the device 2000, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 2002 may include one or more processors 2020 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 2002 can include one or more modules that facilitate interaction between the processing component 2002 and other components. For example, the processing component 2002 may include a multimedia module to facilitate interaction between the multimedia component 2008 and the processing component 2002.
The memory 2004 is configured to store various types of data to support operation at the device 2000. Examples of such data include instructions for any application or method operating on device 2000, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 2004 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 2006 provides power to the various components of the device 2000. The power supply components 2006 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 2000.
The multimedia component 2008 includes a screen providing an output interface between the device 2000 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 2008 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device 2000 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
Audio component 2010 is configured to output and/or input audio signals. For example, audio component 2010 includes a Microphone (MIC) configured to receive external audio signals when apparatus 2000 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 2004 or transmitted via the communication component 2016. In some embodiments, audio assembly 2010 also includes a speaker for outputting audio signals.
The I/O interface 2012 provides an interface between the processing component 2002 and peripheral interface modules, which can be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 2014 includes one or more sensors for providing various aspects of state assessment for the device 2000. For example, sensor assembly 2014 may detect an open/closed state of device 2000, a relative positioning of components, such as a display and keypad of apparatus 2000, a change in position of apparatus 2000 or a component of apparatus 2000, the presence or absence of user contact with apparatus 2000, an orientation or acceleration/deceleration of apparatus 2000, and a change in temperature of apparatus 2000. The sensor assembly 2014 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 2014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 2014 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 2016 is configured to facilitate wired or wireless communication between the apparatus 2000 and other devices. The apparatus 2000 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 2016 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 2016 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 2000 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a storage medium comprising instructions, such as the memory 2004 comprising instructions, executable by the processor 2020 of the apparatus 2000 to perform the above-described method is also provided. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.

Claims (10)

1. An object processing method, comprising:
acquiring configuration data of an object to be displayed; the configuration data comprises scene configuration data and view configuration data;
determining a scene sequence having scene information based on the scene configuration data;
rendering the view in each scene in the scene sequence based on the view configuration data to obtain a rendered scene sequence;
adding the rendered scene sequence to a view container to obtain the object to be displayed;
responding to a playing instruction of the object to be displayed, and playing the object to be displayed;
in the playing process of the object to be displayed, if a target event in the object to be displayed is monitored, executing a behavior corresponding to the target event based on a trigger corresponding to the target event.
2. The object processing method according to claim 1, wherein before the obtaining of the configuration data of the object to be displayed, the method further comprises:
creating a scene controller;
determining a scene sequence with scene information based on the scene configuration data, and rendering a view in each scene in the scene sequence based on the view configuration data to obtain a rendered scene sequence, including:
sending the configuration data of the object to be displayed to the scene controller;
analyzing the configuration data of the object to be displayed by using the scene controller to obtain the scene configuration data and the view configuration data;
determining, with the scene controller, the scene sequence with scene information based on the scene configuration data;
rendering, by the scene controller, the view in each scene in the scene sequence based on the view configuration data to obtain the rendered scene sequence.
3. The object processing method of claim 2, wherein said determining the scene sequence with scene information based on the scene configuration data comprises:
determining feature information of the scene, display feature information of the scene and feature information among the scenes based on the scene configuration data;
and determining the scene sequence based on the feature information of the scene, the display feature information of the scene and the feature information among the scenes.
4. The object processing method according to claim 2, wherein the rendering the view in each scene in the sequence of scenes based on the view configuration data to obtain the rendered sequence of scenes comprises:
determining image-text parameters, control parameters and animation effect parameters in each scene in the scene sequence based on the view configuration data;
rendering the corresponding scene based on the image-text parameters, the control parameters and the animation effect parameters in each scene to obtain the rendered scene sequence.
5. The object processing method according to any one of claims 1 to 4, further comprising:
determining a target event on a scene in the sequence of scenes;
configuring a trigger of the target event;
and configuring a behavior operation instruction corresponding to the trigger on an instruction transfer device based on the trigger identification of the trigger.
6. The object processing method according to claim 5, wherein, in the playing process of the object to be displayed, if a target event in the object to be displayed is monitored, executing a behavior corresponding to the target event based on a trigger corresponding to the target event includes:
monitoring events on scenes in the scene sequence through an event monitor in the playing process of the object to be displayed;
if the target event is monitored, sending a trigger corresponding to the target event through the event monitor;
determining a trigger identifier of the trigger through an instruction transfer unit, and determining a behavior operation instruction based on the trigger identifier;
and executing the behaviors in the received behavior operation instruction through a behavior executor.
7. An object processing apparatus, comprising:
the data acquisition module is configured to execute the acquisition of configuration data of an object to be displayed; the configuration data comprises scene configuration data and view configuration data;
a scene construction module configured to perform determining a sequence of scenes with scene information based on the scene configuration data;
the view construction module is configured to render views in each scene in the scene sequence based on the view configuration data to obtain a rendered scene sequence;
the object generation module is configured to add the rendered scene sequence to a view container to obtain the object to be displayed;
a playing module configured to play the object to be displayed in response to a playing instruction of the object to be displayed;
and the event execution module is configured to execute a behavior corresponding to the target event based on a trigger corresponding to the target event if the target event in the object to be displayed is monitored in the playing process of the object to be displayed.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the object processing method of any one of claims 1 to 6.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the object processing method of any one of claims 1 to 6.
10. A computer program product, characterized in that the computer program product comprises a computer program, which is stored in a readable storage medium, from which at least one processor of a computer device reads and executes the computer program, causing the computer device to perform the object processing method according to any one of claims 1 to 6.
CN202210344776.XA 2022-03-31 Object processing method and device, electronic equipment and storage medium Active CN114860358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210344776.XA CN114860358B (en) 2022-03-31 Object processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210344776.XA CN114860358B (en) 2022-03-31 Object processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114860358A true CN114860358A (en) 2022-08-05
CN114860358B CN114860358B (en) 2024-06-21


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106792188A (en) * 2016-12-06 2017-05-31 腾讯数码(天津)有限公司 A kind of data processing method of live page, device and system
CN108961380A (en) * 2017-05-26 2018-12-07 阿里巴巴集团控股有限公司 Method for rendering graph and device
CN109040822A (en) * 2018-07-16 2018-12-18 北京奇艺世纪科技有限公司 Player configuration method and device, storage medium
CN112070863A (en) * 2019-06-11 2020-12-11 腾讯科技(深圳)有限公司 Animation file processing method and device, computer readable storage medium and computer equipment
CN112135161A (en) * 2020-09-25 2020-12-25 广州华多网络科技有限公司 Dynamic effect display method and device of virtual gift, storage medium and electronic equipment
CN112150586A (en) * 2019-06-11 2020-12-29 腾讯科技(深圳)有限公司 Animation processing method, animation processing device, computer readable storage medium and computer equipment
CN112235604A (en) * 2020-10-20 2021-01-15 广州博冠信息科技有限公司 Rendering method and device, computer readable storage medium and electronic device
CN113204722A (en) * 2021-03-30 2021-08-03 北京达佳互联信息技术有限公司 Page display method and device, electronic equipment and storage medium
CN113850898A (en) * 2021-10-18 2021-12-28 深圳追一科技有限公司 Scene rendering method and device, storage medium and electronic equipment
CN114040240A (en) * 2021-11-18 2022-02-11 北京达佳互联信息技术有限公司 Button configuration method, device, server and storage medium


Similar Documents

Publication Publication Date Title
CN107729522B (en) Multimedia resource fragment intercepting method and device
US20200007944A1 (en) Method and apparatus for displaying interactive attributes during multimedia playback
US10637804B2 (en) User terminal apparatus, communication system, and method of controlling user terminal apparatus which support a messenger service with additional functionality
CN110781080B (en) Program debugging method and device and storage medium
CN113268212A (en) Screen projection method and device, storage medium and electronic equipment
CN110971974B (en) Configuration parameter creating method, device, terminal and storage medium
CN111078325B (en) Application program running method and device, electronic equipment and storage medium
CN114691115A (en) Business process system generation method and device, electronic equipment and storage medium
CN112616053B (en) Transcoding method and device for live video and electronic equipment
CN110769311A (en) Method, device and system for processing live data stream
US10613622B2 (en) Method and device for controlling virtual reality helmets
CN112732250A (en) Interface processing method, device and storage medium
CN114860358B (en) Object processing method and device, electronic equipment and storage medium
CN110543265A (en) Page title bar generation method and device, electronic equipment and storage medium
CN113282268B (en) Sound effect configuration method and device, storage medium and electronic equipment
CN114860358A (en) Object processing method and device, electronic equipment and storage medium
CN111596980B (en) Information processing method and device
CN114268802A (en) Virtual space display method and device, electronic equipment and storage medium
CN113031781A (en) Augmented reality resource display method and device, electronic equipment and storage medium
CN111338961B (en) Application debugging method and device, electronic equipment and storage medium
CN114443160A (en) Message pushing method and device, electronic equipment and storage medium
CN107391128B (en) Method and device for monitoring virtual file object model vdom
CN108549570B (en) User interface updating method and device
CN114911477A (en) Event execution method and device, electronic equipment and storage medium
CN112581102A (en) Task management method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant