CN114328072A - Exposure data acquisition method and device - Google Patents
- Publication number: CN114328072A (application CN202011074718.7A)
- Authority
- CN
- China
- Prior art keywords
- target
- visual object
- exposure data
- display
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
The embodiments of the present application provide an exposure data acquisition method and device, relating to the field of computer technology. The method comprises: determining page information loaded by a target interface, the page information comprising at least one visual object; when a notification message triggered by a change in the display state of a visual object is monitored, determining a target visual object whose display state has changed among the at least one visual object; and when the display state of the target visual object meets an exposure condition, reporting target exposure data of the target visual object. This avoids the excessive labor cost of manually writing tracking code and the high later maintenance cost, and makes the exposure data acquisition process easy to implement.
Description
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to an exposure data acquisition method and device.
Background
In a world of massive information, more and more data can be obtained from the Internet, and collecting, analyzing and deeply mining large amounts of data can create enormous business opportunities. Industries such as e-commerce, tourism, Internet finance and enterprise services are busy building data index systems and user portraits that support fine-grained operations and guide business growth, which places ever higher requirements on data scale, richness, accuracy and timeliness. In data analysis, data collection comes first: the quality of the collected data directly determines whether the analysis is accurate. As enterprises' demand for data grows, point-burying (event tracking) technology has also been widely applied.
With the continuous expansion of services and of data analysis scenarios, data must be collected by embedding tracking points in more and more scenarios. Most point-burying work requires developers to manually insert tracking code into the service code, which is time-consuming and labor-intensive, and incurs excessive labor cost and high later maintenance cost.
Disclosure of Invention
The embodiments of the present application provide an exposure data acquisition method and device, which are used to report exposure data accurately while effectively reducing labor cost and later maintenance cost.
In one aspect, an embodiment of the present application provides an exposure data acquisition method, including:
determining page information loaded by a target interface, wherein the page information comprises at least one visual object;
when a notification message triggered by a change in the display state of a visual object is monitored, determining a target visual object whose display state has changed among the at least one visual object;
and when the display state of the target visual object meets the exposure condition, reporting the target exposure data of the target visual object.
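For illustration, the three claimed steps can be simulated in a minimal Python sketch; all class and function names here are hypothetical, and the "exposure condition" is simplified to "the object is now visible":

```python
# Hypothetical sketch of the claimed flow: (1) hold the page's visual
# objects, (2) on a state-change notification, find objects whose display
# state changed, (3) report those whose new state meets the exposure condition.

class VisualObject:
    def __init__(self, oid, displayed=False):
        self.oid = oid
        self.displayed = displayed
        self.prev_displayed = displayed

    def set_displayed(self, value):
        self.prev_displayed = self.displayed
        self.displayed = value

def collect_exposure(page_objects, reported):
    """On a state-change notification, report newly exposed objects."""
    for obj in page_objects:
        changed = obj.displayed != obj.prev_displayed
        if changed and obj.displayed:      # simplified exposure condition
            reported.append(obj.oid)

page = [VisualObject("A", displayed=True), VisualObject("B")]
page[1].set_displayed(True)   # B slides into view, triggering a notification
reported = []
collect_exposure(page, reported)
# reported now contains only "B": A's state did not change
```

Note that nothing is polled here: `collect_exposure` runs only when a notification arrives, which is the core difference from the timed-polling approach discussed later in the description.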
In one aspect, an embodiment of the present application provides an exposure data acquisition apparatus, including:
the page information determining unit is used for determining page information loaded by the target interface, and the page information comprises at least one visual object;
the target visual object determining unit is used for, when a notification message triggered by a change in the display state of a visual object is monitored, determining a target visual object whose display state has changed among the at least one visual object;
and the exposure data reporting unit is used for reporting the target exposure data of the target visual object when the display state of the target visual object meets the exposure condition.
Optionally, the exposure data reporting unit is further configured to:
and determining, based on the identification information and the position information of the target visual object, that the target exposure data of the target visual object does not exist in the cached exposure data, wherein the cached exposure data is determined according to the exposure data of other target visual objects that has already been reported.
Optionally, the exposure data reporting unit is further configured to:
and determining the target visual object as an interface sliding display object.
Optionally, the exposure data reporting unit is further configured to:
if the target visual object is determined to be an interface sliding display object, and the target exposure data of the target visual object is found, based on the identification information and the position information of the target visual object, to already exist in the cached exposure data, the target exposure data is not reported again; here, a sliding display object is a visual object that can be slid into display within the target interface.
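The deduplication rule above can be sketched as a cache keyed on (identification, position); the function and key layout below are illustrative assumptions, not the patent's concrete implementation:

```python
# Hypothetical sketch: suppress re-reporting a sliding-display object
# whose (identifier, position) pair is already in the exposure cache.

def should_report(obj_id, position, is_sliding, cache):
    key = (obj_id, position)
    if is_sliding and key in cache:
        return False       # already reported at this position: suppress
    cache.add(key)
    return True

cache = set()
first = should_report("item1", (0, 120), True, cache)   # first exposure
repeat = should_report("item1", (0, 120), True, cache)  # same id + position
moved = should_report("item1", (0, 480), True, cache)   # new position
# first and moved are True; repeat is False
```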
Optionally, the apparatus further comprises:
and the duration counting unit is used for, for any visual object having a page duration statistics tag, determining the display duration of the visual object according to a first moment at which the visual object gains the interface display focus in the target interface and a second moment at which the visual object loses the interface display focus, and reporting the display duration.
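The display-duration statistic reduces to subtracting the first moment (focus gained) from the second moment (focus lost); a minimal sketch with illustrative names:

```python
# Hypothetical sketch of the duration counting unit's arithmetic:
# display duration = second moment (focus lost) - first moment (focus gained).

def display_duration(first_moment, second_moment):
    if second_moment < first_moment:
        raise ValueError("focus lost before it was gained")
    return second_moment - first_moment

# object gains interface display focus at t=3.0 s and loses it at t=10.5 s
dur = display_duration(3.0, 10.5)   # 7.5 s on screen
```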
Optionally, the apparatus further comprises:
the screen turning rate statistic unit is used for determining a reference position in the target interface;
determining a sliding distance of the target page information based on a first visual object of which the target page information starts to slide and a second visual object of which the target page information stops sliding and is located at a reference position;
and determining the screen turning rate aiming at the target page information based on the sliding distance of the target page information and the interface height of the target interface.
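Following the description above, the screen-turning rate is the sliding distance of the page information divided by the interface height; the sketch below assumes pixel units, which the patent does not specify:

```python
# Hypothetical sketch of the screen-turning-rate statistic:
# sliding distance between the first and second visual objects,
# divided by the height of the target interface.

def screen_turn_rate(slide_distance_px, interface_height_px):
    return slide_distance_px / interface_height_px

# a 1920 px slide on a 960 px tall interface means two full screens turned
rate = screen_turn_rate(1920, 960)
```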
Optionally, the apparatus further comprises:
the non-target visual object determining unit is used for determining the non-target visual object with unchanged display state and the reporting time of the exposure data of the non-target visual object in at least one visual object;
and if the time difference between the reporting time and the current time meets the condition of reporting time again, reporting the exposure data of the non-target visual object.
Optionally, the page information determining unit is further configured to:
and replacing the management object in the page information with a state monitoring object, and monitoring the notification message through the state monitoring object.
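Replacing the management object with a state-monitoring object is essentially a proxy that forwards calls and emits the notification message on a state change; a hedged sketch with hypothetical class names:

```python
# Hypothetical sketch: a state-monitoring proxy substituted for the page's
# management object; it forwards state updates and fires a notification
# whenever the display state actually changes.

class ManagementObject:
    def set_state(self, obj, state):
        obj["state"] = state

class StateMonitoringProxy:
    def __init__(self, inner, on_change):
        self.inner = inner
        self.on_change = on_change

    def set_state(self, obj, state):
        old = obj.get("state")
        self.inner.set_state(obj, state)
        if old != state:
            self.on_change(obj, old, state)   # the notification message

events = []
mgr = StateMonitoringProxy(ManagementObject(),
                           lambda o, old, new: events.append((old, new)))
view = {"id": "v1", "state": "hidden"}
mgr.set_state(view, "displayed")
# events now records the hidden -> displayed transition
```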
In one aspect, an embodiment of the present application provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the exposure data acquisition method when executing the program.
In one aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program executable by a computer device, which, when the program runs on the computer device, causes the computer device to execute the exposure data acquisition method.
According to the exposure data acquisition method, when the display state of the visual object is changed in the page information loaded on the current target interface, the target visual object needing to report the exposure data can be determined, and when the display state of the target visual object meets the exposure condition, the target exposure data of the target visual object can be reported.
In this method, developers do not need to insert tracking-point code into the service code of the user terminal, and no complex code-writing process is required; the problems of excessive labor cost and high later maintenance cost are thus solved, and the exposure data acquisition process is easy to implement.
Further, in the embodiments of the present application, the display state of each visual object is determined only when a display state change occurs, rather than being repeatedly queried in real time. This reduces state queries during exposure data acquisition, reduces the volume of data processed, and improves the overall efficiency of exposure data acquisition.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without inventive effort.
Fig. 1 is a schematic diagram of an interface, page information, and a visual object provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of page information provided in an embodiment of the present application;
fig. 3 is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 4 is a schematic flowchart of an exposure data acquisition method according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating a state change of a visual object in a target interface according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a target interface provided by an embodiment of the present application;
fig. 7 is a schematic flowchart of an exposure data determining process and a reporting process according to an embodiment of the present application;
fig. 8 is a schematic flowchart of a group information exposure method according to an embodiment of the present disclosure;
fig. 9 is a schematic flowchart illustrating a process of setting different exposure strategies according to an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of a fixed display visualization object and a sliding display object according to an embodiment of the present application;
fig. 11 is a schematic flowchart of a process of determining whether to report exposure data based on identification information and location information according to an embodiment of the present application;
fig. 12 is a schematic flowchart illustrating a process of determining a display time of a visual object according to an embodiment of the present application;
fig. 13 is a schematic flowchart of determining a non-display time of a visualized object according to an embodiment of the present application;
fig. 14 is a schematic flowchart of determining a display duration according to an embodiment of the present application;
fig. 15 is a schematic flowchart of determining a display duration according to an embodiment of the present application;
fig. 16 is a schematic flowchart of determining a display duration according to an embodiment of the present application;
fig. 17 is a schematic flowchart of determining a first visual object and a second visual object according to an embodiment of the present application;
fig. 18 is a schematic structural diagram of an SDK according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of an exposure data acquisition apparatus according to an embodiment of the present application;
fig. 20 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solution and beneficial effects of the present application more clear and more obvious, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
For convenience of understanding, terms referred to in the embodiments of the present application are explained below.
Interface: refers to the display area of a terminal device. A terminal device is an electronic device having at least a display function, and may be a mobile or a fixed electronic device, for example a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, a Personal Digital Assistant (PDA), a point-of-sale (POS) terminal, or another electronic device capable of implementing the above functions.
Page information: refers to data information that is displayed, in whole or in part, in the interface. The data information may be text, image, voice, video or link information, which is not limited in the embodiments of the present application.
Visualization object: refers to the carrier of the data information of the page information. A visualization object may be a control in a page that encapsulates data and methods. For example, the visualization object may be a View object: View is the base class of all controls, and whether for a simple text object TextView, a control object Button, or a complex layout object LinearLayout or list object ListView, the common base class is View.
In this embodiment of the application, the visualization object may be displayed in the interface, that is, the state of the visualization object in the page may be a display state, or the visualization object may not be displayed in the interface, that is, the state of the visualization object in the page may be a hidden state.
Illustratively, the contents of the above-described interface, page information, and visualization objects are described below in conjunction with fig. 1.
In fig. 1, the display area of the terminal used by the user is the interface, in which web page information, i.e. page information, is displayed; only part of the entire web page information is shown.
Specifically, the page information not shown in fig. 1 is indicated by dotted lines. Each piece of page information is carried by a visual object: a visual object in the display state is displayed in the interface in fig. 1, while a visual object in the hidden state is not displayed in the interface.
In the embodiment of the application, the page information includes each visualization object, a hierarchical relationship exists between the visualization objects, and a plurality of visualization objects can form a visualization group. Illustratively, as shown in fig. 2, the composition of the page information includes a plurality of visualization objects, wherein the visualization object 2, the visualization object 5, and the visualization object 6 are visualization objects in the visualization group 1.
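The hierarchy of fig. 2, where visualization objects 2, 5 and 6 belong to visualization group 1, can be sketched as a simple tree; the class and node names below are illustrative, not from the patent:

```python
# Hypothetical sketch: page information as a tree of visualization objects,
# where a group node contains child objects (cf. fig. 2).

class VisNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def flatten(self):
        """Depth-first traversal, as a traversal of the page would see it."""
        yield self.name
        for child in self.children:
            yield from child.flatten()

group1 = VisNode("group1", [VisNode("obj2"), VisNode("obj5"), VisNode("obj6")])
page = VisNode("page", [VisNode("obj1"), group1, VisNode("obj3")])
names = list(page.flatten())
# depth-first order: page, obj1, group1, obj2, obj5, obj6, obj3
```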
Exposure: the display state of the visual object in the interface meets the set requirement.
Exposure data: the data refers to data reported after the display state of the visual object in the interface meets the set requirement, and the data may include attribute information, interface layout information and the like of the visual object, and may also include other information.
Life cycle: covers the process from a visual object being loaded into memory through being displayed and hidden. Different visual object types may define different life cycles. Taking View as an example, its life cycle comprises stages such as: the View is created; the View has been loaded; the View is about to be displayed; the View is about to be laid out; the View layout is complete; the View is fully displayed; the View is about to disappear; and the View has disappeared.
Callback function: a function invoked through a function pointer (address). If a pointer to a function is passed as a parameter to another function, and that other function uses the pointer to call the function it points to, the called function is a callback function. A callback function is not invoked directly by its implementer; instead, it is invoked by another party when a specific event or condition occurs, in order to respond to that event or condition.
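A minimal callback sketch in Python (function names are illustrative): the registrant hands over a function reference, and "another party" invokes it when the event fires:

```python
# Hypothetical sketch of the callback pattern described above.

def on_visibility_changed(obj_id, visible):
    """The callback: responds when a visual object's visibility changes."""
    return f"{obj_id} -> {'shown' if visible else 'hidden'}"

def event_dispatcher(callback):
    # "Another party" detects the event and calls back; the registrant
    # never invokes on_visibility_changed directly.
    return callback("view7", True)

result = event_dispatcher(on_visibility_changed)
# result is "view7 -> shown"
```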
Having introduced the above nouns, the concept of the present application will now be explained based on the problems that currently exist.
Big data analysis is a current trend in Internet technology, and data collection is a core issue in it. A relatively mature and widely adopted means of data access is the front-end point-burying technique, of which code point-burying is the most common form at present.
Code point-burying means that when a certain control operation occurs, data is sent through pre-written code. That is, in order to monitor the user's behavior on a website or in an App (Application), some program code needs to be added to each page of the website or App. Such program code is called monitoring code on a website, and an SDK (Software Development Kit) in an App.
Since point-burying requires a web page engineer (or App developer) to add dedicated monitoring code to each monitoring point, and the codes must correspond to the monitoring points one by one (each monitoring point is different, so the naming and attribute settings of its dedicated event-monitoring code differ, and each monitoring point needs its own code), with no errors or omissions allowed, the work is tedious, error-prone, and expensive to maintain.
Therefore, in view of the above problems, the inventors of the present application first conceived an exposure data acquisition method for acquiring exposure data of visual objects in an interface in a buried-point-free manner. Buried-point-free means that after a developer integrates the data collection SDK, the SDK directly captures, monitors and reports all user behaviors in the application, without the developer adding extra code; or, when interface elements are shown to the user, trigger events are bound through the controls, and when an event fires, the system exposes a corresponding interface for the developer to handle the behavior. The buried-point-free technique is not literally free of buried points; rather, the events or functions to be collected need not be defined in code beforehand, and engineers need not keep deploying code: once the client loads a piece of monitoring code, points are buried automatically in the page or application, key user behaviors are captured intelligently, and data is collected quickly.
The overall idea of the inventors' first buried-point-free exposure data acquisition method was to traverse each visual object in the web page information by timed polling, determine the display state of each visual object, and, when a visual object's display state has changed and meets the exposure condition, report that visual object's exposure data.
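This first, timed-polling idea can be sketched as follows (names and data shapes are illustrative); note that every tick traverses all objects even if nothing changed, which is exactly the performance problem identified below:

```python
# Hypothetical sketch of the rejected timed-polling approach: every tick,
# traverse all visual objects and diff their display states against the
# states remembered from the previous poll.

def poll_once(objects, last_states):
    """Return ids whose display state changed since the previous poll."""
    changed = []
    for oid, displayed in objects.items():
        if last_states.get(oid) != displayed:
            changed.append(oid)
        last_states[oid] = displayed
    return changed

states = {}
tick1 = poll_once({"a": True, "b": False}, states)   # first poll: all "new"
tick2 = poll_once({"a": True, "b": False}, states)   # nothing changed: wasted work
tick3 = poll_once({"a": False, "b": False}, states)  # a was hidden
```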
Therefore, the method for acquiring the exposure data without the buried point, which is conceived by the inventor of the application, can solve the problems of overhigh labor cost and high later maintenance cost in the method for determining the exposure data by the buried point technology.
However, when the inventors of the present application verified this buried-point-free exposure data acquisition method, they found that it suffers from performance loss.
In the above buried-point-free method, the display states of all visual objects are traversed by timed polling. For example, if the polling interval is set to 0.01 s, the display states of all visual objects are traversed every 0.01 s; but if no visual object's display state changes within that 0.01 s, the traversal yields no exposure data to report, so the work is wasted and performance is lost.
Therefore, based on the above problems, the inventors of the present application further conceived the exposure data acquisition method of the embodiments of the present application, which, upon determining that the display state of a visual object in the page information loaded on the current target interface has changed, determines the target visual object whose exposure data needs to be reported, and reports the target exposure data of the target visual object when its display state meets the exposure condition.
In this method, developers do not need to insert tracking-point code into the service code of the user terminal, and no complex code-writing process is required; the problems of excessive labor cost and high later maintenance cost are thus solved, and the exposure data acquisition process is easy to implement.
Further, in the embodiments of the present application, the display state of each visual object is determined only when a display state change occurs, rather than being repeatedly queried in real time. This reduces state queries during exposure data acquisition, reduces the volume of data processed, and improves the overall efficiency of exposure data acquisition.
Having described the concepts and advantages of the present application, a system architecture diagram is described below as used by the embodiments of the present application.
Referring to fig. 3, it is a system architecture diagram applicable to the embodiment of the present application, where the system architecture at least includes M terminal devices 301 and a server 302, the M terminal devices 301 are terminal devices 301-1 to terminal devices 301-M shown in fig. 3, M is a positive integer, and the value of M is not limited in the embodiment of the present application.
A client is installed in the terminal device 301, and the client is served by the server 302. The client in the terminal device 301 may be a browser client, a video application client, an application client such as a software store, etc. The client in the terminal device 301 is a client of each application, that is, each application can be run through the terminal device 301, and the exposure data of each application in the terminal device 301 is reported to the application server 302 corresponding to each application.
Terminal device 301 may include one or more processors 3011, memory 3012, I/O interface 3013 to interact with server 302, display panel 3014, and the like. The terminal device 301 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like.
Further, the terminal device 301 may install each application client actively (for example, by downloading it from a software store) or passively (pre-installed). After an application client is started, the device monitors the display state of each visual object included in the application page information displayed in its interface; when it determines that a display state has changed, it traverses the display states of all visual objects in the page information, determines the target visual object whose display state changed among the at least one visual object, and reports the exposure data of the target visual object when it determines that the target visual object's display state satisfies the exposure condition.
In this embodiment, the server 302 is a device providing computing power; it performs data analysis and statistics on the exposure data of visualization objects reported by the terminal devices 301. The server 302 may include one or more processors 3021, memory 3022, and an I/O interface 3023 for interacting with the terminal device 301, and may also be configured with a database 3024. The server 302 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms. The terminal device 301 and the server 302 may be directly or indirectly connected through wired or wireless communication, which is not limited in this application.
In this embodiment, after the server 302 performs data analysis and statistics, the result may be stored in the memory 3022, or may be stored in another storage device, which is not limited herein.
Illustratively, a software store client is installed in the terminal device 301, and when the terminal device 301 downloads and installs the software store application, the SDK is installed at the same time, so as to monitor application page information displayed in the interface of the terminal device 301 when the terminal device 301 starts the software store client, where the application page information includes a plurality of visual objects, and when it is monitored that the display state of a visual object changes, the display states of all visual objects included in the application page information are traversed, a target visual object whose display state changes is determined, and when it is determined that the display state of the target visual object meets an exposure condition, exposure data of the target visual object is reported to a server 302 corresponding to the software store application.
The server 302 determines that the application that the user is interested in when using the application of the software store is a sports App based on the exposure data reported by the terminal device 301, and then sends the plurality of sports apps to the terminal device 301 as push messages.
Of course, the method provided in the embodiment of the present application is not limited to the application scenario shown in fig. 3, and may also be used in other possible application scenarios, and the embodiment of the present application is not limited. The functions that can be implemented by each device in the application scenario shown in fig. 3 will be described in the following method embodiments, and will not be described in detail herein.
Based on the above design concept and the above application scenario, the exposure data acquisition method according to the embodiments of the present application is described in detail below.
As shown in fig. 4, an embodiment of the present application provides an exposure data acquisition method, which specifically includes:
step S401, determining page information loaded by the target interface, wherein the page information comprises at least one visual object.
Specifically, in the embodiments of the present application, the display interface of the terminal device currently used by the user serves as the target interface. Display information is loaded in the target interface, and this loaded display information is referred to as page information in the embodiments of the present application. The page information includes at least one visual object, and of course may also include other objects. The page information may be the information of any web page, or information to be displayed in any App.
In the embodiment of the application, the page information that the user can see in the target interface of the terminal device is composed of all visual objects, for example, session information of the instant messaging application is displayed in the target interface, each displayed session information is a visual object, and all function controls of the instant messaging application are also displayed in the target interface, and the function controls are also visual objects.
Of course, in the embodiment of the present application, the page information may further include other objects, which are not limited herein.
In the embodiment of the present application, a visual object has at least two states, one state is a display state, that is, a user can see the visual object in a target interface, for example, session information and control information displayed in the target interface; one state is a hidden state, i.e., page information that is not viewable by the user in the target interface.
In the embodiment of the present application, the state of the visualized object may be changed from the display state to the hidden state, or may be changed from the hidden state to the display state.
In an alternative embodiment, a visual object in the target interface may change from the hidden state to the display state after a set time. For example, the page information includes a visual object that is hidden in the target interface at a first time and is displayed in the target interface when a second time arrives.
In another optional embodiment, the page information displayed in the target interface may be the complete page information, or may be only part of the entire page information. That is, by sliding the page information, the user may change the state of a visual object already displayed in the target interface into the hidden state, or change the state of a visual object not yet displayed in the target interface into the display state.
Illustratively, fig. 5 shows a schematic diagram of the state change of the visualized object in the target interface.
The page information that the user first sees in the target interface consists of three text-type visualization objects, "football game comment article A", "football game news B" and "football star interview report C", and three control-type visualization objects, "friends are watching", "choiceness" and "<". For convenience of explanation, the visualization objects not yet displayed in the page information are represented by dotted lines; that is, the page information also includes the text-type visualization object "basketball game news C".
When a user browses page information in a page information sliding mode, the state of a visual object of a football game comment article A is changed from a display state to a hidden state, and the state of a visual object of a basketball game news C is changed from the hidden state to the display state.
In the embodiment of the present application, the target interface may load different page information, so it is further required to determine which specific page information is loaded by the current target interface.
Specifically, this may be determined through the identification information of the loaded page information, where the identification information is the unique representation of each piece of page information.
The identification information of a page can be obtained through an underlying monitoring instruction or a lifecycle callback function. That is, when a page needs to be loaded, its identification information can be obtained through the lifecycle callback function, and the specific page loaded in the target interface can thus be determined.
In an optional embodiment, in order to obtain the page information through the underlying monitoring instruction or lifecycle callback function, when the user downloads and installs the browser or the application, the monitoring authority is obtained or the lifecycle callback function is set by registering in the underlying layer of the terminal device.
Of course, in the embodiment of the present application, there are other ways to determine the page information, and the embodiment of the present application is not limited.
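The page-identification step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: a registry invokes lifecycle callbacks when a page loads, and the collector records the page identifier. All class and function names here are assumptions for illustration.

```python
# Hypothetical sketch: a registry fires lifecycle callbacks when a page
# starts loading, letting the collector record which page information the
# target interface currently holds.  Names are illustrative only.
class PageLifecycleRegistry:
    def __init__(self):
        self._callbacks = []

    def register(self, callback):
        """Register a callback fired with the page identifier on load."""
        self._callbacks.append(callback)

    def notify_page_loaded(self, page_id):
        for cb in self._callbacks:
            cb(page_id)


class ExposureCollector:
    def __init__(self, registry):
        self.current_page_id = None
        registry.register(self._on_page_loaded)

    def _on_page_loaded(self, page_id):
        # The identifier is the unique representation of each page.
        self.current_page_id = page_id


registry = PageLifecycleRegistry()
collector = ExposureCollector(registry)
registry.notify_page_loaded("im_session_page")
print(collector.current_page_id)  # -> im_session_page
```

Registering the callback once at install time corresponds to the optional embodiment above, in which the monitoring permission is obtained or the callback is set when the browser or application is installed.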
In the embodiment of the present application, after the page information is determined, all objects included in the page information are also obtained, and the objects may be visual objects, that is, objects that can be displayed in the target interface, or other objects that are not displayed in the target interface.
In an optional embodiment, each object included in the page information has a hierarchical or nested relationship, and after the page information is obtained, the hierarchical or nested relationship between the objects can also be obtained.
Optionally, in this embodiment of the application, a tree structure, namely an object relationship tree, may be used to describe the relationship between the objects. After the attribute information of the page information is determined, the object relationship tree may be obtained, and the information of each object and the relationships between the objects may be determined according to the object relationship tree.
In the embodiment of the present application, the concept of an object group may also be characterized in the object relationship tree, that is, a plurality of objects have a common characteristic, and the plurality of objects form an object group. Illustratively, when the page information loaded by the target interface is page information of an instant messaging application, the instant messaging application has a plurality of functional modules, including a session module, an address book module and a friend dynamic module, each module includes a plurality of objects, the objects have relevance, and each module is represented as an object group.
For example, fig. 6 shows the page information of the instant messaging application presented in the target interface, where the page information corresponding to the session module is currently displayed. The session module includes three session records, each session record includes an avatar visualization object and a session text visualization object, and all the visualization objects in the session module serve as the objects in the object group corresponding to the session module.
In an alternative embodiment, each object belonging to the same object group has a group identifier, and each corresponding object group to which it belongs can be determined from the group identifiers.
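The object relationship tree with group identifiers can be sketched as below. This is an illustrative model under assumed names, not the patent's data structure: each node may be a visual object and may carry a group identifier, and a traversal collects the objects belonging to a given group.

```python
# Illustrative model of the object relationship tree: each node may be a
# visual object and may carry a group identifier; traversal collects the
# objects belonging to one object group.  All names are assumptions.
class PageObject:
    def __init__(self, name, visual=True, group_id=None, children=None):
        self.name = name
        self.visual = visual
        self.group_id = group_id
        self.children = children or []

    def walk(self):
        """Depth-first traversal of the object relationship tree."""
        yield self
        for child in self.children:
            yield from child.walk()


def objects_in_group(root, group_id):
    return [n.name for n in root.walk()
            if n.visual and n.group_id == group_id]


# A session module whose avatar and text objects share one group identifier.
root = PageObject("root", visual=False, children=[
    PageObject("avatar_1", group_id="session_module"),
    PageObject("text_1", group_id="session_module"),
    PageObject("back_control"),  # visual, but not in the object group
])
print(objects_in_group(root, "session_module"))  # -> ['avatar_1', 'text_1']
```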
Step S402, when a notification message triggered when the display state of the visual object changes is monitored, determining a target visual object with a changed display state in at least one visual object.
Specifically, in the embodiment of the present application, when the notification message is monitored, a target visual object with a changed display state is determined from among the at least one visual object.
In this embodiment of the application, the notification message may be triggered when the display state of any one of the visualization objects changes, or may be triggered only when the display state of a specified visualization object changes; this may be set according to the service of acquiring exposure data, and is not limited in this embodiment of the application.
In the embodiment of the present application, the change in the display state of the visualization object may represent that the visualization object exists and changes from the hidden state to the display state, or may represent that the visualization object exists and changes from the display state to the hidden state.
Further, when a notification message triggered when the display state of the visualization object changes is monitored, it indicates that at least one visualization object currently has a change in display state, and the number of the visualization objects having a change in display state is not limited.
In the embodiment of the application, the notification message can be monitored through a bottom layer interface, and can also be monitored through a life cycle callback function.
In an alternative embodiment, in order to be able to monitor the notification message through the lifecycle function, the management object in the page information may be replaced with a status monitoring object, and the notification message may be monitored by the status monitoring object.
That is, the page information may further include a management object, where the management object is used to manage each object included in the page information, and generally, the management object is a non-visual object.
In a specific embodiment, the page information is managed by a root object to manage other objects included in the page information, so that the root object can be replaced by a state monitoring object, and the notification message is monitored by the state monitoring object.
In the embodiment of the application, by replacing the root object with the state monitoring object, the notification message can be monitored directly from the bottom layer without calling each interface separately, which reduces the access cost.
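The root-object replacement described above can be sketched as an observer wrapper. This is a minimal sketch under assumed names: the state-monitoring object stands in for the original management object, so display-state changes are observed in one place.

```python
# Sketch of replacing the root (management) object with a state-monitoring
# object: display-state changes are observed centrally, without hooking
# every interface.  Class and method names are hypothetical.
class RootObject:
    def __init__(self):
        self.states = {}

    def set_display_state(self, object_id, state):
        self.states[object_id] = state


class StateMonitoringObject(RootObject):
    def __init__(self, listener):
        super().__init__()
        self._listener = listener

    def set_display_state(self, object_id, state):
        old = self.states.get(object_id, "hidden")
        super().set_display_state(object_id, state)
        if old != state:  # trigger the notification message on any change
            self._listener(object_id, old, state)


events = []
root = StateMonitoringObject(lambda oid, old, new: events.append((oid, old, new)))
root.set_display_state("article_A", "displayed")
root.set_display_state("article_A", "hidden")
print(events)
# -> [('article_A', 'hidden', 'displayed'), ('article_A', 'displayed', 'hidden')]
```

Both directions of change (hidden to displayed and displayed to hidden) produce a notification, matching the two cases described above.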
In this embodiment of the application, a target visualization object with a changed display state may be determined by traversing all objects included in the page information, and by combining the contents in the above embodiments, the target visualization object may be a visualization object that is changed from a display state to a hidden state, or may be a visualization object that is changed from a hidden state to a display state.
In an alternative embodiment, all objects included in the page information may be traversed based on the object tree, and when all objects are traversed, information of an object group corresponding to each object can be determined.
And step S403, reporting target exposure data of the target visual object when the display state of the target visual object meets the exposure condition.
Specifically, in the embodiment of the present application, the exposure condition may be set based on the exposure data collection service, or may be set in other manners.
When the display state of the target visual object meets the preset exposure condition, the target visual object can be regarded as a visual object required by the exposure data acquisition task, and its exposure data is reported.
In the embodiment of the present application, the exposure condition may be a display time of the target visualization object, may also be a display area of the target visualization object, and may also consider the display time and the display area of the target visualization object at the same time.
Of course, the exposure condition may be other limiting conditions, such as a display position, and is not limited herein.
Further, in the embodiment of the present application, different exposure conditions may be set for different target visualization objects, for example, for a target visualization object 1, an exposure condition 1 is set, the exposure condition 1 includes a display time 1 and a display area 1, for a target visualization object 2, an exposure condition 2 is set, the exposure condition 2 includes a display time 2 and a display area 2, and the display time 1 is different from the display time 2, and the display area 1 is different from the display area 2.
Optionally, in this embodiment of the application, after exposure conditions are set for different visual objects, the exposure conditions corresponding to the identification information of the visual objects may be stored based on the identification information of the visual objects, and when it is required to determine whether the display state of the target visual object meets the exposure conditions, the corresponding exposure conditions may be obtained based on the identification information of the target visual object.
In this embodiment of the application, the acquired display state of the target visualization object corresponds to the exposure condition: when the exposure condition includes a display time, the acquired display state is the display time of the target visualization object; similarly, when the exposure condition includes a display area, the acquired display state is the display area of the target visualization object.
In the embodiment of the application, when the display state of the target visual object is determined to be matched with the corresponding target exposure condition, the target exposure data of the target visual object is reported; and if the display state of the target visual object is determined not to be matched with the corresponding target exposure condition, not reporting the target exposure data of the target visual object.
Specifically, in this embodiment of the application, the target exposure data may be data such as a display duration and a display area, and may also include information such as an attribute and a page position of the target visualization object, which is not specifically limited herein.
Based on the above description, the determination process and the reporting process are explained in detail below.
Specifically, as shown in fig. 7, a visualization object is taken as a View for explanation. When traversing the Views included in all the page information, the traversal is performed based on the View tree corresponding to the page information. Further, the exposure condition here includes two pieces of information, namely a display area and a display duration, and both the current display area and the display duration of the View need to be considered.
For any View, when the View is traversed, it is determined whether a dedicated exposure condition is set for the View. In fig. 7, the display area included in the exposure condition corresponding to the View is queried first: if a dedicated display area is set, the set display area is acquired; otherwise, the display area in the general exposure condition is acquired.
It is then determined whether the display area of the View matches the area in the exposure condition; if so, the display duration is further judged. The judging method for the display duration is similar to that for the display area and is not repeated here.
When the display area of the View is matched with the area in the exposure condition, and the display duration of the View is matched with the duration in the exposure condition, reporting the exposure data of the View; and when the display area of the View is not matched with the area in the exposure condition, or the display duration of the View is not matched with the duration in the exposure condition, not reporting the exposure data of the View.
Specifically, as shown in fig. 7, the method includes:
step S701, traversing the View based on the View tree;
step S702, determining whether the View has a set display area condition, if so, executing step S703, otherwise, executing step S704;
step S703, acquiring a set display area condition;
step S704, obtaining a View display area;
step S705, determining whether the View display area meets the condition, wherein the condition is the dedicated display area condition if one is set, and the general display area condition otherwise; if so, executing step S706, otherwise, executing step S707;
step S706, determining whether the View has a set display duration condition, if so, executing step S708, otherwise, executing step S709;
step S707, the exposure data of the View is not reported;
step S708, acquiring a set display duration condition;
step S709, obtaining the View display duration;
step S710, determining whether the View display duration meets the condition, wherein the condition is the dedicated display duration condition if one is set, and the general display duration condition otherwise; if so, executing step S711, otherwise, executing step S707;
and step S711, reporting View exposure data.
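The flow of steps S701 to S711 can be sketched as follows. This is a minimal sketch, with illustrative data shapes and threshold values that are assumptions rather than values from the patent: each View's display area and display duration are checked against its dedicated exposure condition when one is set, falling back to the general condition otherwise.

```python
# Sketch of steps S701-S711: per-View dedicated exposure conditions fall
# back to a general condition; the area check precedes the duration check.
# Field names and threshold values are illustrative assumptions.
GENERAL_CONDITION = {"min_area": 0.5, "min_duration": 1.0}


def meets_exposure_condition(view, dedicated_conditions):
    cond = dedicated_conditions.get(view["id"], GENERAL_CONDITION)
    if view["visible_area"] < cond["min_area"]:          # S705 fails
        return False                                     # S707: no report
    if view["display_duration"] < cond["min_duration"]:  # S710 fails
        return False                                     # S707: no report
    return True                                          # S711: report


def traverse_and_report(view_tree, dedicated_conditions):
    reported = []
    for view in view_tree:                               # S701: traverse
        if meets_exposure_condition(view, dedicated_conditions):
            reported.append(view["id"])
    return reported


views = [
    {"id": "v1", "visible_area": 0.8, "display_duration": 2.0},
    {"id": "v2", "visible_area": 0.8, "display_duration": 0.5},  # too short
]
print(traverse_and_report(views, {"v2": {"min_area": 0.3, "min_duration": 1.5}}))
# -> ['v1']
```

Storing the dedicated conditions keyed by the View's identification information matches the storage scheme described earlier for exposure conditions.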
In the embodiment of the present application, since there may be group objects in the page information, after the target exposure data of the target visualization object is reported, it is further required to determine whether the target visualization object is a group object in a target group.
The target group may be a set group or any group, and is not limited herein.
In an optional embodiment, it may be determined whether the target visualization object is a group object in the target group based on the group identifier of the target visualization object, and if the target visualization object is the group object in the target group, the group exposure data of the target group is reported. Optionally, if the target visualization object is not a group object in the target group, the report is not performed.
In the embodiment of the present application, the group exposure data may be the same as or different from the exposure data of the group object in the target group.
Illustratively, the target group is an object group formed by a plurality of session text visualization objects in the instant messaging application, and the exposure data of the target group may be the same as that of each session text visualization object, or may differ from it, for example by including session background information.
Further, in the embodiment of the present application, since one target group includes a plurality of group objects, there may be a plurality of group objects reporting exposure data of the target group, so to avoid the problem of repeated reporting, it is also necessary to determine whether exposure data has been reported for the target group.
Therefore, in the embodiment of the present application, when traversing each visual object, it is determined whether the visual object belongs to a target group. If not, the exposure data acquisition method for non-target-group objects is used: when the display state of the visual object satisfies the exposure condition, the exposure data of the visual object is reported. If the visual object does belong to a target group, the exposure data of the target group is additionally reported, provided that it has not already been reported by another group object.
For example, as shown in fig. 8, a visualization object is taken as a View for an exemplary illustration, and for each View, group information corresponding to the View is determined, that is, if the View is a target group object, the View has a target group identifier, and if the View is not the target group object, the View does not have the target group identifier. In the embodiment of the present application, the target group is represented by GroupView.
In fig. 8, after the display status of View meets the exposure condition, it is continuously determined whether the View has a GroupView identifier, and if the View has the GroupView identifier, it is continuously determined whether the exposure data of the GroupView has been reported; and if the View does not have the identifier of the GroupView, reporting the exposure data of the View.
And when determining that the exposure data of the GroupView is reported, only reporting the exposure data of the View, and when determining that the exposure data of the GroupView is not reported, reporting the exposure data of the View and the exposure data of the GroupView.
In the embodiment of the application, whether the exposure data of the GroupView has been reported can be judged based on cache information. In the embodiment of the application, each reporting action is cached, and the cache may be keyed by the identification information of the target visual object and the identification information of the target group. Therefore, whether to report the exposure data of the GroupView can be determined from the cache based on the identification information of the target group.
In this embodiment of the application, for convenience of description, the exposure data acquisition method in which a View also reports the exposure data of its GroupView is referred to below as the group exposure policy; when a View is not an object in any GroupView, the exposure data acquisition method in which only the View's own exposure data is reported is referred to as the conventional exposure policy. The following is an exemplary description of how to determine whether to add the group exposure policy identifier to a View.
As shown in fig. 9, each View is traversed and its exposure policy is determined. If the exposure policy of the View is the group exposure policy, the View is added to a GroupView and the identifier of that GroupView is set for the View; if the exposure policy of the View is the conventional exposure policy, no GroupView identifier needs to be set, and whether the exposure data needs to be reported is determined through the conventional exposure policy.
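The group exposure policy of figs. 8 and 9 can be sketched as below. This is a minimal sketch under assumed names: when a View carrying a GroupView identifier meets its exposure condition, the group's exposure data is reported once, deduplicated through a cache of reported groups.

```python
# Sketch of the group exposure policy: a View always reports its own
# exposure data; the GroupView's exposure data is reported only once per
# group, deduplicated via a cache keyed by the group identifier.
# Names are illustrative assumptions.
def report_with_group(view_id, group_id, reported_groups, log):
    log.append(("view", view_id))            # the View's own exposure data
    if group_id is not None and group_id not in reported_groups:
        reported_groups.add(group_id)        # cache the reporting action
        log.append(("group", group_id))      # group exposure data, once


log, reported_groups = [], set()
report_with_group("text_1", "session_module", reported_groups, log)
report_with_group("text_2", "session_module", reported_groups, log)  # group cached
report_with_group("back_control", None, reported_groups, log)        # conventional policy
print(log)
# -> [('view', 'text_1'), ('group', 'session_module'), ('view', 'text_2'), ('view', 'back_control')]
```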
Further, in the embodiment of the present application, in addition to determining whether exposure data of a target group is reported, it may also be determined whether there is a situation that exposure data of a target visualization object is repeatedly reported.
In the embodiment of the application, whether exposure data of the target visual object has been reported or not can be inquired in the cache based on the identification information of the target visual object; further, in the embodiment of the present application, in order to represent the uniqueness of the target visual object more accurately, the query may be performed according to the unique identifier based on the identification information and the location information of the target visual object, and similarly, the cache is also stored based on the identification information and the location information of the target visual object.
In this embodiment of the application, the position information refers to a position of the target visualization object in the page, and illustratively, the position of the target visualization object in the page may be represented by coordinates or may be represented by other position information, which is not limited in this embodiment of the application.
Further, in the embodiment of the present application, part of the page information displayed in the target interface is fixed; that is, only after the page information is switched does the fixedly displayed part change. Therefore, the state of a fixedly displayed visual object changes only once before the page information is switched, namely from the hidden state to the display state.
Therefore, in the embodiment of the present application, when performing a cache query with the identification information and the position information of the target visual object as the unique identification, it is necessary to consider whether the target visual object belongs to the fixed display portion. If so, no cache query is required, because the position information of an object in the fixed display portion does not change.
If not, the target visualization object is an object that can be displayed in a sliding manner in the target interface; since its display position in the interface changes, it is necessary to determine whether the object has been cached before reporting its exposure data.
Specifically, this is described below with reference to a specific example. As shown in fig. 10, fixedly displayed visual objects exist in the target interface, namely the two visual objects "friends are watching" and "choiceness" in fig. 10. "Friends are watching" represents one piece of page information; before it is switched to the page information represented by "choiceness", "friends are watching" serves as a fixedly displayed visual object in the page information it represents. When the page information represented by "friends are watching" is switched to the page information represented by "choiceness", "choiceness" serves as a fixedly displayed visual object in the page information it represents.
In fig. 10, in addition to the fixed display visualization object, there is a slide display object that can be slide-displayed in the target interface, and when the user slides page information, the slide display object also slides. The position of the sliding display process of the sliding display object in the target interface is changed, and the position of the sliding display object in the page information is not changed.
In fig. 10, the "football game news B" sliding display object remains displayed in the target interface as the user slides; when its exposure data needs to be reported, it is determined whether the exposure data of the "football game news B" sliding display object has already been cached.
Further, in the embodiment of the present application, the position of the "football game comment article A" sliding display object in the target interface changes: as the user slides, it changes from the display state to the hidden state and back from the hidden state to the display state. If the interval between such display state changes is less than a set time, the user is likely just searching for interesting content, and repeatedly reporting the exposure data of this object has no reference meaning. Therefore, before reporting the exposure data of the "football game comment article A" sliding display object, the last reporting time of this object also needs to be determined, so as to decide whether its exposure data needs to be reported.
Therefore, when the exposure data of the "football game comment article A" sliding display object is to be reported, the last reporting time of its exposure data needs to be determined first. If the interval of the display state change is less than the set time, no report is made; if it is not less than the set time, it is further determined whether the exposure data already exists in the cache.
In summary, in the embodiment of the present application, whether the exposure data of a visual object already exists in the cache may be determined according to its identification information alone. Further, whether the visual object is a sliding display object also needs to be determined: if it is, whether the exposure data exists in the cache is determined based on both the identification information of the visual object and its position information in the page.
The cache query method further includes determining the time difference between two reports of the exposure data: if the time difference is less than the set time, no report is made; if it is not less than the set time, it is then determined whether the exposure data exists in the cache.
Specifically, as shown in fig. 11, a process of determining whether to report exposure data based on the identification information and the position information is described by a flowchart.
In fig. 11, the visualization object is again taken as a View whose display state, after judgment, already meets the exposure condition. It is determined whether the View is a sliding display object; if so, the identification information of the View and its position information in the page are obtained, and the cache information is pulled. Whether the exposure data of the View already exists in the cache is then determined based on the identification information and the position information: if so, no report is made, and if not, the data is reported.
If the View is not a sliding display object, whether exposure data of the View already exists in a cache can be determined based on the identification information, if so, the View is not reported, and if not, the View is reported.
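The deduplication logic of figs. 10 and 11 can be sketched as follows. This is a minimal sketch: the time-interval check is applied first, sliding display objects are then cached under the pair (identifier, page position), and the interval value and all names are assumptions for illustration.

```python
# Sketch of the cache query in figs. 10-11: reports closer together than a
# set interval are dropped; sliding display objects are deduplicated under
# (identifier, page position).  The interval and names are assumptions.
MIN_INTERVAL = 2.0  # set time between two reports of the same object


def should_report(view_id, position, is_sliding, now, cache, last_report):
    last = last_report.get(view_id)
    if last is not None and now - last < MIN_INTERVAL:
        return False                      # interval below the set time
    if is_sliding:
        key = (view_id, position)         # identifier plus page position
        if key in cache:
            return False                  # already reported at this position
        cache.add(key)
    last_report[view_id] = now
    return True


cache, last_report = set(), {}
print(should_report("news_B", (0, 120), True, 10.0, cache, last_report))  # -> True
print(should_report("news_B", (0, 120), True, 10.5, cache, last_report))  # -> False (too soon)
print(should_report("news_B", (0, 120), True, 15.0, cache, last_report))  # -> False (cached)
```

For a fixed-display object, `is_sliding` is False and the positional cache query is skipped, as described above.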
In the embodiment of the present application, an optional embodiment is further provided, when traversing the display state of each visualization object, processing may be performed on a non-target visualization object whose display state has not changed.
In the embodiment of the application, the reporting time of the exposure data of the non-target visual object is determined, and if the time difference between that reporting time and the current time satisfies the re-reporting condition, the exposure data of the non-target visual object can be reported again.
That is to say, in the embodiment of the present application, for a visual object that has been displayed in the target interface and has reported exposure data, after a set time has elapsed, the exposure data may be reported again, so as to ensure the accuracy of exposure data acquisition.
In the embodiment of the application, in addition to acquiring and reporting the target exposure data of the target visualization object, data of other visualization objects may be determined, for example, information such as display duration of each object in a page may be determined.
In the embodiment of the application, the display duration of the visual object can be determined based on the display focus of the visual object in the target interface.
Specifically, for any visual object, if it is determined that the visual object has the page duration statistical tag, the display duration of the visual object is determined according to a first moment when the visual object has an interface display focus in the target interface and a second moment when the visual object does not have the interface display focus in the target interface, and the display duration is reported.
In the embodiment of the present application, obtaining the information with display focus of the visualization object may be determined by a lifecycle callback function of the visualization object.
In the embodiment of the present application, there are a plurality of visual objects, so it is necessary to determine, for each visual object, the moment at which it gains focus and becomes visible and the moment at which it loses focus.
In another alternative embodiment, the display duration does not need to be counted for all the visual objects, only the set visual objects need to be counted, and the set visual objects can be determined by setting the tags. The tag may be a statistical tag or other tags, which are not limited in the embodiments of the present application.
Further, after the focus of the visual object is determined, whether the visual object has the statistical tag or not can be determined, if the statistical tag exists, the display duration of the visual object is counted, and if the statistical tag does not exist, the display duration of the visual object does not need to be counted.
In the embodiment of the application, in order to ensure that the display duration of the visual object is counted accurately, it is further required to determine whether the visual object is actually displayed in the target interface; if not, the display duration of the visual object cannot be counted.
Exemplarily, as shown in fig. 12, the step of determining the display time of the visual object is described by taking the visual object as a View.
Specifically, in fig. 12, the display information of the View is obtained through the lifecycle callback function attachToRootView.
It is determined whether the View has focus; if not, the display starting time of the View is not determined; if so, it is judged whether the View has a statistical label, where in the embodiment of the application the statistical label indicates that the View is treated as a page. When the View does not have the statistical label, the display starting time of the View is determined; if the View has the statistical label, it is further determined whether the View is visible in the target interface: if the View is invisible, the display starting time of the View is not determined, and if the View is visible, the display starting time of the View is determined.
Based on the same reasoning, the determination process when determining that View does not have focus can be as shown in FIG. 13.
Similarly, the display information of the View is obtained through the lifecycle callback function attachToRootView. It is determined whether the View has the statistical label; if not, the hidden time of the View is determined. If so, it is determined whether the View has focus; if not, the hidden time of the View is determined. If the View has focus, it is further judged whether the View is visible in the target interface: if the View is visible, the hidden time of the View is not determined; if not, the hidden time of the View is determined.
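The decision flows of figs. 12 and 13 can be sketched as follows (a minimal Python model for illustration only; the field and function names are assumptions, not part of the embodiment):

```python
from dataclasses import dataclass

@dataclass
class View:
    has_focus: bool
    has_stat_tag: bool   # the "statistical label" marking the View as a page
    visible: bool        # whether the View is visible in the target interface

def record_show_time(view: View) -> bool:
    """Fig. 12 flow: decide whether to record the View's display starting time."""
    if not view.has_focus:
        return False          # no focus -> display starting time not determined
    if not view.has_stat_tag:
        return True           # no statistical label -> record directly
    return view.visible       # a labeled View must also be visible

def record_hide_time(view: View) -> bool:
    """Fig. 13 flow: decide whether to record the View's hidden time."""
    if not view.has_stat_tag:
        return True           # no statistical label -> record hidden time
    if not view.has_focus:
        return True           # lost focus -> record hidden time
    return not view.visible   # focused but invisible -> record hidden time
```

With these two moments recorded, the display duration is simply the hidden time minus the display starting time, as described above.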
In the embodiment of the application, in addition to determining the display duration of the visualized object, the display durations of other objects in the target interface may also be determined; for example, the overall display duration of the whole page information in the target interface and the display duration of part of the page information in the target interface may be counted.
For example, in the target interface, the whole page information is composed of a plurality of partial page information, and the display duration of the whole page information may be counted, or the display duration of each partial page information may be counted.
Specifically, the display duration of the whole page information may be counted based on a page period, that is, the time from the page loading to the page loading stop is taken as the statistical duration of the whole page information. In the embodiment of the application, the page information can be determined through a life cycle callback function of the page information.
For each piece of partial page information, a display duration of the partial page information may be determined based on time information of the piece of partial page information that is visible to invisible from the target interface. In the embodiment of the application, the page information of the partial page can be determined through the life cycle callback function of the partial page information.
For ease of understanding, the whole page information is represented by an Activity and a Fragment is taken as the partial page information, and the process of counting the display duration is specifically explained.
An Activity represents a single screen with a user interface, such as a window or frame. An Activity can be understood as follows: the interface a user opens in an App is called an Activity; it provides interaction between the user and the screen for convenient operation, and an Activity may fill the whole screen or cover only part of it.
Activities are managed in the system by the Activity stack. When a new Activity starts, it is pushed onto the top of the stack and becomes the running Activity. The previous Activity remains in the stack below the new Activity and is not moved to the foreground until the new Activity exits.
In the embodiment of the application, the time of the Activity page entry event and the time of the Activity page exit event are determined.
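The back-stack behavior described above can be modeled minimally (an illustrative Python sketch, not the Android implementation; the class and method names are assumptions):

```python
class ActivityStack:
    """Minimal model of the Activity stack: the top entry is the running Activity."""

    def __init__(self) -> None:
        self._stack = []

    def start(self, activity: str) -> None:
        # A newly started Activity is pushed onto the top and becomes running.
        self._stack.append(activity)

    def finish(self) -> str:
        # When the top Activity exits, the one below returns to the foreground.
        return self._stack.pop()

    def running(self) -> str:
        return self._stack[-1]

stack = ActivityStack()
stack.start("MainActivity")
stack.start("DetailActivity")  # MainActivity stays below until this one exits
```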
In the embodiment of the present application, a Fragment is a part of an Activity, which makes the Activity design more modular.
A Fragment exists depending on an Activity, and multiple Fragments may be preloaded in one Activity, so the display duration of the current Fragment cannot be calculated simply from the Fragment lifecycle.
In this embodiment of the present application, the display duration of a Fragment may instead be determined based on the visibility of the Fragment; specifically, whether the Fragment starts (or ends) a page cycle may be determined through the lifecycle callback functions of the Fragment.
Similar to the process of determining the display duration of Activity, it is also necessary to determine the entry event time of Fragment and the exit event time of Fragment.
In the embodiment of the present application, because multiple Fragments may exist in an Activity and there may be a nested relationship between the Fragments, when a Fragment entry event occurs, the display states of the parent Fragment and the child Fragments need to be determined.
Specifically, in this embodiment of the present application, the time of the Fragment page entry event is: the current Fragment becomes visible (and, if it has a parent Fragment, the parent Fragment is also visible); the time of the Fragment page exit event is: the Fragment becomes invisible (if a parent Fragment is invisible, its child Fragments are also invisible by default).
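The visibility rule above — a Fragment is effectively visible only if it and its entire parent chain are visible — can be sketched as follows (an illustrative Python model; the names are assumptions):

```python
from typing import Optional

class Fragment:
    def __init__(self, name: str, visible: bool = True,
                 parent: Optional["Fragment"] = None):
        self.name = name
        self.visible = visible
        self.parent = parent

def effectively_visible(frag: Optional[Fragment]) -> bool:
    """A Fragment counts as visible only if it and every ancestor are visible;
    an invisible parent hides all of its child Fragments by default."""
    while frag is not None:
        if not frag.visible:
            return False
        frag = frag.parent
    return True
```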
The following specifically explains the flow of determining the display time length of each Fragment with reference to fig. 14, fig. 15, and fig. 16.
First, in fig. 14, the entry time of each Fragment is determined through the lifecycle callback function onFragmentResumed() of each Fragment, the entry time is added to the statistical table, and the next Fragment is traversed; meanwhile, in the embodiment of the present application, the exit time of each Fragment is determined through the lifecycle callback function onFragmentPaused() of each Fragment. It can therefore be seen that two operations are performed when determining the display duration of each Fragment: one is to determine whether there is a newly visible Fragment, and the other is to determine whether any Fragment has become invisible.
In FIG. 15, a specific process of determining whether there is a newly added visible Fragment is described.
The specific process is as follows: it is determined from the Fragment list whether there are Fragments that have not been traversed; if so, the next Fragment is traversed and it is determined whether that Fragment is visible; if it is visible, its entry time is added to the statistical table; if not, the process returns to the step of determining from the Fragment list whether there are Fragments that have not been traversed.
In the embodiment of the present application, if it is determined from the Fragment list that no other fragments have not been traversed, the traversal is stopped.
As shown in fig. 16, the specific process of determining whether any Fragment has become invisible is introduced. Similar to the process of determining whether there is a newly visible Fragment, in the embodiment of the present application it is determined from the Fragment list whether there are Fragments that have not been traversed; if so, the next Fragment is traversed and it is determined whether that Fragment is visible; if not, the exit time of the Fragment is determined and the Fragment is deleted from the Fragment list; if it is visible, the process returns to the step of determining from the Fragment list whether there are Fragments that have not been traversed.
In the embodiment of the present application, if it is determined from the Fragment list that no other fragments have not been traversed, the traversal is stopped.
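The traversal described with reference to figs. 14 to 16 can be sketched as a single pass over the Fragment list (a simplified Python model under the assumption that each Fragment's visibility can be queried; all names are illustrative):

```python
def update_fragment_table(fragments, is_visible, enter_times, now):
    """One traversal of the Fragment list: record the enter time of newly
    visible Fragments (fig. 15) and, for Fragments that have become invisible,
    determine the exit time, compute the display duration, and remove them
    from the statistical table (fig. 16)."""
    durations = {}
    for frag in list(fragments):
        if is_visible(frag):
            # Newly visible Fragment -> add its enter time to the table once.
            enter_times.setdefault(frag, now)
        elif frag in enter_times:
            # Became invisible -> display duration = exit time - enter time.
            durations[frag] = now - enter_times.pop(frag)
    return durations
```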
The above embodiments exemplarily explain the process of determining the display duration of the whole page information and the display duration of the partial page information, and certainly, other methods for determining the display duration are available, which are not described herein again.
In the embodiment of the present application, the user's screen-flipping rate for the page information may also be determined, that is, the extent to which the user has triggered screen-flipping behavior by sliding and browsing; the screen-flipping rate is defined first below.
In an embodiment of the application, the screen flipping rate may be determined based on the height of the target interface and the distance the user can slide in the target interface.
In an alternative embodiment, a reference position may be determined in the target interface when the user starts an activity, and a first visual object currently located in the page information of the reference position is determined; and after the user slides the page information in the interface, determining a second visual object in the page information which is currently positioned at the reference position.
In the embodiment of the present application, the second visual object may be a visual object located at the reference position after the user slides the maximum distance, or may be a visual object located at the set position.
An optional method for calculating the screen-flipping rate is provided. Assume that the height of the target interface is height and the maximum sliding distance of the user in the target interface is scroll_y; then the screen-flipping rate flip_rate of the browsing user is given by formula 1:
flip_rate = (scroll_y + height) / height (formula 1)
In the embodiment of the application, the maximum sliding distance of the user in the target interface can be obtained through a bottom-layer function getPageMaxScrollY, and for different page information the maximum sliding distance in the target interface can be determined through different bottom-layer functions.
How to determine the first visual object and the second visual object when the user slides the maximum distance, and how to determine scroll_y and height in formula 1, is explained below with reference to fig. 17.
Fig. 17 shows the page information sliding from the top visual object until the bottom visual object is displayed in the target interface, i.e. the maximum distance the page information can slide.
A reference position is set in the target interface. A first visual object in the page information is located at the reference position when the top visual object is displayed in the target interface, and a second visual object in the page information is located at the reference position when the bottom visual object is displayed in the target interface. In fig. 17, the bottom visual object in the page information is located at the reference position, so the distance between the first visual object and the bottom visual object of the page information is defined as scroll_y; the height of the target interface in fig. 17 is height.
Once scroll_y and height are obtained, the screen-flipping rate flip_rate can be determined based on formula 1.
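Formula 1 can be expressed directly in code (a minimal Python sketch; the function name is an illustrative assumption):

```python
def flip_rate(scroll_y: float, height: float) -> float:
    """Formula 1: flip_rate = (scroll_y + height) / height, where scroll_y is
    the maximum distance the user slid in the target interface and height is
    the interface height. With no sliding, the rate is 1 (one screen viewed)."""
    return (scroll_y + height) / height
```

For example, a user who slides two full screen heights past the first screen has viewed three screens in total.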
In the embodiment of the application, the page position to which each current visual object has slid can be judged through a sliding callback, and the sliding height is recorded; when the sliding distance is obtained, the current state is judged and a compensation operation is performed, so that a refined record of the sliding distance is completed.
In the embodiment of the application, when a user downloads and opens application software such as a browser, the SDK is installed at the same time, so that the exposure data acquisition method in the embodiment can be performed. In this embodiment of the application, a specific structure of the SDK may be as shown in fig. 18: the SDK includes an access layer, an event layer, a monitoring layer, a reporting layer, and a configuration management layer. Specifically, the configuration management layer may manage the other layers, may configure the SDK, and may set exposure parameters and exposure policies for each visual object in the page information.
The access layer, the event layer and the monitoring layer together monitor the display state of each visual object and obtain the exposure data of each visual object, and the reporting layer is used for reporting the exposure data.
Of course, the above is only an optional SDK structure, and there are other SDK structures, which are not described herein again.
In the embodiment of the application, when the display state of a visual object changes, the display states of the visual objects can be traversed; by setting a conventional exposure strategy and a group exposure strategy, a plurality of exposure methods can be flexibly realized. In addition, the display duration of the visual object and the screen-flipping rate of the user for the target interface can be counted, which facilitates the statistics of the exposure data acquisition service and enriches the statistical information content.
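The reporting flow summarized above — monitor display-state changes, check the exposure condition, and report exposure data while skipping entries already cached — can be sketched as follows (a minimal Python model; all names are illustrative assumptions, not the embodiment's actual interfaces):

```python
def process_state_change(obj_id, position, state, exposure_ok, cache, report):
    """When a visual object's display state changes, report its exposure data
    once the exposure condition is met, skipping data already present in the
    cache (keyed by the object's identification and position information)."""
    if not exposure_ok(state):
        return False
    key = (obj_id, position)
    if key in cache:        # already reported for this id/position pair
        return False
    cache.add(key)
    report(key)
    return True
```

A repeated state change for the same object at the same position is thus reported only once, mirroring the deduplication against cached exposure data described above.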
Based on the same technical concept, an embodiment of the present application provides an exposure data acquiring apparatus 1900, as shown in fig. 19, including:
a page information determining unit 1901, configured to determine page information loaded by a target interface, where the page information includes at least one visualization object;
a target visual object determining unit 1902, configured to determine, when a notification message triggered when a display state of a visual object changes is monitored, a target visual object whose display state changes in at least one visual object;
an exposure data reporting unit 1903, configured to report the target exposure data of the target visual object when the display state of the target visual object meets the exposure condition.
Optionally, the target visualization object determining unit 1902 is further configured to:
and determining a target exposure condition matched with the target identification information from at least one exposure condition based on the target identification information of any target visual object, and determining whether the display state of the target visual object is matched with the target exposure condition.
Optionally, the exposure data reporting unit 1903 is further configured to:
determining whether the target visualization object is a group object in a target group, the group comprising a plurality of group objects;
and if the target visual object is determined to be the group object, reporting the group exposure data of the target group.
Optionally, the exposure data reporting unit 1903 is further configured to:
and determining that the target exposure data of the target visual object does not exist in the cached exposure data based on the identification information and the position information of the target visual object, wherein the cached exposure data is determined according to other target exposure data of other target reporting objects of the reported object data.
Optionally, the exposure data reporting unit 1903 is further configured to:
and determining the target visual object as an interface sliding display object.
Optionally, the exposure data reporting unit 1903 is further configured to:
if the target visual object is determined to be the interface sliding display object, and the target exposure data of the target visual object exists in the cached object data based on the identification information and the position information of the target visual object, the target exposure data is not reported, and the sliding display object represents the visual object which can be displayed in the target interface in a sliding manner by the target visual object.
Optionally, the apparatus 1900 further comprises:
a duration counting unit 1904, configured to determine, for any visual object, if it is determined that the visual object has a page duration counting tag, a display duration of the visual object according to a first time at which the visual object has an interface display focus in the target interface and a second time at which the visual object does not have the interface display focus in the target interface, and report the display duration.
Optionally, the apparatus 1900 further comprises:
a screen-flipping rate statistics unit 1905, configured to determine a reference position in the target interface;
determining a sliding distance of the target page information based on a first visual object located at the reference position at the moment the target page information starts to slide and a second visual object located at the reference position at the moment the target page information stops sliding;
and determining the screen turning rate aiming at the target page information based on the sliding distance of the target page information and the interface height of the target interface.
Optionally, the apparatus 1900 further comprises:
a non-target visualized object determining unit 1906, configured to determine, in the at least one visualized object, a non-target visualized object whose display state does not change and a reporting time of exposure data of the non-target visualized object;
and if the time difference between the reporting time and the current time meets the condition of reporting time again, reporting the exposure data of the non-target visual object.
Optionally, the page information determining unit 1901 is further configured to:
and replacing the management object in the page information with a state monitoring object, and monitoring the notification message through the state monitoring object.
Based on the same technical concept, the embodiment of the present application provides a computer device, as shown in fig. 20, including at least one processor 2001 and a memory 2002 connected to the at least one processor, and the specific connection medium between the processor 2001 and the memory 2002 is not limited in the embodiment of the present application, and the processor 2001 and the memory 2002 are connected through a bus in fig. 20 as an example. The bus may be divided into an address bus, a data bus, a control bus, etc.
In the embodiment of the present application, the memory 2002 stores instructions executable by the at least one processor 2001, and the at least one processor 2001 may execute the steps included in the foregoing exposure data acquisition method by executing the instructions stored in the memory 2002.
The processor 2001 is a control center of the computer device, and may connect various parts of the computer device using various interfaces and lines, and performs various functions of the computer device by running or executing the instructions stored in the memory 2002 and invoking the data stored in the memory 2002. Optionally, the processor 2001 may include one or more processing units, and the processor 2001 may integrate an application processor and a modem processor, wherein the application processor mainly handles an operating system, a user interface, an application program, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 2001. In some embodiments, the processor 2001 and the memory 2002 may be implemented on the same chip, or in some embodiments, they may be implemented separately on separate chips.
The processor 2001 may be a general-purpose processor such as a Central Processing Unit (CPU), a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof, configured to implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present Application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
The memory 2002, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 2002 may include at least one type of storage medium, for example, a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and so on. The memory 2002 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 2002 in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
Based on the same inventive concept, embodiments of the present application provide a computer-readable storage medium storing a computer program executable by a computer apparatus, which when the program is run on the computer apparatus, causes the computer apparatus to perform the steps of the exposure data acquisition method.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (15)
1. An exposure data acquisition method, characterized in that the method comprises:
determining page information loaded by a target interface, wherein the page information comprises at least one visual object;
when a notification message triggered when the display state of the visual object changes is monitored, determining a target visual object with a changed display state in the at least one visual object;
and when the display state of the target visual object meets the exposure condition, reporting the target exposure data of the target visual object.
2. The method of claim 1, further comprising:
and determining a target exposure condition matched with the target identification information from at least one exposure condition based on the target identification information of any target visual object, and determining whether the display state of the target visual object is matched with the target exposure condition.
3. The method of claim 1, wherein after reporting the target exposure data of the target visualization object, the method further comprises:
determining whether the target visualization object is a group object in a target group, the group comprising a plurality of group objects;
and if the target visual object is determined to be the group object, reporting the group exposure data of the target group.
4. The method according to claim 1, further comprising, after the display state of the target visualization object satisfies an exposure condition:
determining that the target exposure data of the target visualization object does not exist in the cached exposure data based on the identification information and the position information of the target visualization object, wherein the cached exposure data is determined according to other target exposure data of other target reporting objects of the reported object data.
5. The method of claim 4, wherein before determining that the target exposure data of the target visualization object does not exist in the cached exposure data based on the identification information and the location information of the target visualization object, further comprising:
and determining the target visualization object as an interface sliding display object.
6. The method of claim 5, further comprising:
if the target visualization object is determined to be an interface sliding display object, and it is determined that the target exposure data of the target visualization object exists in the cached object data based on the identification information and the position information of the target visualization object, the target exposure data is not reported, and the sliding display object represents a visualization object that can be slidably displayed in the target interface by the target visualization object.
7. The method of claim 1, further comprising:
and for any visual object, if the visual object is determined to have a page time statistical label, determining the display time length of the visual object according to a first moment when the visual object has an interface display focus in the target interface and a second moment when the visual object does not have the interface display focus in the target interface, and reporting the display time length.
8. The method of claim 1, wherein the page information loaded by the target interface is part of the target page information, and the method further comprises:
determining a reference position in the target interface;
determining a sliding distance of the target page information based on a first visual object of which the target page information sliding starting moment is located at the reference position and a second visual object of which the target page information sliding stopping moment is located at the reference position;
determining a screen turning rate for the target page information based on the sliding distance of the target page information and an interface height of the target interface.
9. The method according to claim 1, wherein after monitoring a notification message triggered when the display state of the visual object changes, the method further comprises:
determining the non-target visual objects with unchanged display states and the reporting time of the exposure data of the non-target visual objects in the at least one visual object;
and if the time difference between the reporting time and the current time is determined to meet the condition of reporting time again, reporting the exposure data of the non-target visual object.
10. The method of claim 1, wherein after determining page information loaded by the target interface, further comprising:
and replacing the management object in the page information with a state monitoring object, and monitoring the notification message through the state monitoring object.
11. An exposure data acquisition apparatus, comprising:
the page information determining unit is used for determining page information loaded by a target interface, and the page information comprises at least one visual object;
the target visual object determining unit is used for determining a target visual object with a changed display state in the at least one visual object when a notification message triggered when the display state of the visual object is changed is monitored;
and the exposure data reporting unit is used for reporting the target exposure data of the target visual object when the display state of the target visual object meets the exposure condition.
12. The apparatus according to claim 11, wherein the target visualization object determination unit is further configured to:
and determining a target exposure condition matched with the target identification information from at least one exposure condition based on the target identification information of any target visual object, and determining whether the display state of the target visual object is matched with the target exposure condition.
13. The apparatus of claim 11, wherein the exposure data reporting unit is further configured to:
determining whether the target visualization object is a group object in a target group, the group comprising a plurality of group objects;
and if the target visual object is determined to be the group object, reporting the group exposure data of the target group.
14. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method of any of claims 1 to 10 are performed when the program is executed by the processor.
15. A computer-readable storage medium, in which a computer program is stored which is executable by a computer device, and which, when run on the computer device, causes the computer device to carry out the steps of the method as claimed in any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011074718.7A CN114328072B (en) | 2020-10-09 | 2020-10-09 | Exposure data acquisition method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011074718.7A CN114328072B (en) | 2020-10-09 | 2020-10-09 | Exposure data acquisition method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114328072A true CN114328072A (en) | 2022-04-12 |
CN114328072B CN114328072B (en) | 2024-07-26 |
Family
ID=81032309
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011074718.7A Active CN114328072B (en) | 2020-10-09 | 2020-10-09 | Exposure data acquisition method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114328072B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB0920193D0 (en) * | 2008-11-18 | 2010-01-06 | Mcknight Thomas R | Cooperative measurement technique for the determination of internet web page exposure and viewing behavior |
US20110047507A1 (en) * | 2009-05-15 | 2011-02-24 | Invensys Systems, Inc. | Graphically displaying manufacturing execution system information data elements according to a pre-defined spatial positioning scheme |
CN101894018A (en) * | 2010-05-31 | 2010-11-24 | 浪潮(北京)电子信息产业有限公司 | Method and device for maintaining control state information |
CN102833183A (en) * | 2012-08-16 | 2012-12-19 | 上海量明科技发展有限公司 | Instant messaging interactive interface moving method, client and system |
US20190026212A1 (en) * | 2013-10-04 | 2019-01-24 | Verto Analytics Oy | Metering user behaviour and engagement with user interface in terminal devices |
WO2017167042A1 (en) * | 2016-04-01 | 2017-10-05 | 阿里巴巴集团控股有限公司 | Statistical method and apparatus for behaviors of front-end users |
CN108846116A (en) * | 2018-06-26 | 2018-11-20 | 北京京东金融科技控股有限公司 | Page Impression collecting method, system, electronic equipment and storage medium |
CN111125591A (en) * | 2018-11-01 | 2020-05-08 | 百度在线网络技术(北京)有限公司 | Statistical method, device, terminal and storage medium of exposure information |
Non-Patent Citations (1)
Title |
---|
QINGLIANGHU: "Experience from a big tech company: an analysis of how automatic buried-point (event-tracking) data collection works on Android", pages 1 - 8, Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/147602858> *
Also Published As
Publication number | Publication date |
---|---|
CN114328072B (en) | 2024-07-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108363602B (en) | Intelligent UI (user interface) layout method and device, terminal equipment and storage medium | |
US20160364772A1 (en) | Graphical user interface for high volume data analytics | |
CN108874289B (en) | Application history record viewing method and device and electronic equipment | |
CN105898209A (en) | Video platform monitoring and analyzing system | |
CN110457615A (en) | Personal page display processing method, apparatus, device and readable storage medium | |
CN108804299A (en) | Application exception processing method and processing device | |
CN103443781A (en) | Data delivery | |
CN112394908A (en) | Method and device for automatically generating embedded point page, computer equipment and storage medium | |
CN109803152A (en) | Violation checking method, device, electronic equipment and storage medium | |
CN110908880B (en) | Buried point code injection method, event reporting method and related equipment thereof | |
CN113010795B (en) | User dynamic image generation method, system, storage medium and electronic device | |
CN111581069A (en) | Data processing method and device | |
WO2021189766A1 (en) | Data visualization method and related device | |
CN115544183A (en) | Data visualization method and device, computer equipment and storage medium | |
CN115168166A (en) | Method, device and equipment for recording business data change and storage medium | |
CN115445212A (en) | Game gift bag pushing method and device, computer equipment and storage medium | |
CN112506733A (en) | Method, device, equipment and medium for finely analyzing user behavior data | |
CN111683280A (en) | Video processing method and device and electronic equipment | |
CN114328072B (en) | Exposure data acquisition method and device | |
CN116186119A (en) | User behavior analysis method, device, equipment and storage medium | |
CN113672660B (en) | Data query method, device and equipment | |
CN116860541A (en) | Service data acquisition method, device, computer equipment and storage medium | |
CN115129809A (en) | Method and device for determining user activity, electronic equipment and storage medium | |
CN114090392A (en) | Page browsing time duration statistical method and device, electronic equipment and storage medium | |
CN110020166A (en) | Data analysis method and related device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||