CN115134352B - Buried point data uploading method, device, equipment and medium - Google Patents

Buried point data uploading method, device, equipment and medium

Info

Publication number
CN115134352B
Authority
CN
China
Prior art keywords
queue
data
uploading
buried
buried point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210744360.7A
Other languages
Chinese (zh)
Other versions
CN115134352A (en)
Inventor
饶伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202210744360.7A priority Critical patent/CN115134352B/en
Publication of CN115134352A publication Critical patent/CN115134352A/en
Application granted granted Critical
Publication of CN115134352B publication Critical patent/CN115134352B/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/06: Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to the field of communication technologies and in particular discloses a buried point data uploading method, apparatus, device, and medium. The method includes: initializing a buried point in response to an application start instruction to obtain an initialized buried point; acquiring, through the initialized buried point, buried point data and buried point data types generated while the application runs, and storing the buried point data in a blocking queue; acquiring the current network environment and, when it meets a preset condition, storing the buried point data from the blocking queue into a concurrent queue according to a preset correspondence between buried point data types and concurrent queues; and uploading the buried point data in the concurrent queue that satisfies a preset uploading condition to the cloud. By storing the data first in a blocking queue and then in a concurrent queue, and uploading it once the preset uploading condition is met, data loss when no network is available is avoided, data security is ensured, and the data uploading rate is improved.

Description

Buried point data uploading method, device, equipment and medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method, an apparatus, a device, and a medium for uploading buried point data.
Background
Communication technology is developing rapidly, data transmission efficiency keeps increasing, and users can experience ever richer internet services on a limited screen. To enrich users' lives and improve user experience, companies of all sizes in many industries have moved online, packaging their services into APPs (Applications) delivered to user devices. In the field of intelligent automobile control, to make the automobile no less capable than an intelligent terminal such as a mobile phone and to give users a convenient and practical smart driving experience, and because the smartphone is the device closest to the user, some automobile manufacturers have developed APPs that conveniently expose vehicle information and vehicle control functions, which users install on their phones to obtain vehicle information and perform remote control. Meanwhile, to serve vehicle owners better, data on all functions in the APP are collected within the scope of the user's authorization, the functional experience of the APP is optimized and improved based on these data, and the data collected in the APP are uploaded to the company's cloud at an appropriate time so that teams can analyze and study them to determine the direction in which the APP should be improved.
Most APPs developed on the Android system perform event tracking, so many buried point SDKs (Software Development Kits) for Android have emerged. These SDKs give many APPs a convenient way to implement buried points, but the existing buried point SDKs have problems with data security and cannot be modified according to individual requirements.
In addition, some existing technologies aim to improve the efficiency of statistical analysis of buried point data and to build multi-service buried point databases. However, the existing buried point data uploading methods are inefficient and cannot support the increasingly demanding requirements of buried point data analysis.
Disclosure of Invention
To solve the above technical problems, embodiments of the present application provide a buried point data uploading method, apparatus, device, and medium, which can overcome the problems in the related art of poor data security, inability to modify the buried point according to requirements, and low efficiency of buried point data uploading.
Additional features and advantages of the application will be set forth in the detailed description which follows, or in part may be learned by practice of the application.
In one embodiment of the present application, a method for uploading buried point data includes:
initializing a buried point in response to an application start instruction to obtain an initialized buried point;
acquiring, through the initialized buried point, buried point data and buried point data types generated while the application runs, and storing the buried point data in a blocking queue;
acquiring the current network environment and, when the current network environment meets a preset condition, storing the buried point data from the blocking queue into the concurrent queue according to a preset correspondence between buried point data types and concurrent queues;
and uploading the buried point data in the concurrent queue that satisfies a preset uploading condition to the cloud.
In an embodiment of the present application, after the buried point data in the concurrent queue that satisfies the preset uploading condition is uploaded to the cloud, the method further includes:
and deleting the buried point data in the blocking queue and the concurrent queue in response to an application program closing instruction so as to release the storage space in the blocking queue and the concurrent queue.
In an embodiment of the present application, after the buried point is initialized and the initialized buried point is obtained, the method further includes:
if buried point data left over from the previous run of the application exists at the initialized buried point, storing the leftover buried point data in the blocking queue.
In an embodiment of the present application, initializing the buried point to obtain the initialized buried point includes:
initializing a buried point instance, wherein the buried point instance is used to respond to external control instructions;
creating a current network environment receiving object, wherein the current network environment receiving object is used to receive the current network environment of the application in real time;
creating a data storage queue, wherein the data storage queue comprises a blocking queue and a concurrent queue;
creating a buried point program, wherein the buried point program is used to control the storage and uploading of buried point data generated while the application runs;
and obtaining the initialized buried point from the initialized buried point instance, the current network environment receiving object, the data storage queue, and the buried point program.
In an embodiment of the present application, acquiring the current network environment and, when the current network environment meets a preset condition, storing the buried point data from the blocking queue into the concurrent queue according to the preset correspondence between buried point data types and concurrent queues includes:
acquiring the current network environment from the buried point sub-thread at a preset time interval;
if the current network environment meets the preset condition, storing the buried point data from the blocking queue into the concurrent queue according to the preset correspondence between buried point data types and concurrent queues;
if the current network environment does not meet the preset condition, blocking the buried point main thread, and when the current network environment changes so that it meets the preset condition, removing the thread blocking and storing the buried point data from the blocking queue into the concurrent queue.
In an embodiment of the present application, uploading the buried point data in the concurrent queue that satisfies the preset uploading condition to the cloud includes:
traversing the buried point data in the concurrent queue according to the preset uploading condition and screening out the buried point data that satisfies it;
and storing the screened buried point data in a preset object and uploading the object to the cloud through a public network framework.
In one embodiment of the present application, the method further comprises:
when the buried point is initialized, sending an initialization message to the message center inside the buried point function, the message being used to control a sub-thread to perform buried point initialization;
when buried point data is stored, the message center inside the buried point function forwards a data storage message used to control a sub-thread to store the buried point data and to upload it after a preset delay;
when buried point data is uploaded, the message center inside the buried point function forwards a data uploading message, and the controlled sub-thread uploads the buried point data in the concurrent queue that satisfies the preset uploading condition to the cloud;
after the buried point data in the concurrent queue that satisfies the preset uploading condition has been uploaded to the cloud, the message center inside the buried point function forwards a data deletion message used to control the sub-thread to delete the buried point data in the blocking queue and the concurrent queue.
In one embodiment of the present application, there is provided a buried point data uploading apparatus, including:
the initialization module, configured to initialize the buried point in response to an application start instruction to obtain an initialized buried point;
the first storage module, configured to acquire, through the initialized buried point, buried point data and buried point data types generated while the application runs, and to store the buried point data in the blocking queue;
the second storage module, configured to acquire the current network environment and, when the current network environment meets a preset condition, to store the buried point data from the blocking queue into the concurrent queue according to the preset correspondence between buried point data types and concurrent queues;
and the buried point data uploading module, configured to upload the buried point data in the concurrent queue that satisfies the preset uploading condition to the cloud.
In one embodiment of the present application, there is provided an electronic device including:
one or more processors;
and a storage means for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement the buried point data uploading method as described above.
In one embodiment of the present application, a computer readable medium is provided, on which a computer program is stored which, when executed by a processor, implements a buried point data uploading method as described above.
In the technical solution provided by the embodiments of the present application, the prior art collects and uploads buried point data based on buried point SDKs provided by third-party companies, a mode in which data security cannot be ensured, and the existing buried point data uploading methods are inefficient and cannot meet the requirement for efficient transmission. In the present method, the buried point is initialized in response to an application start instruction; during initialization it is possible to detect whether the buried point holds leftover data, which avoids data loss, and at the same time the configuration file can be updated and threads can be created. Buried point data and buried point data types generated while the application runs are acquired through the initialized buried point, and the buried point data is stored in a blocking queue. The current network environment is acquired and, when it meets a preset condition, the buried point data from the blocking queue is stored in the concurrent queue according to the preset correspondence between buried point data types and concurrent queues. Because the buried point data is stored successively in the blocking queue and the concurrent queue, and data is stored and uploaded according to two preset conditions, data congestion and thread blocking are avoided and the buried point data uploading efficiency is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It will be apparent to those of ordinary skill in the art that the drawings described below are merely examples of some of the embodiments of this application and that other drawings may be made from these drawings without the exercise of inventive effort. In the drawings:
FIG. 1 is a schematic illustration of one implementation environment to which the present application relates;
FIG. 2 is a flow chart illustrating a method of uploading buried point data according to an exemplary embodiment of the present application;
FIG. 3 is a life cycle schematic of a buried point shown in an exemplary embodiment of the present application;
FIG. 4 is a diagram illustrating the types of data received by a message center within a buried point function according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a buried point data upload flow according to an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of a buried point data uploading device according to an exemplary embodiment of the present application;
Fig. 7 shows a schematic diagram of a computer system suitable for use in implementing the electronic device of the embodiments of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
Reference to "a plurality" in this application means two or more than two. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., a and/or B may represent: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship.
Herein, abbreviations and key term definitions related to the present application are presented:
burying: event Tracking (Event Tracking), which is a related technology for capturing, processing and transmitting specific user behaviors or events and an implementation process thereof, adds a section of code to a place where user behavior data need to be detected, collects related embedded point data to a cloud end, and finally presents the data in the cloud end. The embedded point gathers some information in the application specific flow, which is used to track the usage of the application, and then to further optimize the product or provide data support for the operation, including the number of accesses (Visits), the number of visitors (visitors), the stay Time (Time On Site), the number of Page Views (Page Views), and the jump-out Rate (Bounce Rate). Such information collection can be roughly classified into two types: page statistics (track this virtual page view), statistics of operational behavior (track this button by an event).
LinkedBlockingQueue: a blocking queue implemented as a singly linked list. The queue orders elements FIFO (first in, first out): new elements are inserted at the tail of the queue, and the take operation retrieves the element at the head of the queue. The throughput of a linked queue is typically higher than that of an array-based queue, but its performance is less predictable in most concurrent applications. In addition, LinkedBlockingQueue has an optional capacity bound (to prevent excessive expansion), i.e., the capacity of the queue can be specified. If it is not specified, the default capacity equals Integer.MAX_VALUE.
Blocking queue: when the blocking queue is empty, the operation of retrieving data (take) blocks; when the blocking queue is full, the add operation (put) blocks.
ConcurrentLinkedQueue: concurrentLinkedQueue is a link node-based, thread-free, safe queue, and the elements are ordered according to the first-in first-out principle. New elements are inserted from the tail of the queue and queue elements are acquired, which need to be acquired from the head of the queue. A link node-based thread-free secure queue. This queue orders the elements according to FIFO (first in first out) principles. The head of the queue is the longest element in the queue. The tail of the queue is the element in the queue that is the shortest in time.
Thread: thread is the minimum unit that the operating system can perform operation scheduling. It is included in the process and is the actual unit of operation in the process. One thread refers to a single sequential control flow in a process, and multiple threads can be concurrent in a process, each thread executing different tasks in parallel.
At present, the acquisition of buried point data is mostly implemented with third-party SDKs, mainly through three approaches: code buried points, visual buried points, and codeless (no-point) buried points. Third-party buried points carry risks of data leakage and compliance issues, cannot support later data-driven work, and features such as user profiling still have to rely on self-developed buried points. In buried point design, besides the burying approach, the way buried point data is uploaded also influences the subsequent data analysis results. The schemes commonly used to upload buried point data include the conventional XHR request (XMLHttpRequest), an image object (embedded Image), and the Beacon API (Beacon interface). These three common uploading methods limit the data volume and may cause the upload to fail when the page is closed.
The method, device, equipment, and medium for uploading buried point data provided in the embodiments of the present application relate to the blocking queue and the concurrent queue described above; these embodiments are described in detail below.
Referring first to fig. 1, fig. 1 is a schematic diagram of an implementation environment according to the present application. The implementation environment comprises terminal devices 101, 102, 103, a network 104 and a cloud 105. The network 104 may provide a medium for communication links between the terminal devices 101, 102, 103 and the cloud. The network 104 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, among others.
A user may interact with the cloud 105 through the network 104 using the terminal devices 101, 102, 103 to receive or transmit data or the like. The terminal devices 101, 102, 103 may be various electronic devices that have a display screen and support web browsing, executing applications, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, wearable devices, virtual reality devices, smart homes, smart voice interaction devices, smart home appliances, car terminals, etc. The embodiment of the invention can be applied to various scenes, including but not limited to cloud technology, artificial intelligence, intelligent transportation, auxiliary driving and the like.
The cloud 105 may be a cloud that provides various services, for example, a cloud that acquires the buried point data collected by the terminal devices 101, 102, 103 for storage and analysis.
The cloud 105 may be an independent physical cloud, or may be a cloud cluster or a distributed system formed by a plurality of physical clouds, or may be a cloud that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Network, content delivery networks), and basic cloud computing services such as big data and artificial intelligence platforms, which are not limited in this disclosure.
The terminal devices 101, 102, 103 may, for example, initialize the buried point in response to an application start instruction to obtain an initialized buried point; acquire, through the initialized buried point, buried point data generated while the application runs and store the buried point data in a blocking queue; acquire the current network environment and, when it meets a preset condition, store the buried point data from the blocking queue into the concurrent queue according to the preset correspondence between buried point data types and concurrent queues; and upload the buried point data in the concurrent queue that satisfies the preset uploading condition to the cloud.
Fig. 2 is a flowchart illustrating a method for uploading buried point data according to an exemplary embodiment of the present application. The method provided in the embodiment of the present application may be performed by any electronic device having computing processing capabilities, for example, the method may be performed by the terminal devices 101, 102, 103 in the embodiment of fig. 1, or may be performed by the cloud 105 and the terminal devices 101, 102, 103 together. In the following embodiments, terminal apparatuses 101, 102, 103 are exemplified as execution subjects, but the present disclosure is not limited thereto.
In some embodiments, before performing the following buried point data uploading method, the terminal devices 101, 102, 103 may set the buried point at the front end or the back end by a pre-designed buried point scheme.
Referring to fig. 2, the method for uploading buried point data provided in the embodiment of the present application may include the following steps.
In step S210, the buried point is initialized in response to the application start instruction, and the initialized buried point is obtained.
In an embodiment of the present application, buried point initialization needs to be performed before buried point data is collected, so as to obtain the processed buried point. In this embodiment, the life cycle of the buried point is also predefined; referring to fig. 3, fig. 3 is a schematic diagram of the life cycle of the buried point according to an exemplary embodiment of the present application. The APP has three overall states, "APP start", "APP running", and "APP exit", which correspond to the three stages of the buried point life cycle: "initialization", "data entry", and "memory release". Initialization is the buried point initialization of this embodiment; data entry is the acquisition of buried point data and its upload to the cloud; memory release frees the memory used for buried point data storage after the APP is closed. Defining the life cycle makes it easier to use the buried point functions normally while taking the energy consumption of the terminal device into account, so that the memory occupied by buried point data is cleaned up as the APP exits.
In this embodiment, the initialization procedure is called when the APP starts and may initialize the files related to the buried point (for reading and writing buried point data), broadcasts (for monitoring network changes), threads (for handling uploads over the network), messages (handler), and so on. A data recording method is called whenever the APP buries data, and all buried point data enters the buried point processing flow through this method. Memory release, i.e., the data destruction procedure, is called when the APP is closed; it properly handles the current buried point data, destroys the memory objects related to the buried point, and releases part of the memory.
In an embodiment of the present application, step S210 specifically includes the following steps:
initializing a buried point instance, wherein the buried point instance is used to respond to external control instructions;
creating a current network environment receiving object, wherein the current network environment receiving object is used to receive the current network environment of the application in real time;
creating a data storage queue, wherein the data storage queue comprises a blocking queue and a concurrent queue;
creating a buried point program, wherein the buried point program is used to control the storage and uploading of buried point data generated while the application runs;
and obtaining the initialized buried point from the initialized buried point instance, the current network environment receiving object, the data storage queue, and the buried point program, as illustrated in the sketch below.
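A minimal sketch of how such an initialization might look on Android is given below. All class, field, and method names (BuriedPointTracker, init, networkAvailable, and so on) are illustrative assumptions rather than identifiers taken from the patent; only the standard Android and java.util.concurrent APIs are real.

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;
    import android.net.ConnectivityManager;
    import android.net.NetworkInfo;
    import android.os.HandlerThread;

    import java.util.Map;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical buried point singleton; all names are illustrative assumptions, not from the patent.
    public final class BuriedPointTracker {
        private static volatile BuriedPointTracker instance;   // buried point instance

        // Data storage queues: a blocking queue and a concurrent queue.
        final LinkedBlockingQueue<Map<String, Object>> blockingQueue = new LinkedBlockingQueue<>(1024);
        final ConcurrentLinkedQueue<Map<String, Object>> concurrentQueue = new ConcurrentLinkedQueue<>();
        // Buried point sub-thread that controls the storage and uploading of buried point data.
        final HandlerThread workerThread = new HandlerThread("buried-point-worker");
        // Current network environment, pushed in real time by the broadcast receiver below.
        volatile boolean networkAvailable;

        private BuriedPointTracker() { }

        // Called in response to the application start instruction; returns the initialized buried point.
        public static BuriedPointTracker init(Context context) {
            if (instance == null) {
                synchronized (BuriedPointTracker.class) {
                    if (instance == null) {
                        final BuriedPointTracker t = new BuriedPointTracker();
                        // Current network environment receiving object.
                        context.registerReceiver(new BroadcastReceiver() {
                            @Override public void onReceive(Context c, Intent i) {
                                ConnectivityManager cm =
                                        (ConnectivityManager) c.getSystemService(Context.CONNECTIVITY_SERVICE);
                                NetworkInfo info = cm.getActiveNetworkInfo();
                                t.networkAvailable = info != null && info.isConnected();
                            }
                        }, new IntentFilter(ConnectivityManager.CONNECTIVITY_ACTION));
                        // Start the buried point sub-thread.
                        t.workerThread.start();
                        instance = t;
                    }
                }
            }
            return instance;
        }
    }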
In an embodiment of the present application, step S210 further includes the following step:
if buried point data left over from the previous run of the application exists at the initialized buried point, storing the leftover buried point data in the blocking queue.
A traditional buried point scheme has the drawback that buried point data cannot be effectively retained before the buried point SDK has been initialized. In this embodiment, temporary storage of buried point data in the blocking queue is supported before the initialization of the buried point is completed; after initialization is completed, the buried point data is taken out of the blocking queue, written to a local file, and subsequently uploaded to the cloud.
In step S220, the buried point data and the buried point data type generated while the application runs are acquired through the initialized buried point, and the buried point data is stored in the blocking queue.
In an embodiment of the present application, the collected buried point data is first stored in the blocking queue LinkedBlockingQueue. The blocking queue LinkedBlockingQueue may hold buried point data left over from the previous run of the application as well as buried point data collected during the current run.
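As a hedged illustration of this step, the following fragment (all names are assumptions, not taken from the patent) shows one way a track call could place an event, together with its type and timestamp, into the blocking queue.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.LinkedBlockingQueue;

    // Fragment of the hypothetical tracker sketched above; field and method names are assumptions.
    class TrackerRecordingFragment {
        final LinkedBlockingQueue<Map<String, Object>> blockingQueue = new LinkedBlockingQueue<>(1024);

        // Records one buried point event while the application is running.
        public void track(String eventName, String eventType, Map<String, Object> properties) {
            Map<String, Object> event = new HashMap<>(properties);
            event.put("event", eventName);
            event.put("type", eventType);                       // buried point data type, used for queue routing
            event.put("timestamp", System.currentTimeMillis());
            // Buried point data is stored in the blocking queue first; data left over from
            // the previous run can be put into the same queue during initialization.
            if (!blockingQueue.offer(event)) {
                // Queue full: persist to a local file instead of blocking the caller (a design choice).
            }
        }
    }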
In step S230, the current network environment is acquired, and when the current network environment meets a preset condition, the buried point data in the blocking queue is stored in the concurrent queue according to the preset correspondence between buried point data types and concurrent queues.
In an embodiment of the present application, when the current network environment meets the preset condition, the buried point data left over from the previous run and the buried point data collected during the current run are transferred from the blocking queue to the concurrent queue. In this embodiment, when the network is available, the data at the head of the blocking queue is stored in the concurrent queue; when the network is unavailable, synchronization-lock blocking is performed and no data is transferred onward from the blocking queue.
In an embodiment of the present application, step S230 specifically includes the following steps:
acquiring the current network environment from the buried point sub-thread at a preset time interval;
if the current network environment meets the preset condition, storing the buried point data from the blocking queue into the concurrent queue according to the preset correspondence between buried point data types and concurrent queues;
if the current network environment does not meet the preset condition, blocking the buried point main thread, and when the current network environment changes so that it meets the preset condition, removing the thread blocking and storing the buried point data from the blocking queue into the concurrent queue.
Considering that a network is required when buried point data is uploaded to the cloud, and in order to reduce upload failures (and hence data loss) caused by an unavailable network, in this embodiment a thread is actively blocked if no network is detected before uploading, and the blocking is released only when a broadcast announcing network availability is received. In this embodiment, the concurrent queue uses ConcurrentLinkedQueue, and the buried point data transferred from the blocking queue is stored in the ConcurrentLinkedQueue, as sketched below.
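The sketch below illustrates one possible form of this transfer loop running on the buried point sub-thread. The class and method names are assumptions; the blocking on a lock until the network broadcast arrives, and the routing of events into per-type ConcurrentLinkedQueue instances, follow the behavior described above.

    import java.util.Map;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical transfer loop; names are assumptions, not identifiers from the patent.
    class QueueTransferLoop implements Runnable {
        private final LinkedBlockingQueue<Map<String, Object>> blockingQueue;
        // Preset correspondence between buried point data types and concurrent queues.
        private final Map<String, ConcurrentLinkedQueue<Map<String, Object>>> queuesByType;
        private final Object networkLock = new Object();
        private volatile boolean networkAvailable;

        QueueTransferLoop(LinkedBlockingQueue<Map<String, Object>> blockingQueue,
                          Map<String, ConcurrentLinkedQueue<Map<String, Object>>> queuesByType) {
            this.blockingQueue = blockingQueue;
            this.queuesByType = queuesByType;
        }

        // Called from the network receiver whenever the current network environment changes.
        void onNetworkChanged(boolean available) {
            synchronized (networkLock) {
                networkAvailable = available;
                if (available) {
                    networkLock.notifyAll();   // release the blocked thread once the network is back
                }
            }
        }

        @Override
        public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    // Block while the current network environment does not meet the preset condition.
                    synchronized (networkLock) {
                        while (!networkAvailable) {
                            networkLock.wait();
                        }
                    }
                    // Take one element from the head of the blocking queue (blocks when empty)
                    // and route it to the concurrent queue that matches its data type.
                    Map<String, Object> event = blockingQueue.take();
                    String type = String.valueOf(event.get("type"));
                    queuesByType
                            .computeIfAbsent(type, k -> new ConcurrentLinkedQueue<>())
                            .offer(event);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }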
In step S240, the buried point data in the concurrent queue that satisfies the preset uploading condition is uploaded to the cloud.
In an embodiment of the present application, step S240 specifically includes the following steps:
traversing the buried point data in the concurrent queue according to the preset uploading condition and screening out the buried point data that satisfies it;
and storing the screened buried point data in a preset object and uploading the object to the cloud through a public network framework.
In this embodiment, when the buried point data satisfies the preset uploading condition, the data in the ConcurrentLinkedQueue is traversed directly, placed into a preset map object, and the map object containing the buried point data is uploaded to the cloud through the public network framework.
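A minimal sketch of this upload step is shown below. The batch-size condition, the payload keys, and the class name are assumptions; the patent itself only states that a preset map object is filled and handed to a public network framework for the actual transfer.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Hypothetical upload step; the upload condition and payload layout are placeholders.
    class UploadStep {
        // Drains events that satisfy the preset upload condition into one map payload.
        static Map<String, Object> buildPayload(ConcurrentLinkedQueue<Map<String, Object>> queue, int batchSize) {
            List<Map<String, Object>> batch = new ArrayList<>();
            Map<String, Object> event;
            // Traverse the concurrent queue and screen out data meeting the upload condition
            // (here: up to batchSize elements, as one possible preset condition).
            while (batch.size() < batchSize && (event = queue.poll()) != null) {
                batch.add(event);
            }
            Map<String, Object> payload = new HashMap<>();
            payload.put("events", batch);
            payload.put("count", batch.size());
            return payload;   // handed to the app's public network framework for the actual upload
        }
    }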
In an embodiment of the present application, after step S240, the following steps are further included:
and deleting the buried point data in the blocking queue and the concurrent queue in response to an application program closing instruction so as to release the storage space in the blocking queue and the concurrent queue.
In this embodiment, after the upload of the buried point data is completed, in accordance with the predefined life cycle the buried point data is deleted when the APP is closed, so as to release the storage space.
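One possible form of this memory-release step is sketched below; the class, method, and parameter names are assumptions, and the patent only describes the behavior at the level of clearing both queues when the application closes.

    import android.os.HandlerThread;

    import java.util.Map;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical memory-release step, called in response to the application close instruction.
    class BuriedPointReleaseStep {
        static void release(LinkedBlockingQueue<Map<String, Object>> blockingQueue,
                            ConcurrentLinkedQueue<Map<String, Object>> concurrentQueue,
                            HandlerThread workerThread) {
            // Delete the buried point data in both queues to free their storage space.
            blockingQueue.clear();
            concurrentQueue.clear();
            // Stop the buried point sub-thread once its pending messages are finished.
            workerThread.quitSafely();
        }
    }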
In an embodiment of the present application, the buried point data uploading method further includes the following:
when the buried point is initialized, an initialization message is sent to the message center inside the buried point function, and this message is used to control a sub-thread to perform buried point initialization;
when buried point data is stored, the message center inside the buried point function forwards a data storage message used to control a sub-thread to store the buried point data and to upload it after a preset delay;
when buried point data is uploaded, the message center inside the buried point function forwards a data uploading message, and the controlled sub-thread uploads the buried point data in the concurrent queue that satisfies the preset uploading condition to the cloud;
after the buried point data in the concurrent queue that satisfies the preset uploading condition has been uploaded to the cloud, the message center inside the buried point function forwards a data deletion message used to control the sub-thread to delete the buried point data in the blocking queue and the concurrent queue.
In this embodiment, when the buried point is initialized, the message center inside the buried point function becomes ready to take on the task of forwarding internal messages as needed. Referring to fig. 4, fig. 4 is a schematic diagram of the types of messages received by the message center inside the buried point function according to an exemplary embodiment of the present application. Four message types exist inside the buried point function: UPLOAD_DIRECT_MSG is used to read local data and prepare it for uploading;
RECORD_DATA_MSG is used to store data and prepare to upload it after a preset delay;
ALIGN_DATA_MSG is used to delete local data that has already been uploaded; INIT_DATA_MSG is used to initialize the log thread, read buried point data, and upload it to the cloud.
Illustratively, when the buried point function is initialized, the fourth message, INIT_DATA_MSG, is sent; after the message center inside the buried point function receives this message, the buried point thread is created to process the buried point data and decide whether to upload it to the remote end. A message center is used in this embodiment because some behaviors need to be triggered with a delay, and the handler of the Android system processes such messages efficiently and conveniently, which is friendlier than the traditional approach of putting threads to sleep.
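The following sketch shows how such a message center could be built on an Android Handler bound to the buried point sub-thread. The four message names are taken from the text above; the numeric codes, constructor shape, and Runnable callbacks are assumptions made for illustration.

    import android.os.Handler;
    import android.os.HandlerThread;
    import android.os.Message;

    // Sketch of the message center inside the buried point function; only the four message
    // names come from the text, the numeric codes and handler body are assumptions.
    class BuriedPointMessageCenter {
        static final int INIT_DATA_MSG = 1;      // initialize the log thread, read buried point data
        static final int RECORD_DATA_MSG = 2;    // store data, upload after a preset delay
        static final int UPLOAD_DIRECT_MSG = 3;  // read local data and prepare for upload
        static final int ALIGN_DATA_MSG = 4;     // delete locally stored data that was uploaded

        private final Handler handler;

        // workerThread must already be started so that getLooper() returns its Looper.
        BuriedPointMessageCenter(HandlerThread workerThread, Runnable init, Runnable record,
                                 Runnable upload, Runnable delete) {
            // The handler runs on the buried point sub-thread, so delayed triggers do not
            // require putting any thread to sleep.
            handler = new Handler(workerThread.getLooper()) {
                @Override public void handleMessage(Message msg) {
                    switch (msg.what) {
                        case INIT_DATA_MSG:      init.run();   break;
                        case RECORD_DATA_MSG:    record.run(); break;
                        case UPLOAD_DIRECT_MSG:  upload.run(); break;
                        case ALIGN_DATA_MSG:     delete.run(); break;
                    }
                }
            };
        }

        // Forward a message, optionally after a preset delay in milliseconds.
        void send(int what, long delayMillis) {
            handler.sendMessageDelayed(handler.obtainMessage(what), delayMillis);
        }
    }

Using sendMessageDelayed for the "store then upload after a preset delay" behavior matches the text's preference for the Android handler over sleeping threads.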
In an embodiment of the present application, the buried point data uploading flow is described in detail. Referring to fig. 5, fig. 5 is a schematic diagram of a buried point data uploading flow according to an exemplary embodiment of the present application. The blocking queue LinkedBlockingQueue performs synchronization-lock blocking when the network is unavailable, so that data loss is avoided, and one piece of data is extracted from the blocking queue LinkedBlockingQueue when the network is available. When a preset uploading condition is met or the APP is closed, the local buried point data stored in the ConcurrentLinkedQueue is uploaded to the cloud; alternatively, the buried point data is stored into the corresponding ConcurrentLinkedQueue according to its data type, and the buried point data in that ConcurrentLinkedQueue is uploaded to the cloud once the preset upload quantity is reached. The above steps loop in the sub-thread until the APP is closed.
It should be noted that although the steps of the methods in the present application are depicted in the accompanying drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all illustrated steps be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
The following describes an embodiment of the apparatus of the present application, which may be used to execute the buried point data uploading method in the foregoing embodiment of the present application. Referring to fig. 6, fig. 6 is a schematic structural diagram of a buried point data uploading apparatus according to an exemplary embodiment of the present application. The buried point data uploading device comprises an initialization module 610, a first storage module 620, a second storage module 630 and a buried point data uploading module 640.
The initialization module 610 is configured to initialize the buried point in response to an application start instruction to obtain an initialized buried point;
the first storage module 620 is configured to acquire, through the initialized buried point, buried point data and buried point data types generated while the application runs, and to store the buried point data in the blocking queue;
the second storage module 630 is configured to acquire the current network environment and, when the current network environment meets a preset condition, to store the buried point data from the blocking queue into the concurrent queue according to the preset correspondence between buried point data types and concurrent queues;
and the buried point data uploading module 640 is configured to upload the buried point data in the concurrent queue that satisfies the preset uploading condition to the cloud.
It should be noted that, the apparatus provided in the foregoing embodiments and the method provided in the foregoing embodiments belong to the same concept, and the specific manner in which each module and unit perform the operation has been described in detail in the method embodiments, which is not repeated herein.
In an embodiment of the present application, there is further provided an electronic device including one or more processors, and a storage device, where the storage device is configured to store one or more programs, and when the one or more programs are executed by the one or more processors, cause the electronic device to implement the buried point data uploading method as described above.
Fig. 7 shows a schematic diagram of a computer system suitable for use in implementing the electronic device of the embodiments of the present application.
It should be noted that, the computer system 700 of the electronic device shown in fig. 7 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 7, the computer system 700 includes a central processing unit (Central Processing Unit, CPU) 701 that can perform various appropriate actions and processes, such as performing the methods in the above-described embodiments, according to a program stored in a Read-Only Memory (ROM) 702 or a program loaded from a storage section 708 into a random access Memory (Random Access Memory, RAM) 703. In the RAM 703, various programs and data required for the system operation are also stored. The CPU 701, ROM 702, and RAM 703 are connected to each other through a bus 704. An Input/Output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and the like, a speaker, and the like; a storage section 708 including a hard disk or the like; and a communication section 709 including a network interface card such as a LAN (Local Area Network ) card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. The drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read therefrom is mounted into the storage section 708 as necessary.
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising a computer program for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 709, and/or installed from the removable medium 711. When executed by a Central Processing Unit (CPU) 701, performs the various functions defined in the system of the present application.
It should be noted that, the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with a computer-readable computer program embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. A computer program embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Where each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software, or may be implemented by means of hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
In an embodiment of the present application, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a buried point data uploading method as before. The computer-readable storage medium may be contained in the terminal device described in the above embodiment or may exist alone without being incorporated in the terminal device.
In an embodiment of the present application, there is also provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the buried point data uploading method provided in the above embodiments.
In an embodiment of the present application, there is also provided a computer system including a central processing unit (Central Processing Unit, CPU) that can perform various appropriate actions and processes, such as performing the methods in the above embodiments, according to a program stored in a Read-Only Memory (ROM) or a program loaded from a storage section into a random access Memory (Random Access Memory, RAM). In the RAM, various programs and data required for the system operation are also stored. The CPU, ROM and RAM are connected to each other by a bus. An Input/Output (I/O) interface is also connected to the bus.
The following components are connected to the I/O interface: an input section including a keyboard, a mouse, etc.; an output section including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and the like, and a speaker, and the like; a storage section including a hard disk or the like; and a communication section including a network interface card such as a LAN (Local Area Network ) card, a modem, or the like. The communication section performs communication processing via a network such as the internet. The drives are also connected to the I/O interfaces as needed. Removable media such as magnetic disks, optical disks, magneto-optical disks, semiconductor memories, and the like are mounted on the drive as needed so that a computer program read therefrom is mounted into the storage section as needed.
The foregoing is merely a preferred exemplary embodiment of the present application and is not intended to limit the embodiments of the present application, and those skilled in the art may make various changes and modifications according to the main concept and spirit of the present application, so that the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (7)

1. A method for uploading buried point data, the method comprising:
initializing the buried point in response to an application start instruction to obtain an initialized buried point;
acquiring, through the initialized buried point, buried point data and buried point data types generated while the application runs, and storing the buried point data in a blocking queue;
acquiring a current network environment and, when the current network environment meets a preset condition, storing the buried point data from the blocking queue into the concurrent queue according to a preset correspondence between buried point data types and concurrent queues;
uploading the buried point data in the concurrent queue that satisfies a preset uploading condition to a cloud;
initializing the buried point to obtain the initialized buried point includes:
initializing a buried point instance, wherein the buried point instance is used to respond to external control instructions;
creating a current network environment receiving object, wherein the current network environment receiving object is used to receive the current network environment of the application in real time;
creating a data storage queue, wherein the data storage queue comprises the blocking queue and the concurrent queue;
creating a buried point program, wherein the buried point program is used to control the storage and uploading of buried point data generated while the application runs;
and obtaining the initialized buried point from the initialized buried point instance, the current network environment receiving object, the data storage queue, and the buried point program;
acquiring the current network environment and, when the current network environment meets the preset condition, storing the buried point data from the blocking queue into the concurrent queue according to the preset correspondence between buried point data types and concurrent queues includes:
acquiring the current network environment from the buried point sub-thread at a preset time interval;
if the current network environment meets the preset condition, storing the buried point data from the blocking queue into the concurrent queue according to the preset correspondence between buried point data types and concurrent queues;
if the current network environment does not meet the preset condition, blocking the buried point main thread, and when the current network environment changes so that it meets the preset condition, removing the thread blocking and storing the buried point data from the blocking queue into the concurrent queue;
uploading the buried point data in the concurrent queue that satisfies the preset uploading condition to the cloud includes:
traversing the buried point data in the concurrent queue according to the preset uploading condition and screening out the buried point data that satisfies it;
and storing the screened buried point data in a preset object and uploading the object to the cloud through a public network framework.
2. The method for uploading buried point data according to claim 1, wherein after the buried point data in the concurrent queue that satisfies the preset uploading condition is uploaded to the cloud, the method further comprises:
and deleting the buried point data in the blocking queue and the concurrent queue in response to an application program closing instruction so as to release the storage space in the blocking queue and the concurrent queue.
3. The method for uploading buried point data according to claim 1, wherein after the buried point is initialized and the initialized buried point is obtained, the method further comprises:
if buried point data left over from the previous run of the application exists at the initialized buried point, storing the leftover buried point data in the blocking queue.
4. A method of uploading buried point data according to any of claims 1 to 3, further comprising:
when the buried point is initialized, sending an initialization message to the message center inside the buried point function, the message being used to control a sub-thread to perform buried point initialization;
when buried point data is stored, the message center inside the buried point function forwards a data storage message used to control a sub-thread to store the buried point data and to upload it after a preset delay;
when buried point data is uploaded, the message center inside the buried point function forwards a data uploading message, and the controlled sub-thread uploads the buried point data in the concurrent queue that satisfies the preset uploading condition to the cloud;
after the buried point data in the concurrent queue that satisfies the preset uploading condition has been uploaded to the cloud, the message center inside the buried point function forwards a data deletion message used to control the sub-thread to delete the buried point data in the blocking queue and the concurrent queue.
5. A buried point data uploading apparatus, the apparatus comprising:
the initialization module is used for responding to the starting instruction of the application program and initializing the buried point to obtain an initialized buried point;
the first storage module is used for acquiring buried point data and buried point data types in the running process of the application program through the initialized buried points and storing the buried point data in the blocking queue;
the second storage module is used for acquiring a current network environment, and storing the buried data in the blocking queue in the concurrency queue according to the corresponding relation between the preset buried data type and the concurrency queue when the current network environment meets the preset condition;
The embedded point data uploading module is used for uploading embedded point data meeting preset uploading conditions in the concurrent queue to the cloud;
initializing the buried point to obtain an initialized buried point, including:
initializing a buried point instance, wherein the buried point instance is used for responding to an external control instruction;
creating a current network environment receiving object, wherein the current network environment receiving object is used for receiving the current network environment of the application program in real time;
creating a data storage queue, wherein the data storage queue comprises a blocking queue and a concurrency queue;
creating a buried point program, wherein the buried point program is used for controlling the storage and uploading of buried point data generated in the running process of the application program;
obtaining the initialized buried point according to the initialized buried point instance, the current network environment receiving object, the storage queue and the buried point program;
wherein acquiring the current network environment and, when the current network environment meets the preset condition, storing the buried point data in the blocking queue into the concurrent queue according to the preset corresponding relation between the buried point data type and the concurrent queue comprises:
acquiring the current network environment from the buried point sub-thread at a preset time interval;
if the current network environment meets the preset condition, storing the buried point data in the blocking queue into the concurrent queue according to the preset corresponding relation between the buried point data type and the concurrent queue;
if the current network environment does not meet the preset condition, blocking the buried point main thread; when the current network environment changes to meet the preset condition, removing the thread blocking and storing the buried point data in the blocking queue into the concurrent queue;
wherein uploading the buried point data meeting the preset uploading condition in the concurrent queue to the cloud comprises:
traversing the buried point data in the concurrent queue according to the preset uploading condition, and screening out the buried point data meeting the preset uploading condition;
storing the buried point data meeting the preset uploading condition in a preset object, and uploading the object to the cloud through a common network framework.
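Taken together, the storage and uploading steps of claim 5 suggest a two-level pipeline. The sketch below is one assumed Java arrangement (Java 17 record syntax): events land in a blocking queue, are moved into per-type concurrent queues once the network condition holds, and eligible events are screened out and handed to a network sender. Every identifier, including the Event record and the Predicate-based uploading condition, is hypothetical:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Hypothetical two-level pipeline: blocking queue -> per-type concurrent queues -> cloud.
final class BuriedPointPipeline {
    record Event(String type, String payload) {}

    private final BlockingQueue<Event> blockingQueue = new LinkedBlockingQueue<>();
    private final Map<String, Queue<Event>> concurrentQueues = new ConcurrentHashMap<>();

    // Newly collected buried point data are buffered in the blocking queue first.
    void collect(Event e) {
        blockingQueue.offer(e);
    }

    // When the current network environment meets the preset condition, drain the
    // blocking queue into the concurrent queue that corresponds to each data type.
    void onNetworkCheck(boolean networkConditionMet) {
        if (!networkConditionMet) {
            return;  // the patent instead blocks the main thread until the network recovers
        }
        List<Event> drained = new ArrayList<>();
        blockingQueue.drainTo(drained);
        for (Event e : drained) {
            concurrentQueues
                .computeIfAbsent(e.type(), t -> new ConcurrentLinkedQueue<>())
                .offer(e);
        }
    }

    // Traverse the concurrent queues, screen out events meeting the preset uploading
    // condition, pack them into one batch object and hand it to the network layer.
    void upload(Predicate<Event> uploadCondition, Consumer<List<Event>> networkSender) {
        List<Event> batch = new ArrayList<>();
        for (Queue<Event> q : concurrentQueues.values()) {
            for (Iterator<Event> it = q.iterator(); it.hasNext(); ) {
                Event e = it.next();
                if (uploadCondition.test(e)) {
                    batch.add(e);
                    it.remove();  // removed here for brevity; the patent deletes only after a successful upload
                }
            }
        }
        if (!batch.isEmpty()) {
            networkSender.accept(batch);  // e.g. a shared HTTP client posting the batch to the cloud
        }
    }
}
```

Splitting the storage into a LinkedBlockingQueue for collection and one ConcurrentLinkedQueue per data type mirrors the blocking/concurrent split in the claim: the collection side can keep buffering while no network is available, and the upload side can be traversed without locking the producers.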
6. An electronic device, the electronic device comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement the buried point data uploading method according to any one of claims 1 to 4.
7. A computer-readable medium, characterized in that a computer program is stored thereon, wherein the computer program, when executed by a processor, implements the buried point data uploading method according to any one of claims 1 to 4.
CN202210744360.7A 2022-06-27 2022-06-27 Buried point data uploading method, device, equipment and medium Active CN115134352B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210744360.7A CN115134352B (en) 2022-06-27 2022-06-27 Buried point data uploading method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN115134352A CN115134352A (en) 2022-09-30
CN115134352B (en) 2023-06-20

Family

ID=83379941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210744360.7A Active CN115134352B (en) 2022-06-27 2022-06-27 Buried point data uploading method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN115134352B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115941669A (en) * 2022-11-22 2023-04-07 中国第一汽车股份有限公司 Multi-application buried point data uploading method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112418580A (en) * 2019-08-22 2021-02-26 上海哔哩哔哩科技有限公司 Risk control method, computer equipment and readable storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110825731A (en) * 2019-09-18 2020-02-21 平安科技(深圳)有限公司 Data storage method and device, electronic equipment and storage medium
CN111309550A (en) * 2020-02-05 2020-06-19 江苏满运软件科技有限公司 Data acquisition method, system, equipment and storage medium of application program
CN111752803A (en) * 2020-06-28 2020-10-09 厦门美柚股份有限公司 Method, device and medium for collecting and reporting buried point data
CN114185776A (en) * 2021-11-30 2022-03-15 平安付科技服务有限公司 Big data point burying method, device, equipment and medium for application program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Improvement of traditional ETL based on Kafka and Disruptor technologies; Wang Zi; Liang Zhenghe; Wu Yingying; Computer Technology and Development (No. 11); full text *

Also Published As

Publication number Publication date
CN115134352A (en) 2022-09-30

Similar Documents

Publication Publication Date Title
US10521393B2 (en) Remote direct memory access (RDMA) high performance producer-consumer message processing
CN108108286A (en) Method of data capture and device, server, storage medium
US20190138375A1 (en) Optimization of message oriented middleware monitoring in heterogenenous computing environments
CN115134352B (en) Buried point data uploading method, device, equipment and medium
CN106933589B (en) Message queue assembly based on configuration and integration method thereof
CN105354090B (en) The management method and device of virtual unit
CN109150956A (en) A kind of implementation method, device, equipment and computer storage medium pushing SDK
CN111158779B (en) Data processing method and related equipment
CN110868324A (en) Service configuration method, device, equipment and storage medium
CN113051055A (en) Task processing method and device
CN111966508A (en) Message batch sending method and device, computer equipment and storage medium
CN111596864A (en) Method, device, server and storage medium for data delayed deletion
US8498622B2 (en) Data processing system with synchronization policy
CN116107988A (en) Vehicle-mounted log system, vehicle-mounted log storage method, device, medium and vehicle
CN109254856A (en) Intelligent POS server-side provides interface to the method for client
CN109189705A (en) A kind of usb expansion method, apparatus, equipment, storage medium and system
CN115134254A (en) Network simulation method, device, equipment and storage medium
CN106484536B (en) IO scheduling method, device and equipment
CN110365839A (en) Closedown method, device, medium and electronic equipment
CN115794353B (en) Cloud network service quality optimization processing method, device, equipment and storage medium
CN111367751B (en) End-to-end data monitoring method and device
US20230288786A1 (en) Graphic user interface system for improving throughput and privacy in photo booth applications
CN113391896B (en) Task processing method and device, storage medium and electronic equipment
WO2023201648A1 (en) File operation apparatus, computer device and operation device
CN109918209B (en) Method and equipment for communication between threads

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant