CN113515368B - Data integration method combining big data and edge calculation and storage medium - Google Patents

Info

Publication number
CN113515368B
CN113515368B (application CN202110396750.5A)
Authority
CN
China
Prior art keywords
service
information
data
processing result
edge computing
Prior art date
Legal status
Active
Application number
CN202110396750.5A
Other languages
Chinese (zh)
Other versions
CN113515368A (en)
Inventor
陈顺发
Current Assignee
Xiamen Jikuai Technology Co ltd
Original Assignee
Xiamen Jikuai Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xiamen Jikuai Technology Co ltd
Priority to CN202110396750.5A
Publication of CN113515368A
Application granted
Publication of CN113515368B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061: Partitioning or combining of resources
    • G06F 9/5072: Grid computing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00: Indexing scheme relating to G06F 9/00
    • G06F 2209/50: Indexing scheme relating to G06F 9/50
    • G06F 2209/502: Proximity

Abstract

The disclosed data integration method and storage medium combining big data and edge computing first acquire the service processing results uploaded by target edge computing node devices; then, by looking up a pre-established service data label distribution, generate for each service processing result the corresponding service transmission information and the transmission path information among the service transmission information; next, screen the transmission path information based on device log texts extracted from each target edge computing node device to determine a target path parameter; and finally integrate the service processing results according to the target path parameter to obtain a comprehensive processing result. In this way, integration of the service processing results is moved to the cloud, giving a clear division of labor between service data processing and service result integration: the edge computing node devices process the service data efficiently, while the cloud big data center effectively integrates the service processing results they report.

Description

Data integration method combining big data and edge calculation and storage medium
The present application is a divisional of application CN202010853210.0, filed on 23 Aug 2020 and entitled "Data integration method based on big data and edge calculation and cloud big data center".
Technical Field
The present disclosure relates to the field of big data and edge computing technologies, and in particular, to a data integration method and a storage medium combining big data and edge computing.
Background
With the development of science and technology, both the scale and the variety of service data (such as industrial production data, smart city monitoring data, autonomous vehicle control data, and intelligent medical scheduling data) keep growing, which poses great data processing challenges to the traditional cloud computing model and often overloads it.
To mitigate this problem, the prior art has gradually shifted from the cloud computing model to an edge computing model. On the one hand, this pushes a large amount of service data processing to the edge, reducing the load on the cloud server; on the other hand, it allows large volumes of service data to be processed in a distributed manner, improving the efficiency and timeliness of data processing.
However, the inventor finds that existing edge-computing-based service data processing methods struggle to achieve effective integration of service data.
Disclosure of Invention
To address the technical problem that effective integration of service data is difficult to achieve in the related art, the present disclosure provides a data integration method combining big data and edge computing, and a storage medium.
In a first aspect, a data integration method combining big data and edge computing is provided, and is applied to a cloud big data center, and the method includes the following steps:
acquiring service processing results uploaded by any two target edge computing node devices among a plurality of edge computing node devices, each result being generated after the device processes the to-be-processed service data issued to it;
respectively generating service transmission information corresponding to each service processing result and transmission path information among the service transmission information aiming at each service processing result by searching pre-established service data label distribution;
screening the transfer path information based on an equipment log text extracted from each target edge computing node equipment to determine target path parameters which do not change with the updating of each group of equipment log texts in the transfer path information;
and integrating the service processing results according to the target path parameters to obtain a comprehensive processing result.
Alternatively, the service data label distribution is established by:
acquiring service behavior track information respectively recorded by two target edge computing node devices and service interaction data of the corresponding target edge computing node devices in each service behavior track information;
detecting a behavior track pointing parameter of each service behavior track information; fitting the behavior track pointing parameters of the business behavior track information through the time sequence characteristic matching of the behavior track pointing parameters to obtain a queue label of a business behavior queue of the business behavior track information;
aiming at each service behavior track information, determining the service type of target edge computing node equipment for recording the service behavior track information based on a queue label of a service behavior queue of the service behavior track information; respectively performing service mapping on each service behavior track information according to the service type of each target edge computing node device so as to generate a service data mapping path corresponding to the service behavior track information in the cloud big data center;
clustering all the service data mapping paths to obtain path network distribution maps corresponding to all the service data mapping paths in the cloud big data center;
according to the service interaction data of each target edge computing node device, obtaining a path label list in the path network distribution diagram; and extracting the service data labels from the path network distribution map based on the path label list to obtain service data label distribution.
Alternatively, according to the target path parameter, integrating the service processing results to obtain a comprehensive processing result, including:
listing the target path parameter relative to the service data logic information of each service processing result; the service data logic information comprises a logic relation of a corresponding service processing result on a control thread corresponding to the cloud big data center;
and performing iterative integration on each service processing result according to the logic priority of the service data logic information and the correlation coefficient between the service data logic information to obtain a comprehensive processing result.
Alternatively, iteratively integrating the service processing results according to the logic priority of the service data logic information and the correlation coefficient between the service data logic information to obtain a comprehensive processing result, including:
determining a target sorting sequence of the logic priority, wherein the service data logic information corresponding to the target sorting sequence is service data logic information of which the correlation coefficient is greater than a set coefficient value and the difference between the logic priority and the median of all the logic priorities is not less than a preset difference;
switching the coding script of the logic coding data of the business data logic information corresponding to the target sequencing sequence into a target coding script corresponding to the cloud big data center;
judging whether the business data logic information corresponding to the target sequencing sequence has iteration weight or not according to the target coding script; if the business data logic information corresponding to the target sorting sequence has the iteration weight, calibrating the correlation coefficient among all the business processing results according to the magnitude sequence of the iteration weight to obtain a plurality of first business processing results corresponding to the calibrated correlation coefficient and a plurality of second business processing results which are not calibrated;
and iterating the first service processing result based on the iteration weight, adding the service processing result with the maximum service influence degree in the second service processing results to the iteration process in each iteration process, and adding the service processing result with the minimum service influence degree in the second service processing results to the next iteration process in the next iteration process until the cross iteration between the first service processing result and the second service processing result is completed to obtain the comprehensive processing result.
Alternatively, the screening the transfer path information based on the device log text extracted from each target edge computing node device to determine a target path parameter that does not change with the update of each group of device log texts in the transfer path information, further includes:
sending a text extraction request carrying a request field and a first verification result to each target edge computing node device; the first verification result is obtained by the cloud big data center performing cyclic redundancy check calculation on the request field based on a pre-stored first dynamic random number and a first identity verification code; when an authorization instruction fed back by the target edge computing node device based on the request field is received, accessing a set storage area corresponding to the target edge computing node device and acquiring a device log text corresponding to the target edge computing node device from the set storage area; wherein the target edge computing node device feeds back the authorization instruction by: determining a second dynamic random number corresponding to the first dynamic random number and a second identity check code corresponding to the first identity check code according to an authentication relationship established with the cloud big data center in advance, performing cyclic redundancy check calculation on the request field by adopting the second dynamic random number and the second identity check code to obtain a second check result, and feeding back the authorization instruction to the cloud big data center when the first check result is consistent with the second check result;
determining text updating information corresponding to each group of device log texts, constructing a text updating list based on the determined text updating information, and mapping list structure information of the text updating list into a preset coordinate plane in a graph data form so as to draw a graph data set of the text updating list in the coordinate plane;
performing feature extraction on each graph data node in the graph data set to obtain a plane feature corresponding to each graph data node, performing model parameter adjustment on a preset identification model according to the feature dimension of the plane feature, and identifying the plane feature by adopting the identification model with the adjusted model parameter to obtain a plurality of feature clusters;
calculating the clustering index weight of each feature cluster, defining the feature cluster corresponding to the maximum clustering index weight as a dynamic feature cluster, and respectively determining text regions corresponding to the updatable texts in each group of equipment log texts according to the dynamic feature cluster; and judging whether each group of path parameters in the transmission path information are converged in each text region or not, and determining the path parameters converged in each text region in the transmission path information as target path parameters.
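The final screening above can be sketched minimally as follows. This is an illustrative assumption, not the patented implementation: the feature cluster with the largest clustering index weight is taken as the dynamic feature cluster, and text regions are modeled as simple 1-D intervals in which path parameters must converge.

```python
# Hypothetical sketch of the screening steps above. Treating text regions as
# (low, high) intervals and parameters as numbers is an assumption made only
# for illustration.

def pick_dynamic_cluster(clusters, weights):
    # The cluster paired with the maximum clustering index weight is the
    # dynamic feature cluster.
    return max(zip(weights, clusters))[1]

def target_path_parameters(path_params, text_regions):
    """Keep only path parameters that converge inside every text region,
    i.e. the parameters that stay stable as each group of device logs updates."""
    return [p for p in path_params
            if all(low <= p <= high for low, high in text_regions)]
```

For example, with regions (0, 6) and (2, 10), only parameters lying in both intervals survive as target path parameters.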
Alternatively, by searching for pre-established service data label distribution, service delivery information corresponding to each service processing result and delivery path information between each service delivery information are respectively generated for each service processing result, which specifically includes:
determining processing evaluation labels corresponding to the service processing results from the service data label distribution;
judging whether each processing evaluation label is matched with a stability label of the service interaction stability of the corresponding service processing result, if so, generating service transmission information corresponding to the service processing result according to mapping information of the processing evaluation label in the service processing result;
determining an information packet with periodically changed information capacity among the service transmission information, extracting an information set with at least two address identifiers in the information packet, and integrating the information set according to the time sequence priority of the service transmission information to obtain the transmission path information among the service transmission information.
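The last step above — extracting information sets with at least two address identifiers and ordering them by time-sequence priority — can be sketched as below. The dict field names (`addresses`, `priority`) are assumed for illustration; the patent does not disclose a concrete data layout.

```python
# Hedged sketch: filter information sets by address-identifier count, then
# order by time-sequence priority to form the transmission path information.

def build_transmission_path_info(info_sets):
    selected = [s for s in info_sets if len(s["addresses"]) >= 2]
    return sorted(selected, key=lambda s: s["priority"])
```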
Alternatively, obtaining a service processing result, which is generated after processing issued to-be-processed service data and uploaded by any two target edge computing node devices in the plurality of edge computing node devices respectively, includes:
judging whether the protocol text similarity between a first data transmission protocol of one target edge computing node device and a second data transmission protocol of another target edge computing node device is greater than a set similarity or not;
when the protocol text similarity is greater than the set similarity, acquiring a service processing result of one target edge computing node device according to a first set transceiving frequency and acquiring a service processing result of the other target edge computing node device according to a second set transceiving frequency; wherein the first set transceiving frequency and the second set transceiving frequency are complementary.
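The acquisition rule above can be illustrated with a minimal sketch. Both choices here are assumptions: `SequenceMatcher` stands in for the unspecified protocol-text similarity measure, and "complementary" transceiving frequencies are read as alternating, non-overlapping time slots.

```python
# Hypothetical sketch of the two-device acquisition rule; similarity measure
# and slot scheme are assumptions, not disclosed by the patent.
from difflib import SequenceMatcher

def protocol_similarity(text_a: str, text_b: str) -> float:
    return SequenceMatcher(None, text_a, text_b).ratio()

def complementary_slots(n_slots: int):
    """Device A transmits in even slots, device B in odd ones, so the two
    upload schedules never overlap."""
    return ([t for t in range(n_slots) if t % 2 == 0],
            [t for t in range(n_slots) if t % 2 == 1])
```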
In a second aspect, a cloud big data center is provided, where the cloud big data center includes a data integration device, and modules deployed in the data integration device implement the above method when running.
In a third aspect, a cloud big data center is provided, where the cloud big data center includes a processor and a memory that communicate with each other; wherein:
the memory is used for storing a computer program;
the processor is used for reading the computer program from the memory and executing the computer program to realize the method.
In a fourth aspect, a storage medium for a computer is provided, on which a computer program is stored, which computer program realizes the above-mentioned method when running.
The technical scheme provided by the embodiment of the disclosure can include the following beneficial effects.
Firstly, the service processing results uploaded by the target edge computing node devices are acquired; then, by looking up the pre-established service data label distribution, corresponding service transmission information and the transmission path information among the service transmission information are generated for each service processing result; next, the transmission path information is screened based on the device log texts extracted from each target edge computing node device to determine the target path parameters; and finally, the service processing results are integrated according to the target path parameters to obtain a comprehensive processing result. In this way, integration of the service processing results is moved to the cloud, achieving a clear division of labor between service data processing and service result integration: the edge computing node devices process the service data efficiently, while the cloud big data center effectively integrates the service processing results they report. The comprehensive processing result corresponding to the to-be-processed service data can therefore be analyzed globally.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic illustration of an implementation environment according to the present disclosure;
FIG. 2 is a flow diagram illustrating a method of data integration that combines big data and edge computation in accordance with an exemplary embodiment;
FIG. 3 is a flow diagram illustrating a data consolidation apparatus that combines big data and edge computation in accordance with an exemplary embodiment;
fig. 4 is a schematic diagram illustrating a hardware structure of a cloud big data center according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Regarding the technical problems mentioned in the background art, the inventors studied the existing edge computing architecture and found that its edge computing node devices are usually heterogeneous: one edge computing node device is incompatible with the service data of other edge computing devices, and the service data processed by each edge computing node device is allocated according to that device's own service scenario. As a result, effective integration of the service data processing results cannot be achieved on the edge computing node device side.
To solve the technical problem, embodiments of the present invention provide a data integration method and a storage medium combining big data and edge computing, which can cloud a service data processing result to realize clear division of service data processing and service data integration, that is: the edge computing node equipment efficiently processes the service data, and the cloud big data center effectively integrates the service data processing results reported by the edge computing node equipment.
To achieve the above solution, please refer to fig. 1, which is a schematic diagram of an architecture of a data integration system 100 combining big data and edge computing according to an embodiment of the present invention, wherein the data integration system 100 may include a cloud big data center 200 and a plurality of edge computing node devices 400. The cloud big data center 200 and each edge computing node device 400 are connected in communication, so as to implement the method described in the following steps S21-S24 shown in fig. 2.
Step S21, obtaining service processing results generated after processing the issued to-be-processed service data uploaded by any two target edge computing node devices in the plurality of edge computing node devices, respectively.
For example, the to-be-processed business data may be industrial production data, smart city monitoring data, vehicle automatic driving control data, intelligent medical scheduling data, and the like, which is not limited herein.
Step S22, by searching for the pre-established service data label distribution, service delivery information corresponding to the service processing result and delivery path information between the service delivery information are respectively generated for each service processing result.
For example, the service data label distribution is used for representing the association relationship between the service data to be processed issued by the cloud big data center. The service delivery information is used for representing result-oriented description between service processing results. The transfer path information is used for characterizing the cascading priority among the service transfer information.
Step S23, screening the transfer path information based on the device log text extracted from each target edge computing node device, so as to determine a target path parameter that does not change with the update of each group of device log texts in the transfer path information.
For example, the device log text records the whole process of the target edge computing node device when processing the to-be-processed service data. The target path parameters are used for characterizing the path architecture of the transmission path information.
And step S24, integrating the service processing results according to the target path parameters to obtain a comprehensive processing result.
For example, the comprehensive processing result is used to represent the global processing result corresponding to the to-be-processed service data before it was distributed by the cloud big data center.
Further, based on the content described in the above steps S21 to S24, the service processing results uploaded by the target edge computing node devices are first acquired; then, corresponding service delivery information and the delivery path information between the service delivery information are generated for each service processing result by looking up the pre-established service data label distribution; next, the delivery path information is screened based on the device log texts extracted from each target edge computing node device to determine the target path parameters; and finally, the service processing results are integrated according to the target path parameters to obtain a comprehensive processing result. In this way, integration of the service processing results is moved to the cloud, achieving a clear division of labor: the edge computing node devices process the service data efficiently, and the cloud big data center effectively integrates the service processing results they report, so that the comprehensive processing result corresponding to the to-be-processed service data can be analyzed globally.
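The flow of steps S21 to S24 can be summarized with a minimal sketch. Every function and field name here is an illustrative assumption; the patent does not disclose concrete data structures.

```python
# Hypothetical end-to-end sketch of steps S21-S24; data shapes are assumed.

def integrate_results(results, label_distribution, device_logs):
    """results: {result_id: payload} uploaded by the edge node devices (S21)."""
    # S22: look up delivery information for each result in the pre-established
    # service data label distribution.
    delivery_info = {rid: label_distribution.get(rid, "default") for rid in results}

    # S23: screen out path parameters that change when any device log updates,
    # keeping only the stable (target) ones.
    stable_ids = [rid for rid in delivery_info
                  if all(rid not in log.get("updated", ()) for log in device_logs)]

    # S24: integrate the results selected by the target path parameters.
    return {rid: results[rid] for rid in stable_ids}
```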
The cloud big data center 200 can be applied to not only a smart city, but also a smart medical service, a smart industrial park, and a smart industrial internet, and the data integration system 100 can be applied to scenes such as big data, cloud computing, and edge computing, including but not limited to new energy automobile system management, intelligent online office, intelligent online education, cloud game data processing, e-commerce live delivery processing, cloud internet processing, block chain digital financial currency service, block chain supply chain financial service, and the like, without limitation. It is understood that when applied to the above corresponding fields, the types of the service data are adjusted and further refined, and are not listed here.
In the implementation process, the inventor finds that in order to ensure the accuracy of the service delivery information and the delivery path information between the service delivery information, the integrity and the real-time performance of the service data label distribution need to be ensured. To achieve this, the service data label distribution in step S22 can be specifically realized by the following description of step S221 to step S225.
Step S221, acquiring service behavior trace information recorded by the two target edge computing node devices, respectively, and service interaction data of the corresponding target edge computing node device in each service behavior trace information.
Step S222, detecting behavior track pointing parameters of each piece of service behavior track information; and fitting the behavior track pointing parameters of the service behavior track information through time-sequence feature matching of the behavior track pointing parameters to obtain a queue label of a service behavior queue of the service behavior track information.
Step S223, aiming at each service behavior track information, determining the service type of the target edge computing node equipment for recording the service behavior track information based on the queue label of the service behavior queue of the service behavior track information; and respectively performing service mapping on the service behavior track information according to the service type of each target edge computing node device so as to generate a service data mapping path corresponding to the service behavior track information in the cloud big data center.
Step S224, clustering the service data mapping paths to obtain a path network distribution map corresponding to all service data mapping paths in the cloud big data center.
Step S225, a path label list in the path network distribution diagram is obtained according to the service interaction data of each target edge computing node device; and extracting the service data labels from the path network distribution map based on the path label list to obtain service data label distribution.
In this way, through the steps S221 to S225, the service data label distribution can be completely determined in real time based on the service behavior trace information recorded by the target edge computing node device, so as to ensure the accuracy of the service delivery information and the delivery path information between the service delivery information.
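Steps S221 to S225 can be sketched as follows. The trace tuple layout and the use of simple grouping in place of the clustering step are assumptions made only for illustration.

```python
# Illustrative sketch of building the service data label distribution
# (steps S221-S225); all data structures are assumed, not disclosed.
from collections import defaultdict

def build_label_distribution(traces):
    """traces: iterable of (device_id, service_type, interaction_tag)."""
    # S223/S224: map each trace to a mapping path keyed by service type and
    # group ("cluster") the paths into a distribution map.
    distribution_map = defaultdict(set)
    for device_id, service_type, _tag in traces:
        distribution_map[service_type].add(device_id)

    # S225: derive the path label list from the service interaction data and
    # extract the labels to form the final distribution.
    label_list = sorted({tag for _, _, tag in traces})
    return {svc: {"devices": sorted(devs), "labels": label_list}
            for svc, devs in distribution_map.items()}
```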
In a specific implementation, in order to ensure continuity of the comprehensive processing result in the data logic, the integration of the business processing results according to the target path parameter described in step S24 to obtain the comprehensive processing result may specifically include the contents described in step S241 and step S242 below.
Step S241, listing the target path parameter relative to the service data logic information of each service processing result; the service data logic information comprises a logic relation of a corresponding service processing result on a control thread corresponding to the cloud big data center.
Step S242, iteratively integrating the service processing results according to the logic priority of the service data logic information and the correlation coefficient between the service data logic information to obtain a comprehensive processing result.
By implementing the contents described in the above steps S241 and S242, the logic priority and the correlation coefficient of the service data logic information of the target path parameter can be analyzed, so as to implement iterative integration of the service processing results, and thus, the continuity of the comprehensive processing results on the data logic can be ensured.
In an implementation manner, the iteratively integrating the service processing results according to the logic priority of the service data logic information and the correlation coefficient between the service data logic information to obtain the comprehensive processing result, which is described in step S242, may further include what is described in the following step S2421 to step S2424.
Step S2421, determining a target sorting sequence of the logic priorities, where the service data logic information corresponding to the target sorting sequence is service data logic information whose correlation coefficient is greater than a set coefficient value and whose difference between the logic priority and the median of all logic priorities is not less than a preset difference.
Step S2422, switching the coding script of the logic coding data of the service data logic information corresponding to the target sorting sequence into a target coding script corresponding to the cloud big data center.
Step S2423, judging whether the business data logic information corresponding to the target sequencing sequence has iteration weight according to the target coding script; and if the business data logic information corresponding to the target sorting sequence has the iteration weight, calibrating the correlation coefficient among all the business processing results according to the magnitude sequence of the iteration weight to obtain a plurality of first business processing results corresponding to the calibrated correlation coefficient and a plurality of second business processing results which are not calibrated.
Step S2424, iterating the first service processing result based on the iteration weight, adding, in each iteration process, a service processing result with a largest service influence degree among the second service processing results to the iteration process, and adding, in a next iteration process, a service processing result with a smallest service influence degree among the second service processing results to the next iteration process, until cross iteration between the first service processing result and the second service processing result is completed to obtain the comprehensive processing result.
It can be understood that through the descriptions of the above steps S2421 to S2424, precise iteration on each business processing result can be realized to ensure the accuracy and reliability of the comprehensive processing result.
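The selection rule of step S2421 is concrete enough to sketch: keep the service data logic information whose correlation coefficient exceeds a set value and whose logic priority differs from the median of all priorities by at least a preset gap. The pairing of each item as a (priority, correlation) tuple is an assumption.

```python
# Hedged sketch of the target-sequence selection in step S2421.
from statistics import median

def select_target_sequence(items, coeff_min, diff_min):
    """items: list of (logic_priority, correlation_coefficient) pairs.
    Returns the indices forming the target sorting sequence."""
    med = median(priority for priority, _ in items)
    return [i for i, (priority, coeff) in enumerate(items)
            if coeff > coeff_min and abs(priority - med) >= diff_min]
```

With priorities 1, 5, 3 the median is 3, so only items whose priority is at least `diff_min` away from 3 (and whose correlation clears `coeff_min`) are selected.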
In an implementation manner, in order to screen out the target path parameters that do not carry the logical cluster field from the delivery path information, so as to ensure the parameter stability of the target path parameters, in step S23, the delivery path information is screened based on the device log text extracted from each target edge computing node device, so as to determine the target path parameters that do not change with the update of each set of device log text in the delivery path information, further including the details described in the following steps S231 to S234.
Step S231, a text extraction request carrying a request field and a first verification result is sent to each target edge computing node device; the first verification result is obtained by the cloud big data center performing cyclic redundancy check calculation on the request field based on a pre-stored first dynamic random number and a first identity verification code; when an authorization instruction fed back by the target edge computing node device based on the request field is received, accessing a set storage area corresponding to the target edge computing node device and acquiring a device log text corresponding to the target edge computing node device from the set storage area; wherein the target edge computing node device feeds back the authorization instruction by: and determining a second dynamic random number corresponding to the first dynamic random number and a second identity check code corresponding to the first identity check code according to an authentication relationship established with the cloud big data center in advance, performing cyclic redundancy check calculation on the request field by adopting the second dynamic random number and the second identity check code to obtain a second check result, and feeding back the authorization instruction to the cloud big data center when the first check result is consistent with the second check result.
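The verification exchange in step S231 can be illustrated with CRC-32: both sides mix the shared dynamic random number and identity check code into the request field and compare the resulting checksums. The byte layout and field values below are assumptions for illustration; the patent only requires that the two cyclic redundancy check results agree.

```python
# A hedged sketch of the authorization exchange in step S231. Field layout
# and byte encoding are illustrative assumptions.
import zlib

def crc_check(request_field: bytes, dynamic_random: int, identity_code: bytes) -> int:
    # Cyclic redundancy check over the request field, seeded with the
    # pre-shared dynamic random number and identity check code.
    payload = dynamic_random.to_bytes(8, "big") + identity_code + request_field
    return zlib.crc32(payload)

# Cloud big data center side (first check result).
first_result = crc_check(b"extract-device-log", 0x1234, b"node-key")

# Target edge computing node side (second check result, derived from the
# authentication relationship established in advance).
second_result = crc_check(b"extract-device-log", 0x1234, b"node-key")

# The authorization instruction is fed back only when the results match.
authorized = first_result == second_result
```

Because both checksums are computed from pre-shared secrets, a node holding the wrong random number or check code cannot produce a matching second check result.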
Step S232, determining text update information corresponding to each group of device log texts, constructing a text update list based on the determined text update information, mapping list structure information of the text update list to a preset coordinate plane in a graph data form, and drawing a graph data set of the text update list in the coordinate plane.
Step S233, performing feature extraction on each graph data node in the graph data set to obtain a plane feature corresponding to each graph data node, performing model parameter adjustment on a preset identification model according to a feature dimension of the plane feature, and identifying the plane feature by using the identification model with the adjusted model parameter to obtain a plurality of feature clusters.
Step S234, calculating the clustering index weight of each feature cluster, defining the feature cluster corresponding to the maximum clustering index weight as a dynamic feature cluster, and respectively determining text areas corresponding to the updatable texts in each group of equipment log texts according to the dynamic feature cluster; and judging whether each group of path parameters in the transmission path information are converged in each text region or not, and determining the path parameters converged in each text region in the transmission path information as target path parameters.
It can be understood that, by performing the steps S231 to S234, the target path parameters that do not carry the logical clustering field can be screened from the delivery path information, so as to ensure the parameter stability of the target path parameters.
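The screening in steps S233 and S234 can be sketched as follows, with two simplifying assumptions called out explicitly: the clustering index weight is taken to be the cluster size, and a text region is modelled as a value interval spanned by the dynamic feature cluster. Neither choice is fixed by the patent.

```python
# A simplified sketch of steps S233-S234: select the feature cluster with the
# largest clustering index weight and keep only path parameters that converge
# inside the text region it defines. The weight formula (cluster size) and
# the interval-shaped region are illustrative assumptions.

def screen_path_params(feature_clusters, path_params):
    # Clustering index weight: here, simply the number of features.
    dynamic = max(feature_clusters, key=len)
    # Text region derived from the dynamic feature cluster: a value interval.
    lo, hi = min(dynamic), max(dynamic)
    # A path parameter "converges" in the region if all its values lie inside.
    return [p for p in path_params if all(lo <= v <= hi for v in p)]

clusters = [[1.0, 2.0], [0.5, 1.5, 2.5, 3.5]]   # second is the dynamic cluster
params = [[1.0, 3.0], [0.1, 2.0], [2.0, 3.4]]
stable = screen_path_params(clusters, params)
```

Here the parameter group `[0.1, 2.0]` falls outside the region and is screened out, leaving only the stable target path parameters.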
In a specific embodiment, the step S22 of searching a pre-established service data label distribution and generating, for each service processing result, service delivery information corresponding to the service processing result and delivery path information between the pieces of service delivery information may further include the following steps S221 to S223.
Step S221, determining a processing evaluation tag corresponding to each service processing result from the service data tag distribution.
Step S222, determining whether each processing evaluation tag matches with a stability tag of the service interaction stability of the corresponding service processing result, and if so, generating service delivery information corresponding to the service processing result according to mapping information of the processing evaluation tag in the service processing result.
Step S223, determining an information packet with periodically changing information capacity between service delivery information, and extracting an information set with at least two address identifiers in the information packet to integrate according to the time sequence priority of the service delivery information to obtain the delivery path information between service delivery information.
In this way, based on the above steps S221 to S223, the confidence and the real-time performance of the traffic delivery information and the delivery path information can be ensured.
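Steps S221 to S223 can be condensed into the following sketch: delivery information is generated only when the processing evaluation tag matches the stability tag, and the delivery path is then integrated in order of time-sequence priority. All record fields (`eval_tag`, `stability_tag`, `mapping`, `priority`) are hypothetical names introduced for illustration.

```python
# An illustrative sketch of steps S221-S223; field names are hypothetical.

def build_delivery_path(results):
    deliveries = []
    for r in results:
        # Step S222: the tag match gates generation of delivery information.
        if r["eval_tag"] == r["stability_tag"]:
            deliveries.append({"mapping": r["mapping"], "priority": r["priority"]})
    # Step S223: integrate according to time-sequence priority (highest first).
    deliveries.sort(key=lambda d: d["priority"], reverse=True)
    return [d["mapping"] for d in deliveries]

results = [
    {"eval_tag": "stable", "stability_tag": "stable", "mapping": "A", "priority": 2},
    {"eval_tag": "stable", "stability_tag": "unstable", "mapping": "B", "priority": 9},
    {"eval_tag": "ok", "stability_tag": "ok", "mapping": "C", "priority": 5},
]
path = build_delivery_path(results)
```

Result "B" is discarded by the tag check despite its high priority, which mirrors how the matching step filters unreliable delivery information before path integration.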
In practical application, in order to ensure the accuracy of the obtained service processing results and avoid defects in them, the step S21 of obtaining the service processing results, generated after processing the issued to-be-processed service data, that are respectively uploaded by any two target edge computing node devices in the plurality of edge computing node devices may specifically include the contents described in the following steps S211 and S212.
Step S211, determining whether the protocol text similarity between the first data transmission protocol of one of the target edge computing node devices and the second data transmission protocol of another one of the target edge computing node devices is greater than a set similarity.
Step S212, when the similarity of the protocol text is greater than the set similarity, acquiring a service processing result of one target edge computing node device according to a first set transceiving frequency and acquiring a service processing result of another target edge computing node device according to a second set transceiving frequency; wherein the first set transceiving frequency and the second set transceiving frequency are complementary.
It can be understood that, through the above steps S211 and S212, the service processing result corresponding to the higher protocol text similarity can be obtained by using the complementary transceiving frequencies, so that the accuracy of the obtained service processing result can be ensured on the premise of ensuring the timeliness of obtaining the service processing result, and the defect of the service processing result is avoided.
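A sketch of steps S211 and S212 follows. The patent does not name a similarity metric, so `difflib` is used here as a stand-in, and the "complementary" transceiving frequencies are modelled as alternating time slots so the two devices are never polled in the same slot.

```python
# A hedged sketch of steps S211-S212. The similarity metric (difflib) and
# the slot-based model of complementary transceiving frequencies are
# illustrative assumptions.
import difflib

def plan_polling(proto_a: str, proto_b: str, slots: int, threshold: float = 0.8):
    similarity = difflib.SequenceMatcher(None, proto_a, proto_b).ratio()
    if similarity <= threshold:
        return None  # protocol text similarity not greater than the set value
    # Complementary schedules: device A on even slots, device B on odd slots.
    return [("device_a" if t % 2 == 0 else "device_b") for t in range(slots)]

schedule = plan_polling("TCP/TLS v1.3 frame", "TCP/TLS v1.3 frames", 4)
```

Because the two schedules interleave, each device's result is fetched at its own set transceiving frequency while the combined polling stays continuous, which is the timeliness benefit the step describes.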
In an alternative embodiment, in order to implement the service correction on the comprehensive processing result, on the basis of the steps S21-S24, the method may specifically further include the contents described in step S25: and carrying out service correction on the comprehensive processing result according to the result separation identifier in the comprehensive processing result. Therefore, the service correction can be carried out on the comprehensive processing result based on different result separation identifiers, and the reliability and the accuracy of the comprehensive processing result are ensured.
In a specific embodiment, the performing of the service correction on the integrated processing result according to the result separation flag in the integrated processing result described in step S25 may specifically include what is described in the following step S251 to step S253.
Step S251, sorting the result separation identifiers according to a magnitude order of a time sequence priority and a magnitude order of an authority priority, respectively, to obtain a first sorting sequence corresponding to the time sequence priority and a second sorting sequence corresponding to the authority priority.
Step S252, performing traversal comparison on the first sorted sequence and the second sorted sequence, that is, comparing each result separation identifier in the first sorted sequence with the result separation identifier at the same position in the second sorted sequence to obtain a comparison result.
Step S253, when the comparison result indicates that the sequence correlation coefficient between the first sequencing sequence and the second sequencing sequence is greater than a target coefficient, performing service correction on the comprehensive processing result by taking at least two result separation marks in the first sequencing sequence as references; when the comparison result represents that the sequence correlation coefficient between the first sorting sequence and the second sorting sequence is smaller than or equal to the target coefficient, performing service correction on the comprehensive processing result by taking at least two result separation marks in the second sorting sequence as references; the service correction comprises correction of the data credibility and the data valid time period of the comprehensive processing result.
It can be understood that, through the above steps S251 to S253, flexible service correction can be performed on the integrated processing result according to the result separation flag.
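The dual-sort comparison of steps S251 to S253 can be sketched as follows. The patent does not define the sequence correlation coefficient, so a simple position-agreement ratio is used here as an assumption, and the identifier fields are hypothetical.

```python
# A minimal sketch of steps S251-S253, using a position-agreement ratio as
# the "sequence correlation coefficient" (an illustrative assumption).

def choose_reference(identifiers, target_coeff=0.5):
    first = sorted(identifiers, key=lambda i: i["time_priority"])
    second = sorted(identifiers, key=lambda i: i["authority_priority"])
    # Traversal comparison: fraction of positions holding the same identifier.
    agree = sum(a["id"] == b["id"] for a, b in zip(first, second))
    corr = agree / len(identifiers)
    # Correct against the first sequence when the coefficient exceeds the
    # target coefficient, otherwise against the second sequence.
    reference = first if corr > target_coeff else second
    return [i["id"] for i in reference[:2]]  # at least two separation identifiers

ids = [
    {"id": "r1", "time_priority": 1, "authority_priority": 1},
    {"id": "r2", "time_priority": 2, "authority_priority": 3},
    {"id": "r3", "time_priority": 3, "authority_priority": 2},
]
refs = choose_reference(ids)
```

In this example the two orderings agree at only one of three positions, so the authority-priority sequence is selected as the reference for service correction.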
Based on a similar inventive concept, and referring to fig. 3, a data integration apparatus 300 combining big data and edge calculation is provided, which is applied to a cloud big data center and includes the following functional modules:

a processing result obtaining module 310, configured to obtain service processing results, which are generated after processing the issued to-be-processed service data and are respectively uploaded by any two target edge computing node devices in the multiple edge computing node devices;
a path information generating module 320, configured to search pre-established service data label distribution, and respectively generate service delivery information corresponding to each service processing result and delivery path information between each service delivery information for each service processing result;
a path parameter obtaining module 330, configured to screen the transfer path information based on an equipment log text extracted from each target edge computing node equipment, so as to determine a target path parameter that does not change with update of each group of equipment log texts in the transfer path information;
a processing result integration module 340, configured to integrate the processing results of the services according to the target path parameter to obtain a comprehensive processing result;
and a processing result correcting module 350, configured to perform service correction on the comprehensive processing result according to the result separation identifier in the comprehensive processing result.
Further, the path information generating module 320 is specifically configured to:
acquiring service behavior track information respectively recorded by two target edge computing node devices and service interaction data of the corresponding target edge computing node devices in each service behavior track information;
detecting a behavior track pointing parameter of each service behavior track information; matching the behavior trace pointing parameters of the business behavior trace information through the time sequence characteristic matching of the behavior trace pointing parameters to obtain a queue label of a business behavior queue of the business behavior trace information;
aiming at each service behavior track information, determining the service type of target edge computing node equipment for recording the service behavior track information based on a queue label of a service behavior queue of the service behavior track information; according to the service type of each target edge computing node device, service mapping is carried out on each service behavior track information, so that a service data mapping path corresponding to the service behavior track information is generated in the cloud big data center;
clustering all the service data mapping paths to obtain path network distribution maps corresponding to all the service data mapping paths in the cloud big data center;
according to the service interaction data of each target edge computing node device, obtaining a path label list in the path network distribution diagram; and extracting service data labels from the path network distribution map based on the path label list to obtain service data label distribution.
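The label-distribution construction performed by the path information generating module 320 can be illustrated as follows: mapping paths are clustered by the service type inferred from each track's behavior-queue label, and a label list is extracted per cluster. All field names and the clustering-by-label simplification are assumptions introduced for illustration.

```python
# An illustrative sketch of building the service data label distribution
# from clustered service data mapping paths; field names are hypothetical.
from collections import defaultdict

def build_label_distribution(track_infos):
    # Cluster service data mapping paths by the service type inferred from
    # each track's behavior-queue label (the path network distribution).
    network = defaultdict(list)
    for track in track_infos:
        network[track["queue_label"]].append(track["mapping_path"])
    # Extract a sorted path label list per cluster as the label distribution.
    return {label: sorted(paths) for label, paths in network.items()}

tracks = [
    {"queue_label": "billing", "mapping_path": "/svc/billing/2"},
    {"queue_label": "billing", "mapping_path": "/svc/billing/1"},
    {"queue_label": "audit", "mapping_path": "/svc/audit/1"},
]
dist = build_label_distribution(tracks)
```

The resulting mapping from queue label to path list plays the role of the service data label distribution that later steps search when generating delivery information.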
Further, the processing result integration module 340 is specifically configured to:
listing the target path parameter relative to the service data logic information of each service processing result; the service data logic information comprises a logic relation of a corresponding service processing result on a control thread corresponding to the cloud big data center;
and performing iterative integration on each service processing result according to the logic priority of the service data logic information and the correlation coefficient between the service data logic information to obtain a comprehensive processing result.
Further, the processing result integration module 340 is specifically configured to:
determining a target sorting sequence of the logic priorities, wherein the service data logic information corresponding to the target sorting sequence is service data logic information of which the correlation coefficient is greater than a set coefficient value and the difference between the logic priorities and the median of all the logic priorities is not less than a preset difference;
switching the coding script of the logic coding data of the business data logic information corresponding to the target sequencing sequence into a target coding script corresponding to the cloud big data center;
judging whether the business data logic information corresponding to the target sequencing sequence has an iteration weight or not according to the target coding script; if the business data logic information corresponding to the target sorting sequence has the iteration weight, calibrating the correlation coefficient among all the business processing results according to the magnitude sequence of the iteration weight to obtain a plurality of first business processing results corresponding to the calibrated correlation coefficient and a plurality of second business processing results which are not calibrated;
and iterating the first service processing result based on the iteration weight, adding the service processing result with the maximum service influence degree in the second service processing results to the iteration process in each iteration process, and adding the service processing result with the minimum service influence degree in the second service processing results to the next iteration process in the next iteration process until the cross iteration between the first service processing result and the second service processing result is completed to obtain the comprehensive processing result.
Further, the path parameter obtaining module 330 is configured to:
sending a text extraction request carrying a request field and a first verification result to each target edge computing node device; the first verification result is obtained by performing cyclic redundancy check calculation on the request field by the cloud big data center based on a pre-stored first dynamic random number and a first identity check code; when an authorization instruction fed back by the target edge computing node device based on the request field is received, accessing a set storage area corresponding to the target edge computing node device and acquiring a device log text corresponding to the target edge computing node device from the set storage area; wherein the target edge computing node device feeds back the authorization instruction by: determining a second dynamic random number corresponding to the first dynamic random number and a second identity check code corresponding to the first identity check code according to an authentication relationship established with the cloud big data center in advance, performing cyclic redundancy check calculation on the request field by adopting the second dynamic random number and the second identity check code to obtain a second check result, and feeding back the authorization instruction to the cloud big data center when the first check result is consistent with the second check result;
determining text updating information corresponding to each group of device log texts, constructing a text updating list based on the determined text updating information, and mapping list structure information of the text updating list into a preset coordinate plane in a graph data form so as to draw a graph data set of the text updating list in the coordinate plane;
performing feature extraction on each graph data node in the graph data set to obtain a plane feature corresponding to each graph data node, performing model parameter adjustment on a preset identification model according to the feature dimension of the plane feature, and identifying the plane feature by adopting the identification model with the adjusted model parameter to obtain a plurality of feature clusters;
calculating the clustering index weight of each feature cluster, defining the feature cluster corresponding to the maximum clustering index weight as a dynamic feature cluster, and respectively determining text regions corresponding to the updatable texts in each group of equipment log texts according to the dynamic feature cluster; and judging whether each group of path parameters in the transfer path information are converged in each text region or not, and determining the path parameters converged in each text region in the transfer path information as target path parameters.
Further, the path information generating module 320 is configured to:
determining processing evaluation labels corresponding to the service processing results from the service data label distribution;
judging whether each processing evaluation label is matched with a stability label of the service interaction stability of the corresponding service processing result, if so, generating service transmission information corresponding to the service processing result according to the mapping information of the processing evaluation label in the service processing result;
determining an information packet with periodically changed information capacity among the service transmission information, extracting an information set with at least two address identifiers in the information packet, and integrating the information set according to the time sequence priority of the service transmission information to obtain the transmission path information among the service transmission information.
Further, the processing result obtaining module 310 is specifically configured to:
judging whether the protocol text similarity between a first data transmission protocol of one target edge computing node device and a second data transmission protocol of another target edge computing node device is larger than a set similarity or not;
when the protocol text similarity is greater than the set similarity, acquiring a service processing result of one target edge computing node device according to a first set transceiving frequency and acquiring a service processing result of the other target edge computing node device according to a second set transceiving frequency; wherein the first set transceiving frequency and the second set transceiving frequency are complementary.
Further, the processing result correcting module 350 is specifically configured to:
sorting the result separation identifiers according to the magnitude sequence of the time sequence priority and the magnitude sequence of the authority priority respectively to obtain a first sorting sequence corresponding to the time sequence priority and a second sorting sequence corresponding to the authority priority;
traversing and comparing the first sequencing sequence and the second sequencing sequence, namely comparing each result separation identifier in the first sequencing sequence with the result separation identifier at the same position in the second sequencing sequence to obtain a comparison result;
when the comparison result indicates that the sequence correlation coefficient between the first sequencing sequence and the second sequencing sequence is greater than a target coefficient, performing service correction on the comprehensive processing result by taking at least two result separation marks in the first sequencing sequence as references; when the comparison result represents that the sequence correlation coefficient between the first sorting sequence and the second sorting sequence is smaller than or equal to the target coefficient, performing service correction on the comprehensive processing result by taking at least two result separation identifiers in the second sorting sequence as references; and the service correction comprises the correction of the data credibility and the data valid time period of the comprehensive processing result.
For the description of the processing result obtaining module 310, the path information generating module 320, the path parameter obtaining module 330, the processing result integrating module 340, and the processing result correcting module 350, please refer to the description of the steps of the method shown in fig. 2, which is not described herein again.
Based on the same inventive concept, a data integration system combining big data and edge computing is further provided, which comprises a cloud big data center and a plurality of edge computing node devices, wherein the cloud big data center communicates with the edge computing node devices;
any two target edge computing node devices of the plurality of edge computing node devices are respectively configured to: uploading a service processing result generated after processing the issued to-be-processed service data to the cloud big data center;
the cloud big data center is used for:
acquiring service processing results which are generated after processing the issued to-be-processed service data and are respectively uploaded by any two target edge computing node devices in the plurality of edge computing node devices;
respectively generating service transmission information corresponding to each service processing result and transmission path information among the service transmission information aiming at each service processing result by searching pre-established service data label distribution;
screening the transfer path information based on an equipment log text extracted from each target edge computing node equipment to determine target path parameters which do not change with the updating of each group of equipment log texts in the transfer path information;
integrating the service processing results according to the target path parameters to obtain comprehensive processing results;
and carrying out service correction on the comprehensive processing result according to the result separation identifier in the comprehensive processing result.
For the description of the data integration system combining big data and edge calculation, reference may be made to the description of the method shown in fig. 2, which is not described herein again.
On the basis of the above, please refer to fig. 4 in combination, a schematic diagram of a hardware structure of a cloud big data center 200 is provided, where the cloud big data center 200 includes a processor 210 and a memory 220 that are in communication with each other; wherein:
the memory 220 is used for storing computer programs;
the processor 210 is configured to read the computer program from the memory 220 and execute the computer program to implement the method shown in fig. 2.
On the basis of the above, a computer storage medium is provided, on which a computer program is stored; when run, the computer program implements the method shown in fig. 2.
To sum up, the data integration method and the storage medium combining big data and edge computing disclosed by the present disclosure first acquire the service processing results uploaded by the target edge computing node devices, then generate, for each service processing result, corresponding service delivery information and delivery path information between the pieces of service delivery information by searching a pre-established service data label distribution, then screen the delivery path information based on the device log text extracted from each target edge computing node device to determine target path parameters, and finally integrate each service processing result according to the target path parameters to obtain a comprehensive processing result. In this way, the service processing results can be integrated at the cloud end, realizing a clear division between service data processing and service result integration: the edge computing node devices can efficiently process the service data, and the cloud big data center can effectively integrate the service processing results reported by the edge computing node devices.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. A data integration method combining big data and edge calculation, applied to a cloud big data center, the method comprising:
acquiring service processing results which are generated after processing issued to-be-processed service data and uploaded by any two target edge computing node devices in a plurality of edge computing node devices respectively;
respectively generating service transmission information corresponding to each service processing result and transmission path information among the service transmission information aiming at each service processing result by searching pre-established service data label distribution; the service data label distribution is used for representing the incidence relation between service data to be processed issued by the cloud big data center, the service transmission information is used for representing the result guide description between service processing results, and the transmission path information is used for representing the cascade priority between the service transmission information;
screening the transfer path information based on an equipment log text extracted from each target edge computing node equipment to determine target path parameters which do not change with the updating of each group of equipment log texts in the transfer path information; the device log text records the whole process of the target edge computing node device when processing the to-be-processed service data, and the target path parameter is used for representing the path architecture of the path transmission information;
integrating the service processing results according to the target path parameters to obtain comprehensive processing results; the comprehensive processing result is used for representing a corresponding global processing result of the to-be-processed business data before being distributed by the cloud big data center.
2. The method of claim 1, wherein the method further comprises:
and carrying out service correction on the comprehensive processing result according to the result separation identifier in the comprehensive processing result.
3. The method of claim 2, wherein performing traffic correction on the integrated processing result according to the result separation flag in the integrated processing result comprises:
sorting the result separation identifiers according to the magnitude sequence of the time sequence priority and the magnitude sequence of the authority priority respectively to obtain a first sorting sequence corresponding to the time sequence priority and a second sorting sequence corresponding to the authority priority;
traversing and comparing the first sequencing sequence and the second sequencing sequence, namely comparing each result separation identifier in the first sequencing sequence with the result separation identifier at the same position in the second sequencing sequence to obtain a comparison result;
when the comparison result represents that the sequence correlation coefficient between the first sequencing sequence and the second sequencing sequence is greater than a target coefficient, performing service correction on the comprehensive processing result by taking at least two result separation identifications in the first sequencing sequence as references; when the comparison result represents that the sequence correlation coefficient between the first sorting sequence and the second sorting sequence is smaller than or equal to the target coefficient, performing service correction on the comprehensive processing result by taking at least two result separation marks in the second sorting sequence as references; and the service correction comprises the correction of the data credibility and the data valid time period of the comprehensive processing result.
4. The method of claim 1, wherein integrating the business process results to obtain a composite process result according to the target path parameters comprises:
listing the target path parameter relative to the service data logic information of each service processing result; the service data logic information comprises a logic relation of a corresponding service processing result on a control thread corresponding to the cloud big data center;
and performing iterative integration on each service processing result according to the logic priority of the service data logic information and the correlation coefficient between the service data logic information to obtain a comprehensive processing result.
5. The method of claim 4, wherein iteratively integrating the service processing results according to the logic priority of the service data logic information and the correlation coefficient between the service data logic information to obtain a comprehensive processing result comprises:
determining a target sorting sequence of the logic priority, wherein the service data logic information corresponding to the target sorting sequence is service data logic information of which the correlation coefficient is greater than a set coefficient value and the difference between the logic priority and the median of all the logic priorities is not less than a preset difference;
switching the coding script of the logic coding data of the business data logic information corresponding to the target sequencing sequence into a target coding script corresponding to the cloud big data center;
judging whether the business data logic information corresponding to the target sequencing sequence has an iteration weight or not according to the target coding script; if the business data logic information corresponding to the target sorting sequence has the iteration weight, calibrating the correlation coefficient among all the business processing results according to the magnitude sequence of the iteration weight to obtain a plurality of first business processing results corresponding to the calibrated correlation coefficient and a plurality of second business processing results which are not calibrated;
and iterating the first service processing result based on the iteration weight, adding the service processing result with the maximum service influence degree in the second service processing results to the iteration process in each iteration process, and adding the service processing result with the minimum service influence degree in the second service processing results to the next iteration process in the next iteration process until the cross iteration between the first service processing result and the second service processing result is completed to obtain the comprehensive processing result.
6. The method of claim 1, wherein screening the transmission path information based on the device log text extracted from each target edge computing node device, so as to determine target path parameters in the transmission path information that do not change as each group of device log text is updated, specifically comprises:
sending a text extraction request carrying a request field and a first check result to each target edge computing node device, the first check result being obtained by the cloud big data center performing a cyclic redundancy check calculation on the request field based on a pre-stored first dynamic random number and a first identity check code; and, upon receiving an authorization instruction fed back by the target edge computing node device based on the request field, accessing a set storage area corresponding to the target edge computing node device and acquiring the device log text of that device from the set storage area; wherein the target edge computing node device feeds back the authorization instruction by: determining a second dynamic random number corresponding to the first dynamic random number and a second identity check code corresponding to the first identity check code according to an authentication relationship established in advance with the cloud big data center, performing the cyclic redundancy check calculation on the request field with the second dynamic random number and the second identity check code to obtain a second check result, and feeding back the authorization instruction to the cloud big data center when the first check result is consistent with the second check result;
determining text update information corresponding to each group of device log text, constructing a text update list based on the determined text update information, and mapping the list structure information of the text update list into a preset coordinate plane in the form of graph data, so as to draw a graph data set of the text update list in the coordinate plane;
performing feature extraction on each graph data node in the graph data set to obtain a plane feature corresponding to each graph data node, adjusting the model parameters of a preset identification model according to the feature dimension of the plane features, and identifying the plane features with the adjusted identification model to obtain a plurality of feature clusters;
calculating the clustering index weight of each feature cluster, defining the feature cluster with the largest clustering index weight as a dynamic feature cluster, and determining, according to the dynamic feature cluster, the text regions corresponding to the updatable text in each group of device log text; and determining whether each group of path parameters in the transmission path information converges in each text region, the path parameters that converge in each text region being determined as the target path parameters.
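The mutual CRC handshake at the start of claim 6 can be made concrete. The sketch below is an assumption-laden illustration, not the patented implementation: it uses CRC-32 from Python's standard `zlib` module as the cyclic redundancy check, and the byte layout (nonce, then identity code, then request field) is hypothetical.

```python
import zlib

def crc_check(request_field: bytes, dynamic_nonce: bytes, identity_code: bytes) -> int:
    # Both sides mix the shared dynamic random number and identity check code
    # into the request field and take a CRC-32 over the whole message.
    return zlib.crc32(dynamic_nonce + identity_code + request_field)

def authorize(request_field: bytes, first_result: int,
              node_nonce: bytes, node_identity: bytes) -> bool:
    # The edge node recomputes the checksum with its own copies of the secrets
    # (from the pre-established authentication relationship) and feeds back the
    # authorization instruction only on an exact match.
    return crc_check(request_field, node_nonce, node_identity) == first_result
```

Because both ends derive the same nonce and identity code from the pre-established relationship, a mismatch in either secret changes the checksum and the log-text access is refused.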
7. The method of claim 1, wherein respectively generating, for each service processing result, the service transmission information corresponding to that service processing result and the transmission path information between the service transmission information by searching a pre-established service data label distribution specifically comprises:
determining processing evaluation labels corresponding to the service processing results from the service data label distribution;
determining whether each processing evaluation label matches the stability label of the service interaction stability of the corresponding service processing result, and if so, generating the service transmission information corresponding to the service processing result according to the mapping information of the processing evaluation label in the service processing result;
determining the information packets whose information capacity changes periodically among the service transmission information, extracting from those packets the information sets carrying at least two address identifiers, and integrating the information sets according to the time-sequence priority of the service transmission information to obtain the transmission path information between the service transmission information.
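The packet-integration step at the end of claim 7 reduces to a filter plus a priority sort. The Python fragment below is a minimal sketch under assumed structures: each packet is represented as a dict with hypothetical `addresses`, `priority`, and `payload` keys, and "integration" is simplified to ordering the payloads.

```python
def build_transmission_path(packets):
    # Keep only packets carrying at least two address identifiers, as the
    # claim requires, then order them by time-sequence priority (higher
    # priority first) to form the transmission path information.
    eligible = [p for p in packets if len(p["addresses"]) >= 2]
    return [p["payload"] for p in sorted(eligible, key=lambda p: -p["priority"])]
```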
8. The method of claim 1, wherein obtaining the service processing results uploaded by any two target edge computing node devices among the plurality of edge computing node devices, each generated after processing the issued to-be-processed service data, comprises:
determining whether the protocol text similarity between a first data transmission protocol of one target edge computing node device and a second data transmission protocol of the other target edge computing node device is greater than a set similarity;
when the protocol text similarity is greater than the set similarity, acquiring the service processing result of the one target edge computing node device at a first set transceiving frequency and acquiring the service processing result of the other target edge computing node device at a second set transceiving frequency, wherein the first set transceiving frequency and the second set transceiving frequency are complementary.
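Claim 8 does not define how protocol text similarity is measured or what "complementary" frequencies are. As a hedged illustration only, the sketch below substitutes Jaccard similarity over whitespace tokens for the similarity measure, and models complementary transceiving frequencies as two shares of a polling cycle summing to one; both choices, and all names, are assumptions.

```python
def protocol_similarity(text_a: str, text_b: str) -> float:
    # Jaccard similarity over token sets stands in for the claim's
    # unspecified protocol text similarity.
    a, b = set(text_a.split()), set(text_b.split())
    return len(a & b) / len(a | b) if (a | b) else 0.0

def assign_frequencies(sim: float, threshold: float = 0.6, first_share: float = 0.7):
    # Only when similarity clears the set threshold are the two nodes given
    # complementary shares of the polling cycle so their uploads interleave.
    if sim <= threshold:
        return None
    return first_share, 1.0 - first_share
```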
9. The method of claim 1, wherein the service data label distribution is established by:
acquiring the service behavior track information respectively recorded by the two target edge computing node devices, and the service interaction data of the corresponding target edge computing node device in each piece of service behavior track information;
detecting the behavior track pointing parameter of each piece of service behavior track information, and matching the behavior track pointing parameters by their time-sequence characteristics to obtain a queue label of the service behavior queue of each piece of service behavior track information;
for each piece of service behavior track information, determining, based on the queue label of its service behavior queue, the service type of the target edge computing node device that recorded it; and performing service mapping on each piece of service behavior track information according to the service type of each target edge computing node device, so as to generate in the cloud big data center a service data mapping path corresponding to that service behavior track information;
clustering all the service data mapping paths to obtain a path network distribution map corresponding to all the service data mapping paths in the cloud big data center;
obtaining a path label list in the path network distribution map according to the service interaction data of each target edge computing node device; and extracting service data labels from the path network distribution map based on the path label list to obtain the service data label distribution.
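At its core, building the service data label distribution of claim 9 groups labels by the service type that produced each mapping path. The toy sketch below makes that grouping step concrete under assumed inputs: each mapping path is reduced to a hypothetical `(service_type, label)` pair, which omits the clustering and trajectory-matching details of the claim.

```python
from collections import defaultdict

def build_label_distribution(mapping_paths):
    # mapping_paths: iterable of (service_type, label) pairs, one per
    # service data mapping path. Labels are bucketed by service type to
    # form a simple service data label distribution.
    dist = defaultdict(list)
    for service_type, label in mapping_paths:
        dist[service_type].append(label)
    return dict(dist)
```

Claim 7 can then look up the processing evaluation labels for a service processing result by the service type of the node that produced it.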
10. A computer storage medium having stored thereon a computer program which, when executed, implements the method of any one of claims 1-9.
CN202110396750.5A 2020-08-23 2020-08-23 Data integration method combining big data and edge calculation and storage medium Active CN113515368B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110396750.5A CN113515368B (en) 2020-08-23 2020-08-23 Data integration method combining big data and edge calculation and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110396750.5A CN113515368B (en) 2020-08-23 2020-08-23 Data integration method combining big data and edge calculation and storage medium
CN202010853210.0A CN111949410B (en) 2020-08-23 2020-08-23 Data integration method based on big data and edge calculation and cloud big data center

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010853210.0A Division CN111949410B (en) 2020-08-23 2020-08-23 Data integration method based on big data and edge calculation and cloud big data center

Publications (2)

Publication Number Publication Date
CN113515368A CN113515368A (en) 2021-10-19
CN113515368B true CN113515368B (en) 2022-09-09

Family

ID=73359140

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202110396749.2A Active CN113515367B (en) 2020-08-23 2020-08-23 Data integration method based on big data and edge calculation and storage medium
CN202010853210.0A Active CN111949410B (en) 2020-08-23 2020-08-23 Data integration method based on big data and edge calculation and cloud big data center
CN202110396750.5A Active CN113515368B (en) 2020-08-23 2020-08-23 Data integration method combining big data and edge calculation and storage medium

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN202110396749.2A Active CN113515367B (en) 2020-08-23 2020-08-23 Data integration method based on big data and edge calculation and storage medium
CN202010853210.0A Active CN111949410B (en) 2020-08-23 2020-08-23 Data integration method based on big data and edge calculation and cloud big data center

Country Status (1)

Country Link
CN (3) CN113515367B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111625A (en) * 2021-04-30 2021-07-13 善诊(上海)信息技术有限公司 Medical text label generation system and method and computer readable storage medium
CN113382073B (en) * 2021-06-08 2022-06-21 重庆邮电大学 Monitoring system and method for edge nodes in cloud edge-side industrial control system
CN113419856B (en) * 2021-06-23 2023-06-23 平安银行股份有限公司 Intelligent current limiting method, device, electronic equipment and storage medium
CN113873042B (en) * 2021-10-11 2022-06-07 北京国信未来城市数字科技研究院有限公司 Edge intelligent controller and data processing method
CN115118465B (en) * 2022-06-13 2023-11-28 北京寰宇天穹信息技术有限公司 Cloud edge end cooperative zero trust access control method and system based on trusted label
CN115859159B (en) * 2023-02-16 2023-05-05 北京爱企邦科技服务有限公司 Data evaluation processing method based on data integration
CN116774946B (en) * 2023-07-17 2024-01-05 广州华企联信息科技有限公司 Geometric data storage optimization method and system based on cloud edge fusion
CN116896483B (en) * 2023-09-08 2023-12-05 成都拓林思软件有限公司 Data protection system

Citations (3)

Publication number Priority date Publication date Assignee Title
CN110197128A (en) * 2019-05-08 2019-09-03 华南理工大学 The recognition of face architecture design method planned as a whole based on edge calculations and cloud
CN110209716A (en) * 2018-02-11 2019-09-06 北京华航能信科技有限公司 Intelligent internet of things water utilities big data processing method and system
CN111339183A (en) * 2020-02-11 2020-06-26 腾讯云计算(北京)有限责任公司 Data processing method, edge node, data center and storage medium

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
US9912638B2 (en) * 2012-04-30 2018-03-06 Zscaler, Inc. Systems and methods for integrating cloud services with information management systems
US10007513B2 (en) * 2015-08-27 2018-06-26 FogHorn Systems, Inc. Edge intelligence platform, and internet of things sensor streams system
CN105357041A (en) * 2015-10-30 2016-02-24 上海帝联信息科技股份有限公司 Edge node server, and log file uploading method and system
US20180113578A1 (en) * 2016-10-24 2018-04-26 Oracle International Corporation Systems and methods for identifying process flows from log files and visualizing the flow
US10574547B2 (en) * 2018-04-12 2020-02-25 Cisco Technology, Inc. Anomaly detection and correction in wireless networks
CN108737569B (en) * 2018-06-22 2020-04-28 浙江大学 Service selection method facing mobile edge computing environment
US11157478B2 (en) * 2018-12-28 2021-10-26 Oracle International Corporation Technique of comprehensively support autonomous JSON document object (AJD) cloud service
US11210126B2 (en) * 2019-02-15 2021-12-28 Cisco Technology, Inc. Virtual infrastructure manager enhancements for remote edge cloud deployments
CN111131379B (en) * 2019-11-08 2021-06-01 西安电子科技大学 Distributed flow acquisition system and edge calculation method
CN110968478B (en) * 2019-11-21 2023-04-25 掌阅科技股份有限公司 Log acquisition method, server and computer storage medium
CN111145843A (en) * 2019-11-27 2020-05-12 陕西医链区块链集团有限公司 Multi-center integration platform and method based on medical big data
CN111131421B (en) * 2019-12-13 2022-07-29 中国科学院计算机网络信息中心 Method for interconnection and intercommunication of industrial internet field big data and cloud information
CN112765217A (en) * 2020-07-14 2021-05-07 袁媛 Data processing method and system based on edge calculation and path analysis

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN110209716A (en) * 2018-02-11 2019-09-06 北京华航能信科技有限公司 Intelligent internet of things water utilities big data processing method and system
CN110197128A (en) * 2019-05-08 2019-09-03 华南理工大学 The recognition of face architecture design method planned as a whole based on edge calculations and cloud
CN111339183A (en) * 2020-02-11 2020-06-26 腾讯云计算(北京)有限责任公司 Data processing method, edge node, data center and storage medium

Non-Patent Citations (2)

Title
《Big Data Cleaning Based on Mobile Edge》; Tian Wang; 《IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS》; 20200229; full text *
《Design and Implementation of a Data Acquisition and Processing System Based on Edge Computing》; Liu Yang; 《China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology》; 20190515; full text *

Also Published As

Publication number Publication date
CN113515367A (en) 2021-10-19
CN113515368A (en) 2021-10-19
CN111949410B (en) 2021-05-07
CN111949410A (en) 2020-11-17
CN113515367B (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN113515368B (en) Data integration method combining big data and edge calculation and storage medium
CN111695697B (en) Multiparty joint decision tree construction method, equipment and readable storage medium
CN113298197B (en) Data clustering method, device, equipment and readable storage medium
CN111667015B (en) Method and device for detecting state of equipment of Internet of things and detection equipment
CN113422782A (en) Cloud service vulnerability analysis method and artificial intelligence analysis system based on big data
CN111797435B (en) Data analysis method based on Internet of things interaction and cloud computing communication and cloud server
CN111881164B (en) Data processing method based on edge computing and path analysis and big data cloud platform
CN114676444A (en) Block chain-based storage system
CN111949720B (en) Data analysis method based on big data and artificial intelligence and cloud data server
CN112766560B (en) Alliance blockchain network optimization method, device, system and electronic equipment
CN112069269B (en) Big data and multidimensional feature-based data tracing method and big data cloud server
CN103455491A (en) Method and device for classifying search terms
CN111274301B (en) Intelligent management method and system based on data assets
CN115205699B (en) Map image spot clustering fusion processing method based on CFSFDP improved algorithm
CN115455426A (en) Business error analysis method based on vulnerability analysis model development and cloud AI system
CN113239034A (en) Big data resource integration method and system based on artificial intelligence and cloud platform
CN112866374A (en) Communication data processing method combined with block chain payment network and big data server
CN112003733A (en) Comprehensive management method and management platform for smart park Internet of things
CN115168916B (en) Digital object credible evidence storing method and system for mobile terminal application
CN112883020B (en) Big data application-based analysis and management system
CN114676740A (en) User identification method, device, equipment and storage medium
CN117150356A (en) Service policy adjustment method and device, storage medium and electronic equipment
CN110856253A (en) Positioning method, positioning device, server and storage medium
CN116796265A (en) Object classification method, device, computer equipment and storage medium
CN116796264A (en) Object classification method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220512

Address after: 250100 room 1-204, building 25, Shanghai Garden, No. 100 Dongchen street, Licheng District, Jinan City, Shandong Province

Applicant after: Jinan Yize Information Technology Co.,Ltd.

Address before: 510700 1st floor, building F, Guangdong Software Park, Guangzhou hi tech Industrial Development Zone, Guangzhou City, Guangdong Province

Applicant before: Chen Shunfa

TA01 Transfer of patent application right

Effective date of registration: 20220627

Address after: 510700 1st floor, building F, Guangdong Software Park, Guangzhou hi tech Industrial Development Zone, Guangzhou City, Guangdong Province

Applicant after: Chen Shunfa

Address before: 250100 room 1-204, building 25, Shanghai Garden, No. 100 Dongchen street, Licheng District, Jinan City, Shandong Province

Applicant before: Jinan Yize Information Technology Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20220810

Address after: Room 701, No. 65, Chengyi North Street, Phase III, Software Park, Torch High-tech Zone, Xiamen, Fujian 361000

Applicant after: Xiamen jikuai Technology Co.,Ltd.

Address before: 510700 1st floor, building F, Guangdong Software Park, Guangzhou hi tech Industrial Development Zone, Guangzhou City, Guangdong Province

Applicant before: Chen Shunfa

GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Data integration method and storage medium combining big data and edge computing

Granted publication date: 20220909

Pledgee: Agricultural Bank of China Limited Xiamen Pilot Free Trade Zone Branch

Pledgor: Xiamen jikuai Technology Co.,Ltd.

Registration number: Y2024980005198