CN110781378B - Data graphical processing method and device, computer equipment and storage medium - Google Patents

Data graphical processing method and device, computer equipment and storage medium

Info

Publication number
CN110781378B
Authority
CN
China
Prior art keywords
data
target
graphic
graph
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910841759.5A
Other languages
Chinese (zh)
Other versions
CN110781378A (en)
Inventor
章育涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN201910841759.5A
Publication of CN110781378A
Application granted
Publication of CN110781378B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/953: Querying, e.g. by the use of web search engines
    • G06F 16/9535: Search customisation based on user profiles and personalisation
    • G06F 16/9538: Presentation of query results
    • G06F 16/957: Browsing optimisation, e.g. caching or content distillation

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a data graphical processing method and device, computer equipment, and a storage medium. The method comprises the following steps: receiving a graphic browsing request sent by a client, and acquiring the data screening conditions and graphic keywords contained in the request; acquiring behavior data meeting the data screening conditions from a preset database; determining the graphic features of a target graphic according to the graphic keywords; generating the target graphic using the graphic features and the behavior data; generating a target user portrait corresponding to the target graphic according to the data screening conditions and the behavior data; and sending the target graphic and the target user portrait to the client. The technical scheme of the invention customizes the target graphic, accurately matches the requirements of the client user, generates a target user portrait that reflects the corresponding user characteristics, and improves the flexibility of graphical data presentation.

Description

Data graphical processing method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of data processing, and in particular to a data graphical processing method and apparatus, computer device, and storage medium.
Background
When analyzing large amounts of data, it is often necessary to present the data graphically.
Conventionally, data are displayed graphically using a preset, fixed graphic structure. However, different data-browsing users have different browsing habits, and even the same user may require different graphic structures for different data types. A fixed graphic structure cannot accurately match users' browsing requirements, so the flexibility of graphical data presentation is low.
Disclosure of Invention
The embodiments of the invention provide a data graphical processing method and device, computer equipment, and a storage medium, to solve the problem that prior-art graphical data presentation cannot accurately match user requirements, resulting in low flexibility of graphical data presentation.
A data graphical processing method, comprising:
receiving a graphic browsing request sent by a client, and acquiring data screening conditions and graphic keywords contained in the graphic browsing request;
acquiring behavior data meeting the data screening conditions from a preset database;
determining the graphic features of the target graphic according to the graphic keywords;
generating the target graph by using the graph characteristics and the behavior data;
generating a target user portrait corresponding to the target graph according to the data screening conditions and the behavior data;
and sending the target graph and the target user portrait to the client.
A data graphical processing device, comprising:
the request receiving module is used for receiving a graphic browsing request sent by a client and acquiring data screening conditions and graphic keywords contained in the graphic browsing request;
the data screening module is used for acquiring behavior data meeting the data screening conditions from a preset database;
the feature determining module is used for determining the graphic features of the target graphics according to the graphic keywords;
the graph generating module is used for generating the target graph by using the graphic features and the behavior data;
the portrait generation module is used for generating a target user portrait corresponding to the target graph according to the data screening conditions and the behavior data;
and the data transmission module is used for transmitting the target graph and the target user portrait to the client.
A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the data graphical processing method described above when the computer program is executed.
A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the data graphical processing method described above.
According to the data graphical processing method and device, computer equipment, and storage medium above, behavior data meeting the data screening conditions are obtained from a preset database according to the data screening conditions and graphic keywords contained in the graphic browsing request sent by the client; the graphic features of the target graphic are determined according to the graphic keywords; the target graphic is then generated using the graphic features and the behavior data; a target user portrait corresponding to the target graphic is generated according to the data screening conditions and the behavior data; and the target graphic and the target user portrait are sent to the client for presentation. The target graphic is thus customized according to the data screening conditions and graphic keywords, accurately matching the requirements of the client user and improving the flexibility of graphical data presentation. Meanwhile, a target user portrait that reflects the corresponding user characteristics is generated from the data screening conditions and the behavior data and provided to the client for visual presentation, further improving that flexibility.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic view of an application environment of a data graphical processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a data graphical processing method according to an embodiment of the present invention;
FIG. 3 is a flowchart of step S4 of the data graphical processing method according to an embodiment of the present invention;
FIG. 4 is a flowchart of step S42 of the data graphical processing method according to an embodiment of the present invention;
FIG. 5 is a flowchart of step S5 of the data graphical processing method according to an embodiment of the present invention;
FIG. 6 is a flowchart of step S6 of the data graphical processing method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a data patterning device according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a computer device in accordance with an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The data graphical processing method provided by the application can be applied to the application environment shown in FIG. 1, which comprises a server and a client connected through a network. The network can be wired or wireless. The client includes, but is not limited to, personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices; the server can be implemented as an independent server or as a server cluster composed of multiple servers. The client sends a graphic browsing request to the server; the server analyzes the request and returns the target graphic and the target user portrait to the client.
In an embodiment, as shown in FIG. 2, a data graphical processing method is provided. The method is described as applied to the server in FIG. 1 and specifically includes steps S1 to S6, detailed as follows:
s1: and receiving a graphic browsing request sent by the client, and acquiring data screening conditions and graphic keywords contained in the graphic browsing request.
Specifically, the user may initiate a graphical browsing request for the target data at the client, where the graphical browsing request includes the data filtering condition and the graphical keyword.
The data screening conditions include, but are not limited to, conditions on user category, user attributes, and behavior data type. The user category identifies the user's post information, the user attributes identify the user's personal information, and the behavior data type identifies the data type of the target data to be displayed graphically. For example, the user category may be "sales person" or "technical developer"; the user attributes may include sub-attributes such as "gender", "age", "region", and "position"; and the behavior data type may be "sales performance", "yield", etc.
In a specific embodiment, the data screening conditions may be: user category "sales person"; user attributes "male, older than 30 years, region Shenzhen, position manager"; and behavior data type "sales performance".
The graphic keywords identify graphic features such as the color, shape, and size of the graphic, as well as its data display dimension. For example, the graphic keywords may include feature keywords such as "color: blue" or "shape: pie chart", and data-display-dimension keywords such as "age" or "region".
It should be noted that the user may select conditions and keywords through the selectable condition attributes and keywords displayed on the human-machine interaction interface provided by the client. For example, the user may select the desired user category from several preset options provided by the client, and likewise select the desired color from several preset colors.
S2: and acquiring behavior data meeting the data screening conditions from a preset database.
Specifically, according to the received data screening conditions, the server screens out the target data meeting those conditions from the data to be screened in the preset database; this target data is the behavior data.
In a specific embodiment, the data screening conditions include conditions on user category, user attributes, and behavior data type, and each data record in the preset database includes fields such as user category, user attributes, behavior data type, and behavior data. The server screens out the data records matching the user category, user attribute, and behavior data type conditions, reads the value of the behavior data field from each screened record, and accumulates these values to obtain the behavior data.
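As a rough illustration of this screening-and-accumulation step, the sketch below filters a list of records by equality conditions and sums the behavior-data field. The record layout, field names, and helper function are hypothetical, not taken from the patent's actual database schema.

```python
# Hypothetical sketch of step S2: filter records on every screening
# condition, then accumulate the behavior-data field of the matches.
# Field names and values are illustrative only.
records = [
    {"category": "sales", "region": "Shenzhen", "type": "performance", "value": 5_000_000},
    {"category": "sales", "region": "Beijing",  "type": "performance", "value": 6_000_000},
    {"category": "tech",  "region": "Shenzhen", "type": "performance", "value": 1_000_000},
]

def filter_and_accumulate(records, conditions):
    """Keep records matching every condition, then sum their behavior values."""
    matched = [r for r in records
               if all(r.get(k) == v for k, v in conditions.items())]
    return sum(r["value"] for r in matched)

total = filter_and_accumulate(records, {"category": "sales", "type": "performance"})
print(total)  # 11000000
```

In a real deployment this filtering would likely run as a database query rather than in memory; the sketch only shows the logical operation.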
S3: and determining the graphic characteristics of the target graphic according to the graphic keywords.
Specifically, the server extracts the graphic features of the target graphic from the received graphic keywords.
The target graphic is a graphic showing the data screening conditions and their behavior data. The graphic features comprise the attribute features of the target graphic and its data display dimension. The attribute features include, but are not limited to, color, size, and shape; the data display dimension identifies the type of data shown by the target graphic and may be one of the selectable condition attributes in the data screening conditions.
For example, if the data screening conditions are: user category "sales person"; user attributes "male, older than 30 years, region Shenzhen, Beijing, or Shanghai, position manager"; and behavior data type "sales performance", then the data display dimension may be "region".
It should be noted that there is no necessary execution order between steps S2 and S3; they may be executed in parallel, which is not limited herein.
S4: using the graphical features and behavior data, a target graphic is generated.
Specifically, the server graphically processes the behavior data obtained in step S2 according to the graphic features obtained in step S3 to obtain the target graphic.
If there are multiple pieces of behavior data, the server generates a corresponding target subgraph for each piece according to the graphic features, and combines the target subgraphs to obtain the target graphic.
S5: and generating a target user portrait corresponding to the target graph according to the data screening conditions and the behavior data.
Specifically, the server extracts user characteristics, as user tags, from the data records in the preset database that match the data screening conditions and behavior data, and generates the target user portrait from the extracted tags.
The target user portrait reflects the common characteristics of the users matched by the data screening conditions and the behavior data.
S6: the target graphic and the target user portrait are sent to the client.
Specifically, the server sends the target graphic obtained in step S4 and the target user portrait obtained in step S5 to the client, which displays them in a preset display area of the display terminal for the user to view.
In this method, behavior data meeting the data screening conditions are obtained from a preset database according to the data screening conditions and graphic keywords contained in the graphic browsing request sent by the client; the graphic features of the target graphic are determined according to the graphic keywords; the target graphic is generated using the graphic features and the behavior data; a target user portrait corresponding to the target graphic is generated according to the data screening conditions and the behavior data; and the target graphic and the target user portrait are sent to the client for presentation.
In one embodiment, the data filtering condition includes N specific condition sub-items, where N is a positive integer.
Specifically, the data screening condition can be split into several specific condition sub-items, each obtained by combining selected values of the client's selectable condition attributes.
Further, the specific condition sub-items can be obtained by splitting the data screening condition along the data display dimension, with each sub-item corresponding to one selectable value of that dimension.
For example, suppose the data screening condition X is: user category "sales person"; user attributes "male, older than 30 years, region Shenzhen, Beijing, or Shanghai, position manager"; behavior data type "sales performance"; and the data display dimension is "region". There are then three specific condition sub-items:
Specific condition sub-item A: user category "sales person"; user attributes "male, older than 30 years, region Shenzhen, position manager"; behavior data type "sales performance";
Specific condition sub-item B: user category "sales person"; user attributes "male, older than 30 years, region Beijing, position manager"; behavior data type "sales performance";
Specific condition sub-item C: user category "sales person"; user attributes "male, older than 30 years, region Shanghai, position manager"; behavior data type "sales performance".
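The split along the display dimension can be sketched as follows. The dictionary keys are hypothetical stand-ins for the patent's condition attributes, not its actual data model.

```python
def split_by_dimension(condition, dimension):
    """Produce one specific condition sub-item per selectable value
    of the data display dimension, as described above."""
    return [{**condition, dimension: value} for value in condition[dimension]]

# Illustrative encoding of data screening condition X.
condition_x = {
    "user_category": "sales person",
    "gender": "male",
    "min_age": 30,
    "region": ["Shenzhen", "Beijing", "Shanghai"],
    "position": "manager",
    "data_type": "sales performance",
}
sub_items = split_by_dimension(condition_x, "region")
print([s["region"] for s in sub_items])  # ['Shenzhen', 'Beijing', 'Shanghai']
```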
Further, as shown in FIG. 3, step S4 of generating the target graphic using the graphic features and the behavior data specifically includes steps S41 to S43, detailed as follows:
s41: and screening out the specific behavior data corresponding to each specific condition sub-item from the behavior data.
Specifically, the behavior data are decomposed according to each specific condition sub-item, obtaining the specific behavior data meeting each sub-item.
Continuing with data screening condition X above: if the behavior data corresponding to X is "15 million", then after decomposing it according to specific condition sub-items A, B, and C, the specific behavior data a satisfying sub-item A may be "5 million", the specific behavior data b satisfying sub-item B "6 million", and the specific behavior data c satisfying sub-item C "4 million".
S42: and generating a target subgraph corresponding to each specific behavior data by using the graphic features and each specific behavior data to obtain N target subgraphs.
Specifically, using the attribute features of the target graphic contained in the graphic features, each piece of specific behavior data is converted into a corresponding target subgraph as required by those attribute features, yielding N target subgraphs.
S43: and combining the N target subgraphs according to a preset combination mode to obtain a target graph.
Specifically, the preset combination mode may sort the subgraphs by specific behavior data from largest to smallest. The server sorts the specific behavior data of the sub-items in descending order, determines the position order of the target subgraphs accordingly, and then combines the N target subgraphs into the target graphic in that order.
Taking data screening condition X above as an example: specific behavior data a corresponds to target subgraph 1, b to target subgraph 2, and c to target subgraph 3. Sorting from largest to smallest gives b > a > c, so the position order of the subgraphs is: target subgraph 2, target subgraph 1, target subgraph 3, and the server places each target subgraph in the target graphic according to this order.
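The descending sort in the example above can be sketched in a few lines; the subgraph names and values are the illustrative ones from the text.

```python
# Hypothetical sketch of step S43's ordering: pair each target subgraph
# with its specific behavior data and sort by value, largest first.
subgraphs = [("subgraph_1", 5_000_000),   # specific behavior data a
             ("subgraph_2", 6_000_000),   # specific behavior data b
             ("subgraph_3", 4_000_000)]   # specific behavior data c

position_order = [name for name, value in
                  sorted(subgraphs, key=lambda pair: pair[1], reverse=True)]
print(position_order)  # ['subgraph_2', 'subgraph_1', 'subgraph_3']
```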
It should be noted that the preset combination mode may also combine subgraphs according to the correlation between specific condition sub-items, for example placing subgraphs of highly correlated sub-items adjacently and separating subgraphs of weakly correlated sub-items by a larger distance. This can be set according to practical needs and is not limited herein.
In this embodiment, the data screening condition is split into N specific condition sub-items, the specific behavior data corresponding to each sub-item is determined, a target subgraph is generated for each piece of specific behavior data using the graphic features, and the N target subgraphs are combined according to a preset combination mode to obtain the target graphic. Generating the subgraphs in parallel and combining them in a preset mode improves the generation efficiency of the target graphic. The resulting graphic represents each piece of specific behavior data clearly and intuitively, and also clearly shows the differences between them, improving the expressiveness of the data.
In one embodiment, the graphical features include a graphical shape, a graphical color, and a reference size.
The graphic shape may be a circle, a column, a broken line, etc.; the graphic color is the display color of the target graphic, such as red or blue; and the reference size is the unit size of the graphic shape, for example 5 mm by 5 mm for a column.
Further, as shown in FIG. 4, step S42 of generating a target subgraph for each piece of specific behavior data using the graphic features specifically includes steps S421 to S424, detailed as follows:
s421: and acquiring the specific behavior data with the minimum value from the N pieces of specific behavior data as reference data, wherein other N-1 pieces of specific behavior data are all used as other data.
Specifically, among the N pieces of specific behavior data obtained in step S41, the piece with the smallest value is used as the reference data, and the remaining pieces as the other data.
S422: and calculating a reference ratio between the reference data and the preset value unit according to the preset value unit, and scaling the reference size according to the reference ratio to obtain the graph size corresponding to the reference data.
In this embodiment, the preset value unit is a preset minimum data unit. Different behavior data types may correspond to different preset value units, and the same type may use different units depending on the application. For example, for the behavior data type "sales performance", if the data are on the order of tens of thousands of yuan, the preset value unit may be "1 thousand"; if on the order of millions of yuan, it may be "10 thousand".
Specifically, the ratio between the reference data and the preset value unit is the reference ratio. For example, if the reference data is "100 thousand" and the preset value unit is "1 thousand", the calculated reference ratio is 100.
The server then enlarges or reduces the reference size proportionally by the reference ratio to obtain the graphic size corresponding to the reference data. For example, if the graphic shape is a column and the reference size is 2 mm by 2 mm, the graphic size corresponding to the reference data is 200 mm by 200 mm.
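The scaling just described reduces to a single multiplication per dimension. The helper below is an illustrative assumption, not the patent's implementation; the numbers follow the example in the text.

```python
def reference_graphic_size(reference_data, value_unit, base_size):
    """Scale the reference size (width, height) by the reference ratio
    between the reference data and the preset value unit."""
    ratio = reference_data / value_unit
    return tuple(dim * ratio for dim in base_size)

# Reference data "100 thousand", unit "1 thousand", 2 mm x 2 mm column:
size = reference_graphic_size(100_000, 1_000, (2, 2))
print(size)  # (200.0, 200.0), i.e. a 200 mm x 200 mm graphic
```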
S433: and generating a target subgraph corresponding to the reference data by using the graph shape, the graph color and the graph size.
Specifically, the server generates the target subgraph corresponding to the reference data according to the graph shape and the graph color contained in the graph feature and the graph size obtained in step S432.
For example, if the pattern shape is a cylinder, the pattern color is blue, and the pattern size is 200mm by 200mm, then the generated target subgraph is a blue cylinder pattern of 200mm by 200 mm.
S434: and calculating the data ratio between each other data and the reference data, and generating a target subgraph corresponding to each other data according to the data ratio and the target subgraph corresponding to the reference data.
Specifically, for the N-1 pieces of other data, the ratio between each piece and the reference data is calculated, giving N-1 data ratios; the target subgraph corresponding to the reference data is then enlarged or reduced proportionally by each ratio to obtain the target subgraph for each piece of other data.
For example, if the reference data is "100 thousand" and its target subgraph is a blue 200 mm by 200 mm column, then for other data of "200 thousand" the data ratio is 2, and the server enlarges the blue column twofold to obtain a blue 400 mm by 400 mm column as the corresponding target subgraph.
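Deriving the other subgraphs is the same proportional scaling applied to the reference subgraph's dimensions. A minimal sketch, with a hypothetical helper name:

```python
def scale_reference_subgraph(reference_size, other_value, reference_value):
    """Enlarge or shrink the reference subgraph's size by the data ratio
    between a piece of other data and the reference data."""
    ratio = other_value / reference_value
    return tuple(dim * ratio for dim in reference_size)

# Reference data 100 thousand -> 200 mm x 200 mm; other data 200 thousand:
print(scale_reference_subgraph((200, 200), 200_000, 100_000))  # (400.0, 400.0)
```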
Further, in other embodiments, the graphic area proportion of each target subgraph may be determined from the proportion of its specific behavior data within the total behavior data; the graphic area of each subgraph is then obtained from the reference size, and each subgraph is generated by combining the graphic shape and color.
In this embodiment, the smallest of the N pieces of specific behavior data is used as the reference data and the remaining N-1 pieces as the other data. The reference size is scaled by the reference ratio between the reference data and the preset value unit to obtain the graphic size of the reference data, which is combined with the graphic shape and color to generate the reference data's target subgraph. The data ratio between each piece of other data and the reference data is then calculated, and each remaining target subgraph is generated by scaling the reference subgraph by its ratio. Generating the subgraph of the smallest specific behavior data first, and scaling it for the others, allows the remaining subgraphs to be generated quickly, improving the generation efficiency of the target subgraphs.
In one embodiment, as shown in FIG. 5, step S5 of generating a target user portrait corresponding to the target graphic according to the data screening conditions and behavior data specifically includes steps S51 to S53, detailed as follows:
s51: and extracting information keywords from the user information corresponding to each specific condition sub-item by adopting a preset keyword extraction mode to obtain the user tag corresponding to each specific condition sub-item.
Specifically, the server looks up, in the preset database, the user information contained in the data records meeting each specific condition sub-item. The user information may comprise user attribute information and user behavior information: the former includes information identifying user attributes such as name, gender, age, occupation, hobbies, education, and working years; the latter includes data related to the behavior data.
The preset keyword extraction method may be the TextRank keyword extraction algorithm, which matches the user information of each specific condition sub-item against preset keywords, extracts the matching information keywords, and uses the extracted keywords as user tags.
Each specific condition sub-item may correspond to one or more user tags, and the tags of different sub-items may be the same or different.
S52: and determining the tag weight of each user tag according to the specific behavior data corresponding to each specific condition sub-item.
Specifically, a preset proportional correspondence between the value of the specific behavior data and the tag weight is used: the larger the value, the larger the weight, and vice versa. For each specific condition sub-item, the tag weight corresponding to its specific behavior data is obtained and assigned to every user tag of that sub-item.
It should be noted that, if different specific condition sub-items correspond to the same user tag, the tag weight of the same user tag in each specific condition sub-item may be calculated according to a preset calculation mode, so as to obtain the tag weight of the user tag.
The preset calculation mode may be an arithmetic average of the tag weights, a weighted average of the tag weights, a maximum of the tag weights, or the like, and may be set according to the needs of the practical application, which is not limited herein.
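Steps S52 and the merging rule above can be sketched as follows. This is a hypothetical sketch under the assumption of a simple linear proportional correspondence; the function names and the normalization by the maximum value are illustrative, not specified by the patent.

```python
# Hypothetical sketch of step S52: the tag weight grows proportionally with
# the value of the specific behavior data; a tag shared by several condition
# sub-items is merged by a preset calculation mode (here: mean or max).
def tag_weight(value, max_value):
    """Proportional mapping: a larger behavior-data value yields a larger weight."""
    return value / max_value if max_value else 0.0

def merge_weights(weights, mode="mean"):
    """Combine the weights of one user tag appearing under several sub-items."""
    if mode == "mean":
        return sum(weights) / len(weights)
    if mode == "max":
        return max(weights)
    raise ValueError(f"unsupported mode: {mode}")
```

A weighted average would work the same way, with per-sub-item coefficients supplied alongside the weights.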
S53: inputting the user labels and the label weight of each user label into a preset word cloud image generating tool to generate a word cloud image, and taking the word cloud image as a target user portrait.
Specifically, the user tags and their tag weights are input into a preset word cloud image generating tool to generate a word cloud image, and the generated word cloud image is used as the target user portrait.
A word cloud image, also called a word cloud, is a display mode for visualizing user tags: the user tags are displayed in different colors, shapes and sizes, so that the gist of a text can be intuitively conveyed.
The preset word cloud image generating tool may be existing word cloud software, such as Wordle or WordItOut, selected according to the needs of the practical application.
A user tag with a larger tag weight is presented more prominently in the word cloud. The prominence of a user tag in the word cloud can be embodied by one or a combination of characteristics such as the text size, text color and stroke thickness of the user tag. For example, the larger the tag weight, the larger the text in the word cloud, the more vivid its color, or the thicker its font strokes.
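The weight-to-prominence mapping described above can be sketched as follows. This is a hypothetical sketch: the linear mapping and the pixel range are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of the weight-to-prominence mapping: a tag weight in
# [0, 1] becomes a font size, so heavier tags render larger in the word cloud.
def font_size(weight, min_px=12, max_px=48):
    """Linearly map a tag weight to a text size in pixels."""
    return round(min_px + weight * (max_px - min_px))

# e.g. a full-weight tag renders at 48 px, a quarter-weight tag at 21 px
sizes = {tag: font_size(w) for tag, w in [("reading", 1.0), ("travel", 0.25)]}
```

Real word cloud tools accept such a tag-to-weight mapping directly; for instance, the third-party Python `wordcloud` package exposes `WordCloud.generate_from_frequencies`, which performs a comparable mapping internally.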
In this embodiment, a preset keyword extraction manner is adopted to extract user tags from the user information corresponding to each specific condition sub-item, and the tag weight of each user tag is determined according to the specific behavior data; a preset word cloud image generating tool then generates a word cloud image from the user tags and their tag weights, and the word cloud image is used as the target user portrait. As a result, user tags with larger tag weights are more prominent in the target user portrait, each user tag can be accurately and fully reflected, the common characteristics of the users matching the data screening conditions and the behavior data are displayed visually, and the flexibility of graphical data presentation is effectively improved.
In one embodiment, as shown in fig. 6, in step S6, the target graphic and the target user portrait are sent to the client, specifically including steps S61 to S64, which are described in detail below:
S61: splitting the target graphic into a preset number of pieces of sub-graphic data.
Specifically, the server splits the target graphic according to a preset splitting mode to obtain the preset number of pieces of sub-graphic data.
The preset splitting mode may be splitting into pieces of equal data size or splitting according to the degree of data association, and may be set according to the needs of the practical application, which is not limited herein.
S62: and compressing each piece of sub-graph data according to a preset compression mode to obtain a preset number of compressed graph data.
Specifically, the server performs compression processing on each piece of sub-graphics data obtained in step S61 according to a preset compression manner, so as to obtain compressed graphics data corresponding to each piece of sub-graphics data.
S63: and sending the preset number of compressed graphic data to the client so that the client decompresses the compressed graphic data and combines the compressed graphic data to obtain a target graphic, and displaying the target graphic in a preset display area.
Specifically, the server may send the pieces of compressed graphic data to the client in parallel. After receiving them, the client decompresses each piece; once all of the preset number of pieces have been decompressed, the client combines the decompressed data to restore the target graphic, and displays the target graphic in the preset display area of the display interface.
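The split-compress-restore round trip of steps S61 to S63 can be sketched end to end as follows. This is a hypothetical sketch: equal-size splitting and `zlib` compression are illustrative choices, since the patent leaves both the splitting mode and the compression mode open.

```python
import zlib

# Hypothetical sketch of S61-S63: split serialized graphic data, compress
# each piece for transmission, then decompress and recombine on the client.
def split_bytes(data: bytes, n: int):
    """S61: split the data into at most n roughly equal-sized chunks."""
    size = -(-len(data) // n)  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def server_send(graphic: bytes, n: int):
    """S62: compress every piece of sub-graphic data before transmission."""
    return [zlib.compress(chunk) for chunk in split_bytes(graphic, n)]

def client_receive(packets):
    """S63: decompress each packet and combine to restore the target graphic."""
    return b"".join(zlib.decompress(p) for p in packets)

payload = b"<svg>...</svg>" * 100
assert client_receive(server_send(payload, 4)) == payload
```

Because the packets may arrive in parallel, a real client would tag each packet with its sequence index and reassemble in order before display.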
S64: and if the user portrait viewing request sent by the client is received, the target user portrait is sent to the client.
Specifically, when viewing the target graphic at the client, the user can initiate a viewing request for the user portrait corresponding to the target graphic to the server. On receiving the user portrait viewing request sent by the client, the server sends the target user portrait generated in step S5 to the client for the user to view.
In this embodiment, the target graphic is split into a preset number of pieces of sub-graphic data, and each piece is compressed before being sent to the client, which reduces the volume of data transmitted over the network between the server and the client and improves data transmission efficiency. Meanwhile, the target user portrait is sent to the client only when the user requests to view it, which reduces the amount of data transmitted at one time and makes the graphic display more flexible.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In an embodiment, a data patterning device is provided, and the data patterning device corresponds one-to-one to the data patterning method in the foregoing embodiments. As shown in fig. 7, the data graphic processing apparatus includes: a request receiving module 10, a data screening module 20, a feature determining module 30, a graph generating module 40, a portrait generating module 50 and a data transmitting module 60. The functional modules are described in detail as follows:
the request receiving module 10 is configured to receive a graphic browsing request sent by a client, and obtain a data filtering condition and a graphic keyword included in the graphic browsing request;
The data screening module 20 is configured to obtain, from a preset database, behavior data that satisfies a data screening condition;
a feature determining module 30, configured to determine a graphic feature of the target graphic according to the graphic keyword;
a graphic generation module 40 for generating a target graphic using the graphic features and the behavior data;
a portrait generation module 50, configured to generate a target user portrait corresponding to the target graphic according to the data screening conditions and the behavior data;
a data transmission module 60 for transmitting the target graphic and the target user portrait to the client.
Further, the data filtering condition includes N specific condition sub-items, where N is a positive integer, and the graphics generating module 40 includes:
a specific data screening sub-module 401, configured to screen behavior data corresponding to each specific condition sub-item from the behavior data;
a target sub-graph generating sub-module 402, configured to generate a target sub-graph corresponding to each specific behavior data by using the graphic feature and each specific behavior data, to obtain N target sub-graphs;
and the graph combining sub-module 403 is configured to combine the N target subgraphs according to a preset combination manner to obtain a target graph.
Further, the graphical features include a graphical shape, a graphical color, and a reference size, and the target subgraph generation submodule 402 includes:
a behavior data screening unit 4021, configured to obtain, from the N pieces of specific behavior data, the specific behavior data with the smallest value as reference data, with the other N-1 pieces of specific behavior data all used as other data;
the graphic size determining unit 4022 is configured to calculate a reference ratio between the reference data and the preset value unit according to the preset value unit, and scale the reference size according to the reference ratio to obtain a graphic size corresponding to the reference data;
a first sub-graph generating unit 4023 for generating a target sub-graph corresponding to the reference data using the graphic shape, the graphic color, and the graphic size;
the second sub-graph generating unit 4024 is configured to calculate a data ratio between each other data and the reference data, and generate a target sub-graph corresponding to each other data according to the data ratio and the target sub-graph corresponding to the reference data.
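The sizing rule implemented by units 4021, 4022 and 4024 can be sketched as follows. This is a hypothetical sketch: the function name and the linear scaling are illustrative, and only the reference-ratio logic comes from the source.

```python
# Hypothetical sketch of the subgraph sizing rule: the smallest datum becomes
# the reference; its graphic size is the reference size scaled by
# (reference value / value unit), and every other subgraph scales by its
# datum's ratio to the reference datum.
def subgraph_sizes(values, reference_size, value_unit):
    ref = min(values)                               # reference data (unit 4021)
    ref_size = reference_size * (ref / value_unit)  # reference ratio (unit 4022)
    return [ref_size * (v / ref) for v in values]   # data ratios (unit 4024)
```

Note that the two ratios compose, so each subgraph's size ends up proportional to its own datum: `reference_size * (v / value_unit)`.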
Further, the portrait generation module 50 includes:
the tag extraction sub-module 501 is configured to extract information keywords from user information corresponding to each specific condition sub-item by adopting a preset keyword extraction manner, so as to obtain a user tag corresponding to each specific condition sub-item;
the weight determining sub-module 502 is configured to determine a tag weight of each user tag according to the specific behavior data corresponding to each specific condition sub-item;
The word cloud image generating sub-module 503 is configured to input the user tags and the tag weight of each user tag into a preset word cloud image generating tool, generate a word cloud image, and take the word cloud image as a target user portrait.
Further, the data transmission module 60 includes:
the splitting module 601 is configured to split the target graphic into a preset number of sub-graphic data;
the compression sub-module 602 is configured to compress each sub-graphics data according to a preset compression manner to obtain a preset number of compressed graphics data;
a first sending sub-module 603, configured to send a preset number of compressed graphics data to the client, so that the client decompresses the compressed graphics data and combines the compressed graphics data to obtain a target graphic, and display the target graphic in a preset display area;
a second transmitting sub-module 604, configured to transmit the target user portrait to the client when receiving the user portrait viewing request transmitted by the client.
For the specific limitation of the data patterning device, reference may be made to the limitation of the data patterning method hereinabove, and details are not repeated here. The modules in the above data patterning device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory in the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 8. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a data patterning method.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the steps of the data patterning method in the above embodiment, such as steps S1 to S6 shown in fig. 2. Alternatively, the processor may implement the functions of the modules/units of the data-patterning device in the above-described embodiment, such as the functions of the modules 10 to 60 shown in fig. 7, when executing the computer program. To avoid repetition, no further description is provided here.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored, where the computer program when executed by a processor implements the method for data graphics processing in the method embodiment described above, or where the computer program when executed by a processor implements the functions of the modules/units in the data graphics processing apparatus in the apparatus embodiment described above. To avoid repetition, no further description is provided here.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-volatile computer readable storage medium, which, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of the functional units and modules is illustrated; in practical application, the above functions may be allocated to different functional units and modules as required, i.e. the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (5)

1. A data patterning method, the data patterning method comprising:
receiving a graphic browsing request sent by a client, and acquiring data screening conditions and graphic keywords contained in the graphic browsing request;
Acquiring behavior data meeting the data screening conditions from a preset database;
determining the graphic characteristics of the target graphic according to the graphic keywords;
generating the target graph by using the graph characteristics and the behavior data;
generating a target user portrait corresponding to the target graph according to the data screening conditions and the behavior data;
transmitting the target graphic and the target user representation to the client;
the data filtering condition comprises N specific condition sub-items, wherein N is a positive integer, and the generating the target graph by using the graph characteristics and the behavior data comprises:
screening out specific behavior data corresponding to each specific condition sub-item from the behavior data;
generating a target subgraph corresponding to each specific behavior data by using the graphic features and each specific behavior data to obtain N target subgraphs;
according to a preset combination mode, carrying out combination processing on the N target subgraphs to obtain the target graph;
the graphic feature comprises a graphic shape, a graphic color and a reference size, and the generating the target subgraph corresponding to each specific behavior data by using the graphic feature and each specific behavior data comprises the following steps:
Acquiring the specific behavior data with the minimum value from N pieces of specific behavior data as reference data, wherein other N-1 pieces of specific behavior data are all used as other data;
calculating a reference ratio between the reference data and the preset value unit according to the preset value unit, and performing scaling processing on the reference size according to the reference ratio to obtain a graph size corresponding to the reference data;
generating the target subgraph corresponding to the reference data by using the graph shape, the graph color and the graph size;
calculating a data ratio between each other data and the reference data, and generating the target subgraph corresponding to each other data according to the data ratio and the target subgraph corresponding to the reference data;
the generating the target user portrait corresponding to the target graph according to the data screening condition and the behavior data comprises the following steps:
extracting information keywords from the user information corresponding to each specific condition sub-item by adopting a preset keyword extraction mode to obtain a user tag corresponding to each specific condition sub-item;
Determining the tag weight of each user tag according to the specific behavior data corresponding to each specific condition sub-item;
and inputting the user labels and the label weight of each user label into a preset word cloud image generating tool to generate a word cloud image, and taking the word cloud image as the target user portrait.
2. The data patterning method of claim 1, wherein the transmitting the target graphic and the target user representation to the client comprises:
splitting the target graph into a preset number of sub-graph data;
compressing each sub-graph data according to a preset compression mode to obtain a preset number of compressed graph data;
transmitting the preset number of compressed graphic data to the client so that the client decompresses the compressed graphic data and combines the compressed graphic data to obtain the target graphic, and displaying the target graphic in a preset display area;
and if the user portrait viewing request sent by the client is received, the target user portrait is sent to the client.
3. A data-patterning device, characterized in that the data-patterning device comprises:
The request receiving module is used for receiving a graphic browsing request sent by a client and acquiring data screening conditions and graphic keywords contained in the graphic browsing request;
the data screening module is used for acquiring behavior data meeting the data screening conditions from a preset database;
the feature determining module is used for determining the graphic features of the target graphics according to the graphic keywords;
a graph generating module, configured to generate the target graph using the graph feature and the behavior data;
the portrait generation module is used for generating a target user portrait corresponding to the target graph according to the data screening conditions and the behavior data;
the data sending module is used for sending the target graph and the target user portrait to the client;
the data filtering condition comprises N specific condition sub-items, wherein N is a positive integer, and the graph generating module comprises:
a specific data screening sub-module, configured to screen specific behavior data corresponding to each specific condition sub-item from the behavior data;
the target sub-graph generation sub-module is used for generating a target sub-graph corresponding to each specific behavior data by using the graphic features and each specific behavior data to obtain N target sub-graphs;
The image combination sub-module is used for carrying out combination processing on the N target subgraphs according to a preset combination mode to obtain the target image;
the graphic feature comprises a graphic shape, a graphic color and a reference size, and the target sub-graph generation sub-module comprises:
a behavior data screening unit, configured to obtain, from N pieces of specific behavior data, the specific behavior data with the smallest value as reference data, where the other N-1 pieces of specific behavior data are all other data;
the image size determining unit is used for calculating a reference ratio between the reference data and the preset value unit according to the preset value unit, and performing scaling processing on the reference size according to the reference ratio to obtain the image size corresponding to the reference data;
a first sub-graph generating unit, configured to generate the target sub-graph corresponding to the reference data using the graph shape, the graph color, and the graph size;
the second sub-graph generating unit is used for calculating the data ratio between each other data and the reference data and generating the target sub-graph corresponding to each other data according to the data ratio and the target sub-graph corresponding to the reference data;
The portrait generation module comprises:
the label extraction sub-module is used for extracting information keywords from the user information corresponding to each specific condition sub-item by adopting a preset keyword extraction mode to obtain a user label corresponding to each specific condition sub-item;
the weight determining sub-module is used for determining the tag weight of each user tag according to the specific behavior data corresponding to each specific condition sub-item;
and the word cloud image generation sub-module is used for inputting the user labels and the label weight of each user label into a preset word cloud image generation tool to generate a word cloud image, and taking the word cloud image as a target user portrait.
4. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the data patterning method as claimed in any one of claims 1 to 2 when executing the computer program.
5. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the data patterning method according to any one of claims 1 to 2.
CN201910841759.5A 2019-09-06 2019-09-06 Data graphical processing method and device, computer equipment and storage medium Active CN110781378B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910841759.5A CN110781378B (en) 2019-09-06 2019-09-06 Data graphical processing method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110781378A CN110781378A (en) 2020-02-11
CN110781378B true CN110781378B (en) 2023-09-22

Family

ID=69383346

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910841759.5A Active CN110781378B (en) 2019-09-06 2019-09-06 Data graphical processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110781378B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113010548A (en) * 2020-12-28 2021-06-22 魔元术(苏州)信息科技有限公司 Automatic matching graph system for data billboard
CN116483869A (en) * 2023-04-13 2023-07-25 深圳数阔信息技术有限公司 Big data-based user data analysis method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6801229B1 (en) * 2001-04-06 2004-10-05 Plumbdesign System for creation of visual representation of data
CN106682231A (en) * 2017-01-10 2017-05-17 深圳淞鑫金融服务科技发展有限公司 Graphical visual display method and device for big data
CN108334543A (en) * 2017-12-26 2018-07-27 北京国电通网络技术有限公司 With electricity consumption data visualization methods of exhibiting and system
CN108694223A (en) * 2018-03-26 2018-10-23 北京奇艺世纪科技有限公司 The construction method and device in a kind of user's portrait library

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6549890B2 (en) * 1997-08-29 2003-04-15 Superbserv, Inc. Interactive computer system and data analysis method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant