CN116932860A - Data processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116932860A
Authority
CN
China
Prior art keywords: target, data, portrait, determining, identifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210322241.2A
Other languages
Chinese (zh)
Inventor
余辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Coocaa Network Technology Co Ltd
Original Assignee
Shenzhen Coocaa Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Coocaa Network Technology Co Ltd filed Critical Shenzhen Coocaa Network Technology Co Ltd
Priority to CN202210322241.2A
Publication of CN116932860A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the application relates to a data processing method, a device, an electronic device and a storage medium, wherein the method comprises the following steps: determining data information to be processed, wherein the data information to be processed comprises a first identification set of a target crowd, a target portrait model and a target dimension type corresponding to the target portrait model; determining a portrait generation strategy corresponding to the target dimension type; determining, from the target portrait model, a target dimension data set corresponding to the first identification set under the target dimension type; processing the target dimension data set by using the portrait generation strategy to obtain portrait data; and displaying the portrait data on a first display interface. In this way, feature analysis of the target crowd is realized according to the portrait model and the dimension types corresponding to the portrait model, so that enterprises can push marketing messages in a targeted manner according to the feature analysis result, improving the accuracy with which enterprises push marketing messages.

Description

Data processing method and device, electronic equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of big data processing, in particular to a data processing method, a device, electronic equipment and a storage medium.
Background
With the development of the OTT (Over The Top) industry, users have generated massive amounts of data in the process of using internet televisions. Mining and analyzing the massive data of the user group can help enterprises push marketing messages.
Currently, in the related art, in order to push a marketing message, a target user group meeting a set condition is obtained from massive data of a user group, and the marketing message is pushed to the target user group.
However, in the prior art, during the pushing of a marketing message, only the magnitude of the target user group can be obtained; visualized information such as the characteristics of the target user group cannot be obtained, so the enterprise cannot push marketing messages in a targeted manner, resulting in inaccurate pushing of the enterprise's marketing messages.
Disclosure of Invention
In view of the foregoing, in order to solve the foregoing technical problems or some of the technical problems, embodiments of the present application provide a data processing method, apparatus, electronic device, and storage medium.
In a first aspect, an embodiment of the present application provides a data processing method, including:
determining data information to be processed, wherein the data information to be processed comprises a first identification set of a target crowd, a target portrait model and a target dimension type corresponding to the target portrait model;
determining a portrait generation strategy corresponding to the target dimension type;
determining a target dimension data set corresponding to the first identification set under the target dimension type from the target portrait model;
processing the target dimension data set by using the portrait generation strategy to obtain portrait data;
and displaying the portrait data on a first display interface.
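As an illustrative (non-limiting) sketch, the five steps of the first aspect might be composed as follows; all function and variable names here are assumptions for illustration and do not appear in the application:

```python
# Hypothetical sketch of the five-step method in the first aspect.
# The model is assumed to map each dimension type to per-user dimension data,
# and each strategy is assumed to be a callable over that data.

def process_portrait(first_id_set, target_model, dimension_type, strategies):
    # Step 2: determine the portrait generation strategy for the dimension type
    strategy = strategies[dimension_type]
    # Step 3: determine the target dimension data set for the first
    # identification set under the target dimension type
    dimension_data = target_model[dimension_type]
    target_set = {uid: dimension_data[uid]
                  for uid in first_id_set if uid in dimension_data}
    # Step 4: process the target dimension data set to obtain portrait data
    portrait_data = strategy(target_set)
    # Step 5: the caller displays portrait_data on the first display interface
    return portrait_data

# Example strategy: count the crowd magnitude per dimension value
def magnitude_strategy(target_set):
    counts = {}
    for value in target_set.values():
        counts[value] = counts.get(value, 0) + 1
    return counts

model = {"active channel": {"mac1": "homepage", "mac2": "signal source",
                            "mac3": "homepage"}}
result = process_portrait({"mac1", "mac3", "mac9"}, model, "active channel",
                          {"active channel": magnitude_strategy})
```

Here `mac9` is not in the model and is simply skipped; the claims below handle such match failures by updating the model.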
In one possible implementation manner, the target portrait model carries a second identification set of the first crowd;
the determining, from the target portrait model, a target dimension data set corresponding to the first identifier set under the target dimension type includes:
determining at least one first identifier successfully matched with the second identifier set from the first identifier set;
and determining a target dimension data set corresponding to the at least one first identifier under the target dimension type from the target portrait model.
In one possible embodiment, the method further comprises:
when at least one first identifier which fails to be matched with the second identifier set exists in the first identifier set, determining first data corresponding to the first identifier from a crowd database aiming at each first identifier in the at least one first identifier which fails to be matched, and obtaining a first data set corresponding to the at least one first identifier; the first data characterizes attribute data of a user;
updating the target portrait model according to the first data set.
In one possible embodiment, the target representation model is constructed by:
obtaining an image model template from an image model template library, wherein the image model template at least comprises a dimension type;
acquiring a second data set corresponding to a third identification set of a second crowd from the crowd database, wherein each second data in the second data set represents attribute data of a user;
classifying the data of the second data set based on each dimension type, and determining a dimension data set corresponding to the third identification set under each dimension type;
and filling each dimension data set into the portrait model template to obtain the target portrait model.
In one possible implementation manner, the determining the data information to be processed includes:
receiving a portrait generation instruction, wherein the portrait generation instruction comprises crowd screening conditions, a portrait model identifier and a dimension type identifier corresponding to the portrait model identifier;
and determining the data information to be processed according to the portrait generation instruction.
In one possible implementation manner, the determining data information to be processed according to the portrait generation instruction includes:
determining a first identification set of a target crowd meeting the crowd screening conditions from a crowd database;
determining a target portrait model corresponding to the portrait model identification from a portrait model library;
and determining the target dimension type corresponding to the dimension type identifier from the target portrait model.
In one possible implementation manner, the receiving a portrait generation instruction includes:
when a triggering operation on a first control in a second display interface is monitored, determining that the portrait generation instruction is received, wherein the first control is used for receiving the portrait generation instruction.
In a second aspect, an embodiment of the present application provides a data processing apparatus, including:
the system comprises a determining module, a processing module and a processing module, wherein the determining module is used for determining data information to be processed, and the data information to be processed comprises a first identification set of a target crowd, a target portrait model and a target dimension type corresponding to the target portrait model;
the determining module is further used for determining an portrait generation strategy corresponding to the target dimension type;
the determining module is further configured to determine, from the target portrait model, a target dimension dataset corresponding to the first identifier set under the target dimension type;
the acquisition module is used for processing the target dimension data set by utilizing the portrait generation strategy to obtain portrait data;
and the display module is used for displaying the figure data on the first display interface.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory, the processor being configured to execute a data processing program stored in the memory to implement the data processing method described in the first aspect.
In a fourth aspect, embodiments of the present application provide a storage medium storing one or more programs executable by one or more processors to implement the data processing method as described above.
The data processing method provided by the embodiment of the application comprises the steps of determining data information to be processed, wherein the data information to be processed comprises a first identification set of a target crowd, a target portrait model and a target dimension type corresponding to the target portrait model; determining a portrait generation strategy corresponding to the target dimension type; determining, from the target portrait model, a target dimension data set corresponding to the first identification set under the target dimension type; processing the target dimension data set by using the portrait generation strategy to obtain portrait data; and displaying the portrait data on a first display interface. According to the method, feature analysis of the target crowd is realized according to the portrait model and the dimension types corresponding to the portrait model, and the result of the feature analysis is visualized, so that business personnel can clearly determine the result of the feature analysis of the target crowd, the enterprise can push marketing messages in a targeted manner according to that result, and the accuracy of the enterprise's marketing message pushing is improved.
Drawings
FIG. 1 is a flow chart of a data processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a data processing apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
in the above figures:
10. a determining module; 20. an acquisition module; 30. a display module;
400. an electronic device; 401. a processor; 402. a memory; 4021. an operating system; 4022. an application program; 403. a user interface; 404. a network interface; 405. a bus system.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
For the purpose of facilitating an understanding of the embodiments of the present application, reference will now be made to the following description of specific embodiments, taken in conjunction with the accompanying drawings, which are not intended to limit the embodiments of the application.
Referring to fig. 1, fig. 1 is a flow chart of a data processing method according to an embodiment of the present application. The data processing method provided by the embodiment of the application comprises the following steps:
s101: determining to-be-processed data information, wherein the to-be-processed data information comprises a first identification set of a target crowd, a target portrait model and a target dimension type corresponding to the target portrait model.
The target crowd comprises at least one user. Each user generates data in the process of using an internet television, and all such data are stored in a crowd database; each piece of data comprises attribute data of the user, including basic attribute data and behavior attribute data. Each piece of data in the crowd database is configured with a corresponding identifier, which is the unique identifier of the user; a user's data can be obtained according to this unique identifier. The unique identifier can be set according to actual needs, and in this embodiment is preferably the MAC address. The target portrait model comprises at least one target dimension type and a target dimension data set corresponding to each target dimension type, where a target dimension type is a feature label and the target dimension data under each target dimension type is obtained by classifying the users' data. The target portrait model can be constructed according to the business requirement and the data in the crowd database; the specific construction method is as follows.
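A minimal sketch of one crowd-database record keyed by the MAC address may clarify the data layout; the field names here are illustrative assumptions, not part of the application:

```python
# Illustrative crowd database: one record per user, keyed by the user's
# MAC address (the unique identifier preferred in this embodiment).
# "basic" and "behavior" stand in for basic and behavior attribute data.
crowd_database = {
    "AA:BB:CC:DD:EE:01": {
        "basic": {"region": "Shenzhen"},      # basic attribute data
        "behavior": {"watch_minutes": 132},   # behavior attribute data
        "timestamp": "2022-03-01",            # when the data was generated
    },
}

def get_user_data(mac):
    # A user's data can be obtained from the database by the unique identifier
    return crowd_database.get(mac)

record = get_user_data("AA:BB:CC:DD:EE:01")
```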
In this embodiment, the target portrait model may be constructed according to the following manner:
obtaining a portrait model template from a portrait model template library, wherein the portrait model template at least comprises a dimension type;
acquiring a second data set corresponding to a third identification set of a second crowd from the crowd database, wherein each second data in the second data set represents attribute data of a user;
classifying the data of the second data set based on each dimension type, and determining a dimension data set corresponding to the third identification set under each dimension type;
and filling each dimension data set into the portrait model template to obtain a target portrait model.
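The four construction steps above can be sketched as follows, assuming (as a hypothetical illustration) that a template maps each dimension type to a classification rule; the template contents and names are stand-ins, not the application's actual rules:

```python
# Sketch of building a target portrait model from a template and the
# crowd database. Classification rules here are illustrative assumptions.

def build_portrait_model(template, crowd_db, third_id_set):
    # Step 2: fetch the second data set for the third identification set
    second_data = {uid: crowd_db[uid] for uid in third_id_set if uid in crowd_db}
    # Steps 3 and 4: classify the data under each dimension type and fill
    # each resulting dimension data set into the template
    model = {"ids": set(second_data), "dimensions": {}}
    for dim_type, classify in template.items():
        model["dimensions"][dim_type] = {
            uid: classify(data) for uid, data in second_data.items()
        }
    return model

# Hypothetical "user active scene" template with one dimension type
template = {"active channel": lambda d: "homepage" if d["entry"] == "home"
            else "signal source"}
crowd_db = {"mac1": {"entry": "home"}, "mac2": {"entry": "hdmi"}}
model = build_portrait_model(template, crowd_db, {"mac1", "mac2"})
```

Note that the built model carries the identification set (`"ids"`) alongside the dimension data, matching the statement below that each portrait model carries an identification set corresponding to its dimension data sets.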
The attribute data of the user are described above and are not repeated here. After the dimension data set corresponding to the third identification set under each dimension type is determined, each dimension data set is filled into the portrait model template so that it is associated with its dimension type, thereby obtaining the target portrait model. The portrait model template library comprises at least one portrait model template, and the templates can be set according to service requirements. In this embodiment, the portrait model templates may be: a user active scene model template, whose dimension types may be active channel and viewing layout; a user conversion potential model template, whose dimension types may be play-starting and payment viscosity, payment gold-absorbing list, and payment type; a user payment analysis model template, whose dimension types may be product package arrival, last payment driving, turn-on time preference, order ticket frequency, product package loyalty, and continuous-package release frequency; and a payment conversion prediction model template, whose dimension types may be product package preference prediction. The correspondence between the portrait model templates and the dimension types is shown in Tables 1, 2, 3 and 4.
The above correspondence between the portrait model templates and the dimension types is only a preferred implementation of this embodiment; those skilled in the art may update this correspondence according to specific service requirements and generate new portrait model templates.
It should be noted that, the second crowd in the crowd database represents all crowds using internet televisions, and the portrait model constructed by the portrait model template and the crowd database carries an identification set corresponding to the crowds, where the identification set carried in each portrait model may be either the third identification set or a part of the identification sets in the third identification set, and the identification set carried in each portrait model corresponds to the dimension data set in each dimension type.
TABLE 1 relationship between portrayal model templates, dimension types, and portrayal generation policies
TABLE 2 relationship between portrayal model templates, dimension types, and portrayal generation policies
TABLE 3 relationship between portrayal model templates, dimension types, and portrayal generation policies
TABLE 4 relationship between portrayal model templates, dimension types, and portrayal generation policies
In this embodiment, in step S101, determining the data information to be processed includes:
receiving an portrayal generation instruction, wherein the portrayal generation instruction comprises crowd screening conditions, a portrayal model identifier and a dimension type identifier corresponding to the portrayal model identifier;
and determining the data information to be processed according to the portrait generation instruction.
With time configuration as the screening condition, the crowd screening conditions can be classified into static time configuration and dynamic time configuration, and the target crowd meeting the crowd screening conditions is determined according to the selected configuration. Each portrait model is configured with a corresponding identifier, which is the unique identifier of the portrait model; it can be set according to actual needs, and this embodiment is not specifically limited herein. Similarly, each dimension type included in a portrait model is configured with an identifier, which is the unique identifier of the dimension type; it can also be set according to actual needs, and this embodiment is not specifically limited herein.
In this embodiment, the number of portrait model identifiers may be one or more, which the user may select according to actual needs; similarly, the number of dimension type identifiers corresponding to each portrait model identifier may be one or more, which the user may likewise select according to actual needs.
In this embodiment, determining data information to be processed according to an image generation instruction includes:
determining a first identification set of target groups meeting group screening conditions from a group database;
determining a target portrait model corresponding to the portrait model identification from a portrait model library;
and determining the target dimension type corresponding to the dimension type identification from the target portrait model.
The data generated in the process of using the Internet television by each user are stored in the crowd database, and the data are stored in the crowd database according to the corresponding relation between each data and the unique identifier. Each data has a time stamp. When the crowd screening condition is received, selecting data corresponding to the crowd screening condition from the crowd database according to the timestamp corresponding to the data, and determining a first identification set (namely, the first identification set of the data corresponding to the target crowd meeting the crowd screening condition) according to the corresponding relation between the data and the unique identification. At least one portrait model is stored in the portrait model library, and the corresponding relation between each portrait model and the unique identifier is stored in the portrait model library, and when the portrait model identifier is received, the target portrait model corresponding to the portrait model identifier can be determined from the portrait model library according to the corresponding relation between the portrait model and the unique identifier. When the dimension type identification is received, the dimension type in each portrait model is also provided with a unique identification, and the target dimension type can be determined from the target portrait model according to the dimension type identification.
When the number of received portrait model identifiers is plural, a target portrait model corresponding to each portrait model identifier is determined from a portrait model library according to the correspondence between the portrait model and the unique identifier, and similarly, when the number of dimension type identifiers corresponding to the portrait model identifiers is plural, a target dimension type corresponding to each dimension type identifier in the target portrait model can be determined according to the correspondence between the dimension type and the unique identifier in the target portrait model corresponding to the portrait model identifier.
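The resolution of a portrait generation instruction into the to-be-processed data information can be sketched as below; the lookup structures and field names are hypothetical, and the timestamp filter stands in for the static/dynamic time configuration:

```python
# Sketch of resolving an instruction (crowd screening condition, portrait
# model identifier, dimension type identifiers) into the first identification
# set, the target portrait model, and the target dimension types.

def resolve_instruction(instruction, crowd_db, model_library):
    start, end = instruction["screening"]  # time window as screening condition
    # First identification set: users whose data timestamps fall in the window
    first_id_set = {uid for uid, rec in crowd_db.items()
                    if start <= rec["timestamp"] <= end}
    # Target portrait model via the model's unique identifier
    target_model = model_library[instruction["model_id"]]
    # Target dimension types via the dimension type identifiers
    target_dim_types = [target_model["dim_names"][i]
                        for i in instruction["dim_ids"]]
    return first_id_set, target_model, target_dim_types

model_library = {"M1": {"dim_names": {"D1": "active channel"}}}
crowd_db = {"mac1": {"timestamp": "2022-03-05"},
            "mac2": {"timestamp": "2021-12-31"}}
ids, model, dims = resolve_instruction(
    {"screening": ("2022-01-01", "2022-12-31"),
     "model_id": "M1", "dim_ids": ["D1"]},
    crowd_db, model_library)
```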
In this embodiment, receiving an image generation instruction includes:
and when the triggering operation of the first control in the second display interface is monitored, determining that an image generation instruction is received, wherein the first control is used for receiving the image generation instruction.
The triggering operation may be a clicking operation or a touching operation on the second display interface. The second display interface is provided with the first control; it is also provided with at least one second control for the user to select crowd screening conditions, at least one third control for the user to select portrait models, and at least one fourth control for the user to select dimension types. After the user selects the crowd screening conditions, the portrait models and the dimension types corresponding to the portrait models on the second display interface, clicking or touching the first control generates a portrait generation instruction comprising the crowd screening conditions, the portrait model identifiers and the dimension type identifiers corresponding to the portrait model identifiers, and the portrait generation instruction is then sent.
S102: and determining a portrait generation strategy corresponding to the target dimension type.
An association database stores the correspondence between the dimension types in each portrait model and the portrait generation strategies; the strategies are described with reference to Tables 1, 2, 3 and 4, and each is used to process the dimension data under a dimension type to generate the portrait data corresponding to that dimension type.
S103: and determining a target dimension data set corresponding to the first identification set under the target dimension type from the target portrait model.
The dimension data set under the target dimension type in the portrait model is obtained from the data in the crowd database, and because the data of each user corresponds to a unique identifier, the dimension data set formed from the crowd data also has a corresponding identifier set; the portrait model therefore further comprises the identifier set corresponding to the dimension data set.
And after the first identification set of the target crowd is obtained, matching is carried out according to the first identification set and the identification set contained in the target portrait model, so that a target dimension data set corresponding to the first identification set under the target dimension type can be obtained. The method comprises the following specific steps:
the target portrait model carries a second identification set of the first crowd;
determining a target dimension data set corresponding to the first identification set under the target dimension type from the target portrait model comprises the following steps:
determining at least one first identifier successfully matched with the second identifier set from the first identifier set;
and determining a target dimension data set corresponding to at least one first identifier under the target dimension type from the target portrait model.
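A minimal sketch of this matching step, with illustrative names, might be:

```python
# Match the first identification set against the second identification set
# carried by the target portrait model, then select the dimension data of
# the successfully matched identifiers only.

def match_identifiers(first_id_set, second_id_set):
    matched = first_id_set & second_id_set   # successfully matched
    failed = first_id_set - second_id_set    # failed to match
    return matched, failed

def target_dimension_set(model_dimensions, matched, dim_type):
    # Target dimension data set under the target dimension type
    return {uid: model_dimensions[dim_type][uid] for uid in matched}

matched, failed = match_identifiers({"mac1", "mac2", "mac9"},
                                    {"mac1", "mac2", "mac3"})
dims = {"active channel": {"mac1": "homepage", "mac2": "signal source",
                           "mac3": "homepage"}}
subset = target_dimension_set(dims, matched, "active channel")
```

The `failed` set feeds the update step described next, where unmatched identifiers trigger a model update.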
The second identification set carried in the target portrait model corresponds to target dimension data under each target dimension type in the target portrait model.
The data processing method provided in this embodiment further includes:
when at least one first identifier in the first identifier set fails to be matched with the second identifier set, determining, for each first identifier in the at least one first identifier which fails to be matched, first data corresponding to the first identifier from a crowd database, and obtaining a first data set corresponding to the at least one first identifier; the first data characterizes attribute data of the user;
the target representation model is updated based on the first data set.
The attribute data of the user are described above and are not repeated here. When at least one first identifier in the first identifier set fails to be matched with the second identifier set, this indicates that, after the portrait model was constructed, new users have used the internet television and new data have been generated by their use. To make the portrait model more accurate, the target portrait model is updated according to the newly generated data: the new data generated by the new users are classified to obtain the dimension data under each dimension type, thereby updating the target portrait model.
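A sketch of this update step, under the same hypothetical model layout and classification rules as before (all names are illustrative assumptions):

```python
# For identifiers that failed to match, pull the users' attribute data
# from the crowd database and fold it into the target portrait model.

def update_model(model, failed_ids, crowd_db, classify_by_dim):
    # First data set for the identifiers that failed to match
    first_data_set = {uid: crowd_db[uid] for uid in failed_ids
                      if uid in crowd_db}
    for uid, data in first_data_set.items():
        model["ids"].add(uid)  # the model now carries the new identifier
        for dim_type, classify in classify_by_dim.items():
            # Classify the new data to obtain dimension data per dimension type
            model["dimensions"][dim_type][uid] = classify(data)
    return model

model = {"ids": {"mac1"},
         "dimensions": {"active channel": {"mac1": "homepage"}}}
crowd_db = {"mac9": {"entry": "hdmi"}}
classify = {"active channel": lambda d: "signal source"
            if d["entry"] == "hdmi" else "homepage"}
update_model(model, {"mac9"}, crowd_db, classify)
```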
In this embodiment, a time period threshold may be preset; when the elapsed interval reaches the preset threshold, the crowd database is checked, and if data generated by new users exist in the crowd database, each portrait model in the portrait model library is updated according to the newly generated data.
It should also be noted that, after each portrait model in the portrait model library is constructed, the data in the crowd database can be monitored in real time according to the correspondence between the identification set carried by each portrait model and the data in the crowd database; when an update to the monitored data occurs, the corresponding portrait model is updated in real time, ensuring the accuracy of the portrait model and improving the accuracy of the feature analysis result of the target crowd.
S104: and processing the target dimension data set by using the portrait generation strategy to obtain portrait data.
The target dimension data set is computed according to the portrait generation strategy to obtain one or more pieces of portrait data, where the portrait data may include the crowd magnitudes under the target dimension type and the ratio of each magnitude. For example, for Table 1, when the target portrait model is the user active scene model and the target dimension type corresponding to it is the active channel, the target dimension data set corresponding to the first identification set under the active channel is obtained according to the first identification set of the target crowd, and the portrait generation strategy corresponding to the active channel computes: the magnitude and ratio of the crowd that is active on the homepage for more than five minutes after power-on; the magnitude and ratio of the crowd that enters the APK directly on power-on, or enters the target APK within five minutes after power-on; the magnitude and ratio of the crowd that enters the signal source directly on power-on, or within five minutes after power-on; and the magnitude and ratio of the target crowd that is not powered on that day.
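A compact sketch of such a strategy (the dimension values are illustrative stand-ins for the active-channel categories described above):

```python
# Portrait generation strategy computing, per dimension value, the crowd
# magnitude and its ratio (share of the target crowd under this dimension).

def magnitude_and_ratio(target_dim_data):
    total = len(target_dim_data)
    counts = {}
    for value in target_dim_data.values():
        counts[value] = counts.get(value, 0) + 1
    return {value: {"magnitude": n, "ratio": n / total}
            for value, n in counts.items()}

# Active-channel example: which entry each user used after power-on
data = {"mac1": "homepage", "mac2": "homepage",
        "mac3": "signal source", "mac4": "not powered on"}
portrait = magnitude_and_ratio(data)
```

The resulting magnitude/ratio pairs are exactly the kind of portrait data that step S105 renders as a pie chart, bar chart or line chart.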
S105: and displaying the portrait data on a first display interface.
Wherein, the figure image data can be displayed in the form of a pie chart, a bar chart, a line chart, etc.
The data processing method includes determining data information to be processed, wherein the data information to be processed includes a first identification set of a target crowd, a target portrait model and a target dimension type corresponding to the target portrait model; determining a portrait generation strategy corresponding to the target dimension type; determining, from the target portrait model, a target dimension data set corresponding to the first identification set under the target dimension type; processing the target dimension data set by using the portrait generation strategy to obtain portrait data; and displaying the portrait data on a first display interface. According to the method, feature analysis of the target crowd is realized according to the portrait model and the dimension types corresponding to the portrait model, and the result of the feature analysis is visualized, so that business personnel can clearly determine the result of the feature analysis of the target crowd, the enterprise can push marketing messages in a targeted manner according to that result, and the accuracy of the enterprise's marketing message pushing is improved.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. The data processing apparatus provided by the embodiment of the application comprises: a determining module 10, an acquisition module 20 and a display module 30, wherein the determining module 10 is used for determining data information to be processed, and the data information to be processed comprises a first identification set of a target crowd, a target portrait model and a target dimension type corresponding to the target portrait model; the determining module 10 is further configured to determine a portrait generation strategy corresponding to the target dimension type; the determining module 10 is further configured to determine, from the target portrait model, a target dimension data set corresponding to the first identification set under the target dimension type; the acquisition module 20 is used for processing the target dimension data set by using the portrait generation strategy to obtain portrait data; and the display module 30 is used for displaying the portrait data on the first display interface.
In this embodiment, the target portrait model carries a second identifier set of a first crowd; the determining module 10 is further configured to: determine at least one first identifier successfully matched with the second identifier set from the first identifier set; and determine, from the target portrait model, a target dimension data set corresponding to the at least one first identifier under each target dimension type.
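The matching step above is essentially a set intersection. The following hedged sketch (identifier values are invented) splits the first identifier set into identifiers the model already covers and identifiers that fall to the updating module's path:

```python
# Hypothetical sketch of the identifier matching step: identifiers in the
# query set that also appear in the model's second identifier set are
# "successfully matched"; the rest must be resolved from the crowd database.

def split_by_match(first_id_set, second_id_set):
    matched = first_id_set & second_id_set    # covered by the portrait model
    unmatched = first_id_set - second_id_set  # need data from the crowd DB
    return matched, unmatched

matched, unmatched = split_by_match({"u1", "u2", "u9"}, {"u1", "u2", "u3"})
```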
The data processing apparatus provided in this embodiment further comprises an updating module. When at least one first identifier in the first identifier set fails to match the second identifier set, the updating module is used for determining, for each such first identifier, first data corresponding to the first identifier from a crowd database, so as to obtain a first data set corresponding to the at least one first identifier, wherein the first data characterizes attribute data of a user; and updating the target portrait model according to the first data set.
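The update path for identifiers that failed to match can be sketched as below. The dict-backed crowd database and portrait model, and the attribute fields, are illustrative assumptions, not structures the patent specifies.

```python
# Sketch of the update path, assuming a dict-backed crowd database and
# portrait model; all field names and structures are illustrative.

crowd_db = {"u9": {"age": 31, "interest": "sports"}}

def update_model(portrait_model, unmatched_ids, crowd_db):
    # For each identifier that failed to match, pull its attribute data
    # (the "first data") from the crowd database ...
    first_data_set = {uid: crowd_db[uid]
                      for uid in unmatched_ids if uid in crowd_db}
    # ... and fold the resulting first data set into the target portrait model.
    portrait_model.update(first_data_set)
    return portrait_model

model = {"u1": {"age": 25}}
update_model(model, {"u9"}, crowd_db)
```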
The data processing apparatus provided in this embodiment further comprises a construction module, wherein the construction module is used for acquiring a portrait model template from a portrait model template library, the portrait model template comprising at least one dimension type;
acquiring a second data set corresponding to a third identification set of a second crowd from the crowd database, wherein each second data in the second data set characterizes attribute data of a user;
classifying the data of the second data set based on each dimension type, and determining a dimension data set corresponding to the third identification set under each dimension type;
and filling each dimension data set into the portrait model template to obtain a target portrait model.
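The construction steps above (fetch template, fetch second data set, classify per dimension type, fill the template) can be sketched as follows. The dimension-type names and attribute fields are invented for the example.

```python
# Illustrative construction step: classify the second data set under each
# dimension type declared by the template, then fill the template.
# Field names ("age_band", "interest") are hypothetical.

template = {"dimension_types": ["age_band", "interest"]}

second_data_set = {
    "u1": {"age_band": "18-25", "interest": "sports"},
    "u2": {"age_band": "26-35", "interest": "news"},
}

def build_portrait_model(template, second_data_set):
    # One dimension data set per dimension type in the template.
    model = {dim: {} for dim in template["dimension_types"]}
    for uid, attrs in second_data_set.items():
        for dim in template["dimension_types"]:
            # Classify each user's attribute value under the dimension type.
            model[dim].setdefault(attrs[dim], []).append(uid)
    return model

built = build_portrait_model(template, second_data_set)
```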
The determining module 10 in this embodiment is further configured to:
receiving a portrait generation instruction, wherein the portrait generation instruction comprises crowd screening conditions, a portrait model identifier and a dimension type identifier corresponding to the portrait model identifier;
and determining the data information to be processed according to the portrait generation instruction.
The determining module 10 in this embodiment is further configured to:
determining a first identification set of target groups meeting group screening conditions from a group database;
determining a target portrait model corresponding to the portrait model identification from a portrait model library;
and determining a target dimension type corresponding to the dimension type identifier from the target portrait model.
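Resolving a portrait generation instruction into the data information to be processed can be sketched as below. The instruction fields mirror the claim language, but the lookup tables, screening condition, and identifier values are illustrative assumptions.

```python
# Sketch of resolving a portrait generation instruction; the crowd
# database, model library, and screening condition are hypothetical.

crowd_db = {"u1": {"city": "SZ"}, "u2": {"city": "BJ"}, "u3": {"city": "SZ"}}
portrait_model_library = {"pm-1": {"dims": {"dim-7": "interest"}}}

def resolve(instruction):
    # First identification set: users satisfying the crowd screening condition.
    cond = instruction["screening_condition"]
    first_ids = {uid for uid, a in crowd_db.items() if a["city"] == cond["city"]}
    # Target portrait model, looked up in the model library by its identifier.
    model = portrait_model_library[instruction["model_id"]]
    # Target dimension type, looked up in the model by the dimension type identifier.
    dim_type = model["dims"][instruction["dim_id"]]
    return first_ids, model, dim_type

ids, model, dim = resolve({"screening_condition": {"city": "SZ"},
                           "model_id": "pm-1", "dim_id": "dim-7"})
```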
The determining module 10 in this embodiment is further configured to:
and when a triggering operation on the first control in the second display interface is detected, determining that a portrait generation instruction is received, wherein the first control is used for receiving the portrait generation instruction.
With the data processing apparatus described above, feature analysis of the target crowd is performed according to the portrait model and the dimension types corresponding to the portrait model, and the result of the feature analysis is visualized, so that business personnel can clearly determine the result of the feature analysis of the target crowd, an enterprise can push marketing messages in a targeted manner according to that result, and the accuracy of pushing the enterprise's marketing messages is improved.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 400 shown in fig. 3 includes: at least one processor 401, a memory 402, at least one network interface 404, and other user interfaces 403. The various components in the electronic device 400 are coupled together by a bus system 405. It is understood that the bus system 405 is used to enable connected communications between these components. In addition to a data bus, the bus system 405 includes a power bus, a control bus, and a status signal bus. However, for clarity of illustration, the various buses are labeled as the bus system 405 in fig. 3.
The user interface 403 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, a trackball, a touch pad, or a touch screen, etc.).
It will be appreciated that the memory 402 in embodiments of the application can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 402 described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some implementations, the memory 402 stores the following elements, executable units or data structures, or a subset thereof, or an extended set thereof: an operating system 4021 and application programs 4022.
The operating system 4021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application programs 4022 include various application programs such as a Media Player (Media Player), a Browser (Browser), and the like for realizing various application services. A program for implementing the method of the embodiment of the present application may be included in the application program 4022.
In the embodiment of the present application, the processor 401 is configured to execute the method steps provided in the method embodiments by calling a program or an instruction stored in the memory 402, specifically a program or an instruction stored in the application program 4022, for example: determining data information to be processed, wherein the data information to be processed comprises a first identification set of a target crowd, a target portrait model, and a target dimension type corresponding to the target portrait model; determining a portrait generation strategy corresponding to the target dimension type; determining, from the target portrait model, a target dimension data set corresponding to the first identification set under the target dimension type; processing the target dimension data set by using the portrait generation strategy to obtain portrait data; and displaying the portrait data on a first display interface.
The method disclosed in the above embodiments of the present application may be applied to the processor 401 or implemented by the processor 401. The processor 401 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 401 or by instructions in the form of software. The processor 401 may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be executed directly by a hardware decoding processor, or by a combination of hardware and software units in a decoding processor. The software units may be located in a storage medium well known in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, or registers. The storage medium is located in the memory 402, and the processor 401 reads the information in the memory 402 and, in combination with its hardware, performs the steps of the above method.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Digital Signal Processing Devices (DSPD), Programmable Logic Devices (PLD), Field-Programmable Gate Arrays (FPGA), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented by means of units that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
The electronic device provided in this embodiment may be the electronic device shown in fig. 3, and may perform all steps of the data processing method shown in fig. 1, thereby achieving the technical effects of the data processing method shown in fig. 1; for details, refer to the description of fig. 1, which is not repeated here for brevity.
The embodiment of the application also provides a storage medium (computer readable storage medium). The storage medium here stores one or more programs. Wherein the storage medium may comprise volatile memory, such as random access memory; the memory may also include non-volatile memory, such as read-only memory, flash memory, hard disk, or solid state disk; the memory may also comprise a combination of the above types of memories.
When the one or more programs in the storage medium are executed by one or more processors, the data processing method described above, performed on the data processing apparatus side, is implemented.
The processor is configured to execute a data processing program stored in the memory to implement the following steps of the data processing method executed on the data processing apparatus side: determining data information to be processed, wherein the data information to be processed comprises a first identification set of a target crowd, a target portrait model, and a target dimension type corresponding to the target portrait model; determining a portrait generation strategy corresponding to the target dimension type; determining, from the target portrait model, a target dimension data set corresponding to the first identification set under the target dimension type; processing the target dimension data set by using the portrait generation strategy to obtain portrait data; and displaying the portrait data on a first display interface.
Those of skill would further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative units and steps have been described above generally in terms of their function. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. The software module may be disposed in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing description of the embodiments is provided to illustrate the general principles of the application and is not intended to limit the scope of the application to the particular embodiments; any modifications, equivalents, improvements, and the like that fall within the spirit and principles of the application are intended to be included within the scope of the application.

Claims (10)

1. A method of data processing, comprising:
determining data information to be processed, wherein the data information to be processed comprises a first identification set of a target crowd, a target portrait model and a target dimension type corresponding to the target portrait model;
determining a portrait generation strategy corresponding to the target dimension type;
determining a target dimension data set corresponding to the first identification set under the target dimension type from the target portrait model;
processing the target dimension data set by using the portrait generation strategy to obtain portrait data;
and displaying the portrait data on a first display interface.
2. The method of claim 1, wherein the target portrait model carries a second identifier set of a first crowd;
the determining, from the target portrait model, a target dimension data set corresponding to the first identifier set under the target dimension type includes:
determining at least one first identifier successfully matched with the second identifier set from the first identifier set;
and determining a target dimension data set corresponding to the at least one first identifier under the target dimension type from the target portrait model.
3. The method according to claim 2, characterized in that the method further comprises:
when at least one first identifier in the first identifier set fails to match the second identifier set, determining, for each first identifier in the at least one first identifier that fails to match, first data corresponding to the first identifier from a crowd database, so as to obtain a first data set corresponding to the at least one first identifier, wherein the first data characterizes attribute data of a user;
updating the target portrait model according to the first data set.
4. The method of claim 1, wherein the target portrait model is constructed by:
obtaining a portrait model template from a portrait model template library, wherein the portrait model template comprises at least one dimension type;
acquiring a second data set corresponding to a third identification set of a second crowd from the crowd database, wherein each second data in the second data set represents attribute data of a user;
classifying the data of the second data set based on each dimension type, and determining a dimension data set corresponding to the third identification set under each dimension type;
and filling each dimension data set into the portrait model template to obtain the target portrait model.
5. The method of claim 1, wherein the determining the data information to be processed comprises:
receiving a portrait generation instruction, wherein the portrait generation instruction comprises crowd screening conditions, a portrait model identifier and a dimension type identifier corresponding to the portrait model identifier;
and determining the data information to be processed according to the portrait generation instruction.
6. The method of claim 5, wherein the determining the data information to be processed according to the portrait generation instruction comprises:
determining a first identification set of a target crowd meeting the crowd screening conditions from a crowd database;
determining a target portrait model corresponding to the portrait model identification from a portrait model library;
and determining the target dimension type corresponding to the dimension type identifier from the target portrait model.
7. The method of claim 5 or 6, wherein the receiving a portrait generation instruction comprises:
and when a triggering operation on a first control in a second display interface is detected, determining that the portrait generation instruction is received, wherein the first control is used for receiving the portrait generation instruction.
8. A data processing apparatus, comprising:
the system comprises a determining module, a processing module and a processing module, wherein the determining module is used for determining data information to be processed, and the data information to be processed comprises a first identification set of a target crowd, a target portrait model and a target dimension type corresponding to the target portrait model;
the determining module is further used for determining an portrait generation strategy corresponding to the target dimension type;
the determining module is further configured to determine, from the target portrait model, a target dimension dataset corresponding to the first identifier set under the target dimension type;
the acquisition module is used for processing the target dimension data set by utilizing the portrait generation strategy to obtain portrait data;
and the display module is used for displaying the figure data on the first display interface.
9. An electronic device, comprising: a processor and a memory, the processor being configured to execute a data processing program stored in the memory to implement the data processing method according to any one of claims 1 to 7.
10. A storage medium storing one or more programs executable by one or more processors to implement the data processing method of any one of claims 1-7.
CN202210322241.2A 2022-03-29 2022-03-29 Data processing method and device, electronic equipment and storage medium Pending CN116932860A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210322241.2A CN116932860A (en) 2022-03-29 2022-03-29 Data processing method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116932860A true CN116932860A (en) 2023-10-24

Family

ID=88390205



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination