CN113409076B - Method and system for constructing user portrait based on big data and cloud platform - Google Patents


Info

Publication number: CN113409076B
Application number: CN202110650469.XA
Authority: CN (China)
Prior art keywords: portrait, dynamic content, target dynamic, target, outline
Legal status: Active (an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN113409076A
Inventors: 张文博, 黄国华, 陈剑伟
Current Assignee: Guangzhou Tension Information Technology Co ltd
Original Assignee: Guangzhou Tension Information Technology Co ltd
Application filed by Guangzhou Tension Information Technology Co ltd; published as CN113409076A and, upon grant, as CN113409076B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201: Market modelling; Market analysis; Collecting market data
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval of structured data, e.g. relational data
    • G06F16/23: Updating
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The method, system, and cloud platform for constructing a user portrait based on big data acquire the shared to-be-processed portrait outline corresponding to each target dynamic content contained in user behavior data, together with the movement association relationship corresponding to that outline. The user behavior data is divided to obtain the description features of each target dynamic content, and the to-be-processed portrait outline is updated according to those description features and the movement association relationship. Because the portrait of the target dynamic content is updated automatically, labor cost is saved and updating efficiency is improved. Moreover, after the portrait of the target dynamic content has been updated, errors in portrait construction are greatly reduced in scenarios where that portrait must be applied to perform related operations, which improves the accuracy of portrait construction.

Description

Method and system for constructing user portrait based on big data and cloud platform
Technical Field
The application relates to the technical field of big data and portrait construction, and in particular to a method, a system, and a cloud platform for constructing a user portrait based on big data.
Background
Intensifying market competition and rapidly evolving information technology have driven workflow data processing technology to develop quickly; it is now widely applied in office automation, software development process data processing, and industrial manufacturing, and has become an effective way to realize enterprise informatization. An independent workflow engine system, built on current internet information technology, helps enterprises achieve low-cost, high-efficiency collaborative office work and greatly improves collaborative efficiency. All subsequent workflow-related systems can reuse the independent workflow engine system, so a workflow platform does not need to be purchased separately, which greatly reduces cost and advances industry informatization.
With continuous technological progress, and building on an independent workflow engine system, the user portrait serves as an effective tool for delineating target users and connecting user requirements with design directions, and is widely applied in many fields. In practice, plain, everyday language is used to link the attributes and behaviors of a user with the expected data conversion.
Introducing big data technology into user portrait construction has improved construction speed and brought users a good experience. However, existing techniques for constructing a user portrait still have drawbacks.
Disclosure of Invention
In view of this, the present application provides a method, system and cloud platform for constructing a user portrait based on big data.
In a first aspect, a method for constructing a user portrait based on big data is provided, including:
acquiring user behavior data and a to-be-processed portrait outline corresponding to a target dynamic content set contained in the user behavior data, wherein the target dynamic content set contains at least one target dynamic content in the user behavior data;
acquiring a movement association relationship corresponding to the to-be-processed portrait outline;
dividing the user behavior data to obtain description features of the target dynamic content set, wherein the description features of the target dynamic content set include the description features of the at least one target dynamic content;
and updating the to-be-processed portrait outline according to the description features of the target dynamic content set and the movement association relationship to obtain an updated portrait.
Further, the method includes at least one of the following:
acquiring a target portrait outline of a target recognition portrait corresponding to the target dynamic content set, and determining that the error between the to-be-processed portrait outline and the target portrait outline is less than or equal to a first preset error;
and acquiring a recognition result of a target recognition portrait corresponding to the target dynamic content set, and determining, based on the recognition result and the movement association relationship, that an outline label exists for the to-be-processed portrait outline, wherein the error between the portrait of the outline label and the to-be-processed portrait outline is less than or equal to a second preset error.
Further, the target dynamic content includes recognition result guidance content, and the description features include recognition result operation data; updating the to-be-processed portrait outline according to the description features of the target dynamic content set and the movement association relationship to obtain an updated portrait includes:
determining a final recognition corresponding to the target dynamic content set according to the description features of the target dynamic content set and the movement association relationship;
and updating the to-be-processed portrait outline based on the final recognition to obtain an updated portrait.
Further, determining the final recognition corresponding to the target dynamic content set according to the description features of the target dynamic content set and the movement association relationship includes:
if the recognition result operation data of the target dynamic content set includes a feedback operation, determining, according to the movement association relationship, the final recognition corresponding to the target dynamic content set as the candidate that satisfies all of the following conditions: it references the to-be-processed portrait outline, it is adjacent to the to-be-processed portrait outline, and it is associated with the feedback operation.
Further, determining the final recognition corresponding to the target dynamic content set according to the description features of the target dynamic content set and the movement association relationship includes:
if the recognition result operation data of the target dynamic content set includes only a first compensation operation, determining, according to the movement association relationship, a compensation that references the to-be-processed portrait outline and whose error is adjacent to the to-be-processed portrait outline, wherein the first compensation operation includes a left-turn operation or a right-turn operation;
and determining the compensation as the final recognition of the target dynamic content.
Further, determining the final recognition corresponding to the target dynamic content set according to the description features of the target dynamic content set and the movement association relationship includes:
if the recognition result operation data of the target dynamic content set includes a second compensation operation, determining, according to the movement association relationship, the final recognition corresponding to the target dynamic content set as the candidate that satisfies all of the following conditions: it references the to-be-processed portrait outline, it is adjacent to the to-be-processed portrait outline, and it corresponds to the second compensation operation before compensation; wherein the second compensation operation includes at least two of a left-turn operation, a right-turn operation, and a straight operation.
Further, determining the final recognition corresponding to the target dynamic content set according to the description features of the target dynamic content set and the movement association relationship includes:
if the recognition result operation data of the target dynamic content set includes only a straight operation, determining, according to the movement association relationship, an outline label that references the to-be-processed portrait outline and is adjacent to the to-be-processed portrait outline;
and if the outline label corresponds to at least one of feedback or compensation, determining a target recognition portrait corresponding to the target dynamic content set, and determining the standard index of the outline label that is consistent with the recognition result of the target recognition portrait as the final recognition corresponding to the target dynamic content set.
Further, the method further includes:
acquiring a target recognition portrait corresponding to the target dynamic content set;
acquiring a first motion track contained in the user behavior data and a second motion track of the target recognition portrait;
and updating the target recognition portrait according to the first motion track and the second motion track.
In a second aspect, a system for constructing a user portrait based on big data is provided, including a processor and a memory in communication with each other, the processor being configured to read a computer program from the memory and execute it to implement the above method.
In a third aspect, a cloud platform is provided, including: a memory for storing a computer program; and a processor coupled to the memory and configured to execute the computer program stored in the memory to implement the above method.
The method, system, and cloud platform for constructing a user portrait based on big data acquire the shared to-be-processed portrait outline corresponding to each target dynamic content contained in user behavior data, together with the movement association relationship corresponding to that outline; divide the user behavior data to obtain the description features of each target dynamic content; and update the to-be-processed portrait outline according to those description features and the movement association relationship, so that the portrait of the target dynamic content is updated automatically.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
FIG. 1 is a flowchart of a method for building a user representation based on big data according to an embodiment of the present application.
FIG. 2 is a block diagram of an apparatus for constructing a user representation based on big data according to an embodiment of the present application.
FIG. 3 is an architecture diagram of a system for building a user representation based on big data according to an embodiment of the present application.
Detailed Description
To better understand the technical solutions, they are described in detail below with reference to the drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples of the present application are detailed descriptions of the technical solutions, not limitations of them, and that the technical features in the embodiments and examples may be combined with one another where no conflict arises.
To address the technical problems described in the background, the inventors provide a method, a system, and a cloud platform for constructing a user portrait based on big data. The scheme acquires the shared to-be-processed portrait outline corresponding to each target dynamic content contained in user behavior data and the movement association relationship corresponding to that outline, divides the user behavior data to obtain the description features of each target dynamic content, and updates the to-be-processed portrait outline according to those description features and the movement association relationship. Because the portrait of the target dynamic content is updated automatically, labor cost is saved and updating efficiency is improved; after the update, errors in portrait construction are greatly reduced in scenarios where the portrait must be applied to perform related operations, which improves the accuracy of portrait construction.
Referring to FIG. 1, a method for constructing a user portrait based on big data is shown. The method may be applied to a risk-account partitioning system for intrusion prevention and may include the technical solutions described in the following steps 100 to 400.
Step 100: acquire user behavior data and a to-be-processed portrait outline corresponding to a target dynamic content set contained in the user behavior data, wherein the target dynamic content set contains at least one target dynamic content in the user behavior data.
Illustratively, the target dynamic content set represents a set of the people or things that are in motion during the shooting process.
Step 200: acquire a movement association relationship corresponding to the to-be-processed portrait outline.
Illustratively, the to-be-processed portrait outline represents the boundary formed by the movable people or things.
Step 300: divide the user behavior data to obtain description features of the target dynamic content set, where the description features of the target dynamic content set include the description features of the at least one target dynamic content.
It is to be understood that the description features represent key characteristics of the movable people and things in the target dynamic content set.
Step 400: update the to-be-processed portrait outline according to the description features of the target dynamic content set and the movement association relationship to obtain an updated portrait.
When the technical solutions described in steps 100 to 400 are executed, the shared to-be-processed portrait outline corresponding to each target dynamic content and the movement association relationship corresponding to that outline are acquired, the user behavior data is divided to obtain the description features of each target dynamic content, and the to-be-processed portrait outline is updated according to those description features and the movement association relationship. Updating the portrait of the target dynamic content automatically saves labor cost and improves updating efficiency; after the update, errors in portrait construction are greatly reduced in scenarios where the portrait must be applied to perform related operations, which improves the accuracy of portrait construction.
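The patent describes steps 100 to 400 only abstractly, so the following Python sketch is purely illustrative: every name and data shape here (the PortraitOutline class, the per-content feature dictionary, the reading of the movement association relationship as a coordinate shift) is a hypothetical assumption rather than the claimed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class PortraitOutline:
    """To-be-processed portrait outline: the boundary points of movable people or things."""
    points: list
    labels: dict = field(default_factory=dict)

def divide_behavior_data(behavior_data):
    """Step 300 (assumed shape): split user behavior data into per-content description features."""
    return {item["content_id"]: item["features"] for item in behavior_data}

def update_outline(outline, description_features, movement_relation):
    """Step 400 (assumed rule): update the outline from the description features and
    the movement association relationship, here read as a per-content (dx, dy) shift."""
    updated = PortraitOutline(points=list(outline.points), labels=dict(outline.labels))
    for content_id, features in description_features.items():
        dx, dy = movement_relation.get(content_id, (0, 0))
        updated.points = [(x + dx, y + dy) for x, y in updated.points]
        updated.labels[content_id] = features  # attach the content's description features
    return updated
```

A caller would acquire the outline and the relationship (steps 100 and 200), then run `divide_behavior_data` and `update_outline` in order.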
On this basis, the method also includes the technical solutions described in the following steps q1 and q2.
Step q1: acquire a target portrait outline of a target recognition portrait corresponding to the target dynamic content set, and determine that the error between the to-be-processed portrait outline and the target portrait outline is less than or equal to a first preset error.
Step q2: acquire a recognition result of the target recognition portrait corresponding to the target dynamic content set, and determine, based on the recognition result and the movement association relationship, that an outline label exists for the to-be-processed portrait outline, wherein the error between the portrait of the outline label and the to-be-processed portrait outline is less than or equal to a second preset error.
When the technical solutions described in steps q1 and q2 are executed, the outline label of the to-be-processed portrait outline can be accurately determined from the error between the to-be-processed portrait outline and the target portrait outline.
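As a hedged sketch of the error checks in steps q1 and q2, the code below treats an outline as a list of points and measures error as the mean point-to-point distance; both the metric and the preset threshold values are invented for illustration, since the patent does not specify them.

```python
def outline_error(a, b):
    """Mean point-to-point distance between two outlines (one possible error metric)."""
    return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(a, b)) / max(len(a), 1)

def within_first_preset_error(pending, target, first_preset_error):
    """Step q1: the to-be-processed outline is acceptable only if its error
    against the target portrait outline is at most the first preset error."""
    return outline_error(pending, target) <= first_preset_error
```

The same comparison, against the second preset error, would implement the outline-label check of step q2.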
In an alternative embodiment, the inventors found that when the target dynamic content includes recognition result guidance content and the description features include recognition result operation data, updating the to-be-processed portrait outline according to the description features of the target dynamic content set and the movement association relationship can suffer from inaccurate description features and movement association relationships, making it difficult to obtain an accurate updated portrait. To address this, the updating described in step 400 may specifically include the technical solutions described in the following steps s11 and s12.
Step s11: determine a final recognition corresponding to the target dynamic content set according to the description features of the target dynamic content set and the movement association relationship.
Step s12: update the to-be-processed portrait outline based on the final recognition to obtain an updated portrait.
When the technical solutions described in steps s11 and s12 are executed, the problem of inaccurate description features and movement association relationships is avoided, so the updated portrait can be obtained accurately.
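A minimal sketch of the two-step flow of steps s11 and s12, under the assumption (not stated in the patent) that the movement association relationship maps each content to candidate recognitions and that the description features carry operation tags:

```python
def determine_final_recognition(description_features, movement_relation):
    """Step s11 (assumed rule): per content, pick the first associated candidate
    that also appears in that content's recognition result operation data."""
    final = {}
    for content_id, features in description_features.items():
        candidates = movement_relation.get(content_id, [])
        final[content_id] = next((c for c in candidates if c in features), None)
    return final

def update_portrait(outline_labels, final_recognition):
    """Step s12: fold the final recognition back into the outline's labels."""
    return {**outline_labels, **{k: v for k, v in final_recognition.items() if v is not None}}
```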
In an alternative embodiment, the inventors found that the recognition result operation data may be inaccurate, making it difficult to accurately determine the final recognition corresponding to the target dynamic content set. To address this, the determination described in step s11 may specifically include the technical solution described in step s11a1 below.
Step s11a1: if the recognition result operation data of the target dynamic content set includes a feedback operation, determine, according to the movement association relationship, the final recognition corresponding to the target dynamic content set as the candidate that references the to-be-processed portrait outline, is adjacent to it, and is associated with the feedback operation.
When the technical solution described in step s11a1 is executed, the problem of inaccurate recognition result operation data is avoided, so the final recognition corresponding to the target dynamic content set can be accurately determined.
In an alternative embodiment, the inventors found that defects in the related data make it difficult to accurately determine the final recognition corresponding to the target dynamic content set. To address this, the determination described in step s11 may specifically include the technical solutions described in the following steps s11b1 and s11b2.
Step s11b1: if the recognition result operation data of the target dynamic content set includes only a first compensation operation, determine, according to the movement association relationship, a compensation that references the to-be-processed portrait outline and whose error is adjacent to the to-be-processed portrait outline, where the first compensation operation includes a left-turn operation or a right-turn operation.
Step s11b2: determine the compensation as the final recognition of the target dynamic content.
When the technical solutions described in steps s11b1 and s11b2 are executed, the problem of defective related data is avoided, so the final recognition corresponding to the target dynamic content set can be accurately determined.
In an alternative embodiment, the inventors found that an incomplete compensation operation makes it difficult to accurately determine the final recognition corresponding to the target dynamic content set. To address this, the determination described in step s11 may specifically include the technical solution described in step s11c1 below.
Step s11c1: if the recognition result operation data of the target dynamic content set includes a second compensation operation, determine, according to the movement association relationship, the final recognition corresponding to the target dynamic content set as the candidate that references the to-be-processed portrait outline, is adjacent to it, and corresponds to the second compensation operation before compensation.
Illustratively, the second compensation operation includes at least two of a left-turn operation, a right-turn operation, and a straight operation.
When the technical solution described in step s11c1 is executed, the problem of an incomplete compensation operation is avoided, so the final recognition corresponding to the target dynamic content set can be accurately determined.
In an alternative embodiment, the inventors found that an inaccurate outline label adjacent to the to-be-processed portrait outline makes it difficult to accurately determine the final recognition corresponding to the target dynamic content set. To address this, the determination described in step s11 may specifically include the technical solutions described in the following steps s11d1 and s11d2.
Step s11d1: if the recognition result operation data of the target dynamic content set contains only a straight operation, determine, according to the movement association relationship, an outline label that references the to-be-processed portrait outline and is adjacent to it.
Step s11d2: if the outline label corresponds to at least one of feedback or compensation, determine a target recognition portrait corresponding to the target dynamic content set, and determine the standard index of the outline label that is consistent with the recognition result of the target recognition portrait as the final recognition corresponding to the target dynamic content set.
When the technical solutions described in steps s11d1 and s11d2 are executed, the problem of inaccurate outline labels adjacent to the to-be-processed portrait outline is avoided, so the final recognition corresponding to the target dynamic content set can be accurately determined.
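Steps s11a1 through s11d2 amount to a dispatch on the recognition result operation data. The sketch below condenses that dispatch into one function; the operation names, neighbor tags, and data shapes are all hypothetical, since the patent leaves them abstract.

```python
TURN_OPS = {"left_turn", "right_turn"}

def final_recognition_from_operations(ops, neighbors):
    """Dispatch on the operation data. `neighbors` lists (candidate_id, tag) pairs
    adjacent to the to-be-processed portrait outline."""
    ops = set(ops)

    def pick(tag):
        return next((cid for cid, t in neighbors if t == tag), None)

    if "feedback" in ops:
        return pick("feedback")            # step s11a1: feedback operation present
    if len(ops) == 1 and ops <= TURN_OPS:
        return pick("compensation")        # steps s11b1/s11b2: first compensation only
    if len(ops & (TURN_OPS | {"straight"})) >= 2:
        return pick("pre_compensation")    # step s11c1: second compensation (mixed operations)
    if ops == {"straight"}:
        return pick("outline_label")       # steps s11d1/s11d2: straight operation only
    return None
```

The branch order mirrors the order in which the patent presents the cases: feedback first, then the two compensation cases, then the straight-only case.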
On this basis, the method may further include the technical solutions described in the following steps w1 to w3.
Step w1: acquire a target recognition portrait corresponding to the target dynamic content set.
Step w2: acquire a first motion track contained in the user behavior data and a second motion track of the target recognition portrait.
Step w3: update the target recognition portrait according to the first motion track and the second motion track.
When the technical solutions described in steps w1 to w3 are executed, the target recognition portrait is accurately updated through the multi-dimensional motion tracks.
In one aspect, in an alternative embodiment, the inventor finds that, according to the first motion trajectory and the second motion trajectory, there is a problem that associated parameters are inaccurate, so that it is difficult to accurately update the target identification portrait, and in order to improve the above technical problem, the step of updating the target identification portrait according to the first motion trajectory and the second motion trajectory described in step w3 may specifically include the technical solutions described in the following steps w3a1 to w3a 3.
Step w3a1: if the association parameter between the first motion trajectory and the second motion trajectory is greater than or equal to a first set value, taking, according to the movement association relationship, the primary motion trajectory whose recognition result has the smallest error relative to the recognition result of the target identification portrait as the final identification of the target dynamic content.
Step w3a2: if the association parameter between the first motion trajectory and the second motion trajectory is less than or equal to a second set value, taking, according to the movement association relationship, the secondary motion trajectory whose recognition result has the smallest error relative to the recognition result of the target identification portrait as the final identification of the target dynamic content, wherein the second set value is smaller than the first set value.
Step w3a3: if the association parameter between the first motion trajectory and the second motion trajectory is greater than the second set value and smaller than the first set value, taking the target identification portrait as the final identification of the target dynamic content.
It can be understood that, when the technical solutions described in steps w3a1 to w3a3 above are executed, the problem of inaccurate association parameters between the first motion trajectory and the second motion trajectory is avoided, so that the target identification portrait can be updated accurately.
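The three-way selection in steps w3a1 to w3a3 can be sketched as follows. The patent does not specify concrete data structures, so the function signature, the trajectory lists, and the `recognition_error` helper are illustrative assumptions, not the patented implementation:

```python
def select_final_identification(assoc_param, first_set_value, second_set_value,
                                primary_trajectories, secondary_trajectories,
                                target_portrait_result, recognition_error):
    """Choose the final identification of the target dynamic content based on
    the association parameter between the two motion trajectories.

    recognition_error(trajectory, result) is an assumed helper returning the
    error between a trajectory's recognition result and the portrait's result.
    """
    # The patent requires the second set value to be smaller than the first.
    assert second_set_value < first_set_value
    if assoc_param >= first_set_value:
        # w3a1: pick the primary trajectory closest to the portrait's result
        return min(primary_trajectories,
                   key=lambda t: recognition_error(t, target_portrait_result))
    if assoc_param <= second_set_value:
        # w3a2: pick the secondary trajectory closest to the portrait's result
        return min(secondary_trajectories,
                   key=lambda t: recognition_error(t, target_portrait_result))
    # w3a3: association parameter lies strictly between the two set values,
    # so the target identification portrait itself serves as the final identification
    return target_portrait_result
```

Note that the middle branch (w3a3) is the "keep as-is" case: when the association parameter is neither strongly high nor strongly low, no trajectory overrides the existing portrait.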
On this basis, where the user behavior data is any item of user behavior data in a user behavior data sequence obtained by a user behavior data acquisition device, the method may further include the technical solutions described in the following steps r1 to r3.
Step r1: acquiring the target identification portrait corresponding to the target dynamic content set, and determining the recognition result of the target identification portrait.
Step r2: acquiring the processing corresponding to the acquisition track of the user behavior data sequence to which the user behavior data belongs.
Step r3: updating the target identification portrait according to the processing corresponding to the acquisition track and the recognition result.
It can be understood that, when the technical solutions described in steps r1 to r3 are executed, the accuracy of the recognition result of the target identification portrait is improved, so that the target identification portrait can be updated accurately.
In an alternative embodiment, the inventor found that updating directly from the processing corresponding to the acquisition track and the recognition result suffers from inconsistency between the two, making it difficult to update the target identification portrait accurately. To address this problem, the step of updating the target identification portrait according to the processing corresponding to the acquisition track and the recognition result described in step r3 may specifically include the technical solution described in the following step r3a1.
Step r3a1: if the processing corresponding to the acquisition track is inconsistent with the recognition result, searching, according to the movement association relationship, for the reverse identification corresponding to the target identification portrait, and using the reverse identification as the updated identification; and if the processing corresponding to the acquisition track is consistent with the recognition result, retaining the target identification portrait as the updated identification.
It can be understood that, when the technical solution described in step r3a1 above is executed, the problem of inconsistency between the acquisition-track processing and the recognition result is resolved, so that the target identification portrait can be updated accurately.
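A minimal sketch of the consistency check in step r3a1, assuming a simple lookup table stands in for the movement association relationship (the function name, arguments, and `reverse_lookup` mapping are illustrative assumptions):

```python
def update_identification(track_processing, recognition_result,
                          portrait_identification, reverse_lookup):
    """Resolve the updated identification for the target identification portrait.

    reverse_lookup maps an identification to its reverse under the movement
    association relationship (an assumed, simplified representation).
    """
    if track_processing != recognition_result:
        # Inconsistent: fall back to the reverse identification found via
        # the movement association relationship
        return reverse_lookup[portrait_identification]
    # Consistent: the portrait's current identification stands
    return portrait_identification
```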
On this basis, referring to fig. 2 in combination, an apparatus 200 for constructing a user portrait based on big data is provided, applied to a cloud platform, and the apparatus includes:
a portrait outline acquisition module 210, configured to acquire user behavior data and a to-be-processed portrait outline corresponding to a target dynamic content set contained in the user behavior data, where the target dynamic content set contains at least one target dynamic content contained in the user behavior data;
an association relationship acquisition module 220, configured to acquire a movement association relationship corresponding to the to-be-processed portrait outline;
a description feature division module 230, configured to divide the user behavior data into description features of the target dynamic content set, where the description features of the target dynamic content set include the description features of the at least one target dynamic content;
and a portrait outline update module 240, configured to update the to-be-processed portrait outline according to the description features of the target dynamic content set and the movement association relationship, so as to obtain an updated portrait.
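The four modules of apparatus 200 can be sketched as one class whose methods mirror modules 210 through 240. The patent defines only each module's responsibility, so the data layout (dictionaries keyed by content name) and the stand-in update logic are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class UserPortraitApparatus:
    """Sketch of apparatus 200; the stand-in logic is illustrative only."""
    movement_associations: dict = field(default_factory=dict)

    # Module 210: acquire the to-be-processed portrait outline and the
    # target dynamic content set from the user behavior data
    def acquire_outline(self, user_behavior_data):
        content_set = user_behavior_data["target_dynamic_contents"]
        return user_behavior_data["portrait_outline"], content_set

    # Module 220: acquire the movement association relationship for an outline
    def acquire_association(self, outline):
        return self.movement_associations.get(outline, {})

    # Module 230: divide the behavior data into per-content description features
    def divide_features(self, user_behavior_data, content_set):
        return {c: user_behavior_data["features"][c] for c in content_set}

    # Module 240: update the outline from the features and the association
    def update_outline(self, outline, features, association):
        # Stand-in update: bundle the inputs into the updated portrait
        return {"outline": outline, "features": features,
                "association": association}
```

Chaining the four methods in order reproduces the apparatus pipeline: 210 feeds 220 and 230, and 240 consumes all three outputs.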
On the basis of the above, referring to fig. 3, a system 300 for constructing a user portrait based on big data is shown, comprising a processor 310 and a memory 320 communicating with each other, wherein the processor 310 is configured to read a computer program from the memory 320 and execute the computer program to implement the above method.
The application further provides a cloud platform, including: a memory for storing a computer program; and a processor coupled to the memory and configured to execute the computer program stored in the memory to implement the above method.
In summary, according to the above scheme, the same to-be-processed portrait outline corresponding to each target dynamic content contained in the user behavior data, and the movement association relationship corresponding to that outline, are obtained; the user behavior data is divided to obtain the description features of each target dynamic content contained therein; and the to-be-processed portrait outline is updated according to these description features and the movement association relationship. Because the portrait of the target dynamic content is updated automatically, labor cost is saved and update efficiency is improved. Moreover, after the portrait of the target dynamic content is updated, in scenarios where that portrait must be applied for related operations, the occurrence of portrait errors is greatly reduced and the accuracy of portrait construction is improved.
It should be appreciated that the system and its modules shown above may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory for execution by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a diskette, CD- or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules of the present application may be implemented not only by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of the above hardware circuits and software (e.g., firmware).
It is to be noted that different embodiments may produce different advantages, and in different embodiments, any one or combination of the above advantages may be produced, or any other advantages may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be considered merely illustrative and not restrictive of the broad application. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application.
Also, this application uses specific language to describe embodiments of the application. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means a feature, structure, or characteristic described in connection with at least one embodiment of the application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereon. Accordingly, aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present application may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of the present application may be written in any one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, or Python; a conventional procedural programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, or ABAP; a dynamic programming language such as Python, Ruby, or Groovy; or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service, such as software as a service (SaaS).
Additionally, the order in which elements and sequences of the processes described herein are processed, the use of alphanumeric characters, or the use of other designations, is not intended to limit the order of the processes and methods described herein, unless explicitly claimed. While certain presently contemplated useful embodiments of the invention have been discussed in the foregoing disclosure by way of various examples, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments of the disclosure. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to require more features than are expressly recited in the claims. Indeed, claimed subject matter may lie in less than all of the features of a single embodiment disclosed above.
Numerals describing the number of components, attributes, and the like are used in some embodiments; it should be understood that such numerals used in the description of the embodiments are modified in some instances by the term "about", "approximately", or "substantially". Unless otherwise indicated, "about", "approximately", or "substantially" indicates that the number allows for adaptive variation. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a general digit-preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the application are approximations, in the specific examples such numerical values are set forth as precisely as possible.
The entire contents of each patent, patent application publication, and other material, such as articles, books, specifications, publications, and documents, cited in this application are hereby incorporated by reference into this application, excluding any application history documents that are inconsistent with or conflict with the contents of this application, and excluding documents that limit the broadest scope of the claims of this application (whether currently in, or later appended to, this application). It is noted that if the descriptions, definitions, and/or use of terms in the materials attached to this application are inconsistent with or contrary to those stated in this application, the descriptions, definitions, and/or use of terms in this application shall prevail.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present application. Other variations are also possible within the scope of the present application. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the present application can be viewed as being consistent with the teachings of the present application. Accordingly, the embodiments of the present application are not limited to only those embodiments explicitly described and depicted herein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art to which the present application pertains. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (4)

1. A method of constructing a user representation based on big data, comprising:
acquiring user behavior data and a to-be-processed portrait outline corresponding to a target dynamic content set contained in the user behavior data, wherein the target dynamic content set contains at least one target dynamic content contained in the user behavior data;
acquiring a movement association relationship corresponding to the to-be-processed portrait outline;
dividing the user behavior data into description features of the target dynamic content set, wherein the description features of the target dynamic content set comprise description features of the at least one target dynamic content, and the description features represent key features of movable people and things in the target dynamic content set;
updating the to-be-processed portrait outline according to the description features of the target dynamic content set and the movement association relationship to obtain an updated portrait;
wherein the target dynamic content comprises recognition-result directing content, the description features comprise recognition result operation data, and the updating the to-be-processed portrait outline according to the description features of the target dynamic content set and the movement association relationship to obtain an updated portrait comprises:
determining a final identification corresponding to the target dynamic content set according to the description features of the target dynamic content set and the movement association relationship;
updating the to-be-processed portrait outline based on the final identification to obtain an updated portrait;
wherein the determining the final identification corresponding to the target dynamic content set according to the description features of the target dynamic content set and the movement association relationship comprises:
if the recognition result operation data of the target dynamic content set comprises only a first compensation operation, determining, according to the movement association relationship, a compensation that refers to the to-be-processed portrait outline and is adjacent to the to-be-processed portrait outline, and determining the compensation as the final identification of the target dynamic content; the first compensation operation comprises a left-turn operation or a right-turn operation;
wherein the method further comprises:
acquiring a target identification portrait corresponding to the target dynamic content set;
acquiring a first motion trajectory contained in the user behavior data and a second motion trajectory of the target identification portrait;
and updating the target identification portrait according to the first motion trajectory and the second motion trajectory.
2. The method of claim 1, further comprising at least one of:
acquiring a target portrait outline of a target identification portrait corresponding to the target dynamic content set, and determining that an error between the portrait outline to be processed and the target portrait outline is less than or equal to a first preset error;
and acquiring a recognition result of a target recognition portrait corresponding to the target dynamic content set, and determining that an outline label exists in the outline of the portrait to be processed based on the recognition result and the movement association relation, wherein the error between the portrait of the outline label and the outline of the portrait to be processed is less than or equal to a second set error.
3. A system for constructing a user representation based on big data, comprising a processor and a memory in communication with each other, the processor being configured to read a computer program from the memory and execute the computer program to implement the method of any of claims 1 and 2.
4. A cloud platform, comprising:
a memory for storing a computer program;
a processor coupled to the memory for executing the computer program stored by the memory to implement the method of any of claims 1 and 2.
CN202110650469.XA 2021-06-11 2021-06-11 Method and system for constructing user portrait based on big data and cloud platform Active CN113409076B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110650469.XA CN113409076B (en) 2021-06-11 2021-06-11 Method and system for constructing user portrait based on big data and cloud platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110650469.XA CN113409076B (en) 2021-06-11 2021-06-11 Method and system for constructing user portrait based on big data and cloud platform

Publications (2)

Publication Number Publication Date
CN113409076A CN113409076A (en) 2021-09-17
CN113409076B true CN113409076B (en) 2023-03-24

Family

ID=77683551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110650469.XA Active CN113409076B (en) 2021-06-11 2021-06-11 Method and system for constructing user portrait based on big data and cloud platform

Country Status (1)

Country Link
CN (1) CN113409076B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111627117A (en) * 2020-06-01 2020-09-04 上海商汤智能科技有限公司 Method and device for adjusting special effect of portrait display, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013075295A1 (en) * 2011-11-23 2013-05-30 浙江晨鹰科技有限公司 Clothing identification method and system for low-resolution video
CN109118288B (en) * 2018-08-22 2023-06-20 中国平安人寿保险股份有限公司 Target user acquisition method and device based on big data analysis
CN109978630A (en) * 2019-04-02 2019-07-05 安徽筋斗云机器人科技股份有限公司 A kind of Precision Marketing Method and system for establishing user's portrait based on big data
CN112861013A (en) * 2021-03-18 2021-05-28 京东数字科技控股股份有限公司 User portrait updating method and device, electronic equipment and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111627117A (en) * 2020-06-01 2020-09-04 上海商汤智能科技有限公司 Method and device for adjusting special effect of portrait display, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113409076A (en) 2021-09-17

Similar Documents

Publication Publication Date Title
CN113886468A (en) Online interactive data mining method and system based on Internet
CN113378554A (en) Medical information intelligent interaction method and system
CN114168747A (en) Knowledge base construction method and system based on cloud service
CN115757370A (en) User information communication method and system based on Internet of things
CN114663753A (en) Production task online monitoring method and system
CN113409076B (en) Method and system for constructing user portrait based on big data and cloud platform
CN113360562A (en) Interface pairing method and system based on artificial intelligence and big data and cloud platform
CN114187552A (en) Method and system for monitoring power environment of machine room
CN114417076A (en) Production line intelligent early warning method and system based on artificial intelligence
CN113485203A (en) Method and system for intelligently controlling network resource sharing
CN113609170A (en) Online office work data processing method and system based on neural network
CN114611478B (en) Information processing method and system based on artificial intelligence and cloud platform
CN113715794B (en) Automobile intelligent braking method and system based on artificial intelligence
CN115279127A (en) Temperature control method and system for spraying type liquid cooling machine regulation and control data center
CN114169551A (en) Cabinet inspection management method and system
CN114282505A (en) Report template construction method and system based on scientific and technological achievement transformation
CN113269269A (en) Big data based data dimension reduction method and system and cloud platform
CN113312215A (en) Data backup method and system based on artificial intelligence and cloud platform
CN114648364B (en) Method and system for analyzing sales data of electronic commerce website
CN113269270A (en) Data pushing method and system based on block chain and cloud platform
CN115292301A (en) Task data abnormity monitoring and processing method and system based on artificial intelligence
CN114135992A (en) Air conditioner refrigeration method and system based on data center
CN114168410A (en) Intelligent control evaporative cooling method and system based on big data
CN114629715A (en) Network security protection method and system based on big data
CN114168999A (en) Comprehensive security method and system based on data center

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant