CN112307110A - User portrait generation method and device, computer equipment and storage medium - Google Patents

User portrait generation method and device, computer equipment and storage medium

Info

Publication number
CN112307110A
CN112307110A
Authority
CN
China
Prior art keywords
user
information
attribute information
image
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011185852.4A
Other languages
Chinese (zh)
Inventor
许景涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd
Priority to CN202011185852.4A priority Critical patent/CN112307110A/en
Publication of CN112307110A publication Critical patent/CN112307110A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/26 - Visual data mining; Browsing structured data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses a user portrait generation method and device, computer equipment and a storage medium. One embodiment of the method comprises: acquiring a user image; inputting the user image into a neural network model for recognition to obtain face attribute information and/or body attribute information of the user; and obtaining a predicted value of the occupational information and/or personality information of the user, and generating the user portrait, based on at least one of a preset mapping relation between face attribute information and occupational information, a mapping relation between face attribute information and personality information, a mapping relation between body attribute information and occupational information, and a mapping relation between body attribute information and personality information. The embodiment can predict hidden attribute information of the user, such as occupation and personality, so that a complete user portrait is obtained and, combined with data statistics and analysis, the user data can be maintained and managed more effectively.

Description

User portrait generation method and device, computer equipment and storage medium
Technical Field
The present application relates to the technical field of the Internet, and more particularly, to a user portrait generation method and apparatus, a computer device, and a storage medium.
Background
A user portrait is an effective tool for sketching a target user and for connecting user demands with design directions, and it is widely applied in various fields. In actual operation, plain, everyday language is used to link the attributes and behaviors of a user with the expected data conversion. A dedicated user portrait is constructed for a user to display information such as the user's attributes, preferences and behaviors, so that different services can be provided for the user in a targeted manner.
At present, user portraits are mainly built through manual review, with the portrait constructed from user information. Manual review easily introduces subjective factors into the review process, so the user portrait cannot be constructed objectively; the resulting portrait is inaccurate and incomplete, which is not conducive to later management of user data.
Disclosure of Invention
The present application aims to provide a user portrait generation method and apparatus, a computer device, and a storage medium, so as to solve at least one of the problems in the prior art.
To achieve the above object, the present application adopts the following technical solutions:
the application provides a user portrait generation method in a first aspect, which includes:
acquiring a user image;
inputting the user image into a neural network model for recognition to obtain face attribute information and/or body attribute information of the user;
and obtaining a predicted value of the occupational information and/or personality information of the user according to the face attribute information and/or body attribute information of the user, and generating the user portrait, based on at least one of a preset mapping relation between face attribute information and occupational information, a mapping relation between face attribute information and personality information, a mapping relation between body attribute information and occupational information, and a mapping relation between body attribute information and personality information.
According to the user portrait generation method provided by the first aspect of the application, hidden attribute information of a user, such as occupation and personality, can be predicted based on the face attribute information and/or body attribute information extracted by the neural network model and the preset mapping relations. A complete user portrait and user data statistics are thus obtained, the accuracy of user portrait construction is improved, the accuracy of the services provided for the user is improved, and subsequent user data can be maintained and managed more effectively.
In one possible implementation, the neural network model is a convolutional neural network model.
In one possible implementation, the convolutional neural network model is a lightweight network model.
In this implementation, a lightweight network model is selected, so that the computation of the neural network model is further optimized while high accuracy is maintained: network parameters are reduced, computational complexity is lowered, computational efficiency is improved, latency is reduced, and the running speed is increased.
In one possible implementation, acquiring the user image includes: acquiring a face image and/or a body image of the user;
the step of inputting the user image into a neural network model for recognition to obtain the face attribute information and/or the body attribute information of the user comprises the following steps: inputting the user face image into a neural network model for recognition to obtain the face attribute information of the user and/or inputting the user human body image into the neural network model for recognition to obtain the human body attribute information of the user.
In a possible implementation manner, the face attribute information includes at least one of gender information, expression information, age information, whether glasses are worn and whether a beard is present, and the body attribute information includes hair style information and/or clothing information.
A second aspect of the present application provides a user portrait generation apparatus, comprising:
the acquisition module is used for acquiring a user image;
the neural network model is used for identifying the input user image to obtain the face attribute information and/or the body attribute information of the user;
and the prediction module is used for obtaining a predicted value of the occupational information and/or personality information of the user according to the face attribute information and/or body attribute information of the user, and generating the user portrait, based on at least one of a preset mapping relation between face attribute information and occupational information, a mapping relation between face attribute information and personality information, a mapping relation between body attribute information and occupational information, and a mapping relation between body attribute information and personality information.
In one possible implementation, the neural network model is a convolutional neural network model.
In one possible implementation, the convolutional neural network model is a lightweight network model.
In a possible implementation manner, the obtaining module is configured to obtain a face image and/or a body image of a user;
the neural network model is used for identifying the face image of the user to obtain face attribute information of the user and/or identifying the human body image of the user to obtain human body attribute information of the user.
In one possible implementation, the apparatus further includes: the system comprises a first image collector for collecting a face image of a user and/or at least one second image collector for collecting a human body image of the user.
In a possible implementation manner, the face attribute information includes at least one of gender information, expression information, age information, whether glasses are worn and whether a beard is present, and the body attribute information includes hair style information and/or clothing information.
A third aspect of the present application provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method as provided by the first aspect of the present application when executing the program.
A fourth aspect of the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method as provided in the first aspect of the present application.
The beneficial effect of this application is as follows:
the application provides a user portrait generation method, a device, a computer device and a storage medium, which aim at solving the technical problems in the prior art, and can realize predictive analysis of hidden attributes of occupation, character, even financial risk, purchasing ability and the like of a user based on face attribute information and/or body attribute information extracted through a neural network model and a preset mapping relation, so that complete user portrait and user data statistics are obtained, the accuracy of user portrait construction is improved, the accuracy of service provided for the user is improved, and the subsequent user data can be maintained and managed more effectively.
Drawings
The following describes embodiments of the present application in further detail with reference to the accompanying drawings.
FIG. 1 illustrates a flow diagram of a user representation generation method in one embodiment of the present application.
FIG. 2 illustrates an exemplary diagram of a user representation generation method in one embodiment of the present application.
FIG. 3 illustrates a flow diagram of neural network model identification in one embodiment of the present application.
FIG. 4 shows a schematic diagram of a user representation generation apparatus in an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a computer system implementing the apparatus provided in the embodiment of the present application.
Detailed Description
In order to more clearly explain the present application, the present application is further described below with reference to the embodiments and the accompanying drawings. Similar parts in the figures are denoted by the same reference numerals. It is to be understood by persons skilled in the art that the following detailed description is illustrative and not restrictive, and is not intended to limit the scope of the present application.
In the description of the present application, it should be noted that the terms "upper", "lower", and the like indicate orientations or positional relationships based on those shown in the drawings, are only for convenience in describing the present application and simplifying the description, and do not indicate or imply that the referred device or element must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present application. Unless expressly stated or limited otherwise, the terms "mounted", "connected", and "coupled" are to be understood broadly: a connection may, for example, be fixed, detachable, or integral; it may be mechanical or electrical; it may be direct, indirect through an intervening medium, or an internal communication between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
It is further noted that, in the description of the present application, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
To address the technical problems in the prior art, an embodiment of the present application provides a user portrait generation method. The method can be applied to various servers, computer devices and terminals, and the execution subject of the method can be a processor in the server, computer device or terminal. In one embodiment, the method is applied to the field of intelligent financial banking systems to construct an independent user portrait for each user; it can also be applied to other fields where personalized services can be customized, such as automobile sales outlet systems.
As shown in fig. 1, the method for generating a user portrait according to this embodiment includes the following steps:
s101, acquiring a user image;
in one embodiment, acquiring the user image comprises acquiring a user face image and/or a user human body image, wherein the face image can be an image comprising the outline of the five sense organs of the human; the body image may be an image including limbs, trunk, clothing, and hairstyle of the body. When acquiring a user image, an execution main body (server or computer equipment) executing the user portrait generation method can acquire original information of a target user needing to be portrait in advance for constructing a user portrait of a multi-dimensional system, wherein the original information comprises personal information of the target user, such as user name, native place, place of birth, work information and other various aspects of user information.
In a specific example, the execution subject (server or computer device) executing the user portrait generation method comprises a camera for acquiring the user image (such as a photo or a video stream), and the user image is acquired locally in real time through the camera; in another specific example, a user image stored on another server may be retrieved remotely; in yet another specific example, an uploaded user image may be received from a terminal of the user (e.g., a mobile phone, a personal computer, etc.).
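The following is a minimal sketch of the three acquisition paths described above (local camera, remote retrieval, terminal upload). The function names and the remote URL are illustrative assumptions, not part of the patent.

```python
import cv2
import numpy as np
import requests

def capture_from_camera(device_index: int = 0):
    """Grab a single frame from a locally attached camera."""
    cap = cv2.VideoCapture(device_index)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

def fetch_from_remote(url: str):
    """Retrieve a user image stored on another server (hypothetical endpoint)."""
    resp = requests.get(url, timeout=5)
    resp.raise_for_status()
    buf = np.frombuffer(resp.content, dtype=np.uint8)
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)
```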
Referring to FIG. 2, FIG. 2 is a diagram illustrating an example of the user portrait generation method. First, the execution subject (server or computer device) executing the user portrait generation method comprises a camera for collecting user images. After the user image is obtained through the camera, face detection and/or body detection is performed on the acquired user face image and user body image respectively to determine whether the image contains a face and/or a body, so as to obtain a user face image to be recognized and/or a user body image to be recognized. Then, it is judged whether the face in the user face image to be recognized is occluded and whether the face is frontal, so as to ensure the clarity of the face image; if the face image is not clear, the detection is regarded as unsuccessful, and the user image is re-acquired or the next frame is processed.
Then, face recognition is performed on the face image to be recognized by adopting a corresponding algorithm, namely face features are extracted. Meanwhile, the execution subject (server or computer device) executing the user portrait generation method retrieves the face features of registered users from the database and compares them with the face features of the face image to be recognized, to determine whether the user is registered.
If, after comparison, the user does not exist in the database, the execution subject (server or computer device) executing the user portrait generation method displays a prompt such as "Unregistered, please register first" to remind the user to enter original information such as the user's name, native place, place of birth and work information. After the entry is finished, the execution subject (server or computer device) assigns a new user identity (ID) to the user according to the original information provided by the user, and binds the user's original information, the extracted face features, and the acquired face image and body image of the user to the user identity (ID) as explicit attributes in the user portrait of the user.
If the comparison is successful, i.e. the database contains a user matching the face features of the face image to be recognized, the execution subject (server or computer device) executing the user portrait generation method displays the identity ID of that user for the user to confirm, and binds the newly acquired face features of the face image to the identity ID of the user.
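A sketch of the "compare extracted face features with registered users" step is given below. The 512-dimensional feature vectors, the similarity threshold and the database layout are assumptions made for illustration only.

```python
import numpy as np

def find_registered_user(query_feat: np.ndarray,
                         registered: dict,
                         threshold: float = 0.6):
    """Return the user ID whose stored face feature is most similar to the query,
    or None if no similarity exceeds the threshold (i.e. the user is unregistered).
    `registered` maps user IDs to stored feature vectors."""
    best_id, best_sim = None, -1.0
    q = query_feat / np.linalg.norm(query_feat)
    for user_id, feat in registered.items():
        sim = float(np.dot(q, feat / np.linalg.norm(feat)))  # cosine similarity
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id if best_sim >= threshold else None
```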
S102, inputting a user image into a neural network model for recognition to obtain face attribute information and/or body attribute information of a user;
in an embodiment where obtaining the user image includes obtaining a face image and/or a body image of the user, inputting the user image into the neural network model for recognition, and obtaining the face attribute information and/or the body attribute information of the user includes: inputting the face image of the user into the neural network model for recognition to obtain face attribute information of the user and/or inputting the human body image of the user into the neural network model for recognition to obtain human body attribute information of the user.
In a specific embodiment, the face attribute information includes at least one of gender information, expression information, face shape information, age information, whether glasses are worn, and whether a beard is present; in one specific example, the face shape information may include: round face, square face, triangular face, oval face, heart-shaped face, and the like. In addition, the body attribute information includes hair style information and/or clothing information. In yet another specific example, the hair style information may include straight hair, wavy hair, small curls, braids, coiled hair or short hair, and the like; the clothing information may include formal wear, business wear, casual wear, sportswear, or others.
In a specific embodiment, the neural network model is a convolutional neural network model. The user face image and/or user body image is input into the convolutional neural network model to extract its feature vector, the feature vector being a vector representing the features of the face image and/or body image; the feature vector of a face image is different from that of a body image. A preset number of face images and/or body images are input into the convolutional neural network model, and the model outputs the feature vector of each face image or body image, so that the feature vector of each image can be obtained. The convolutional neural network model may be ResNet50 (a residual network model), an FPN (Feature Pyramid Network) model, GoogLeNet, ShuffleNet (a compact convolutional network model), or a network model composed of ResNet50 and FPN.
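A minimal sketch of extracting a feature vector from a face or body image with a convolutional backbone is shown below. ResNet50 from torchvision is used purely as an example (the patent also lists FPN, GoogLeNet and ShuffleNet); the preprocessing values and the torchvision weights API are assumptions that depend on the library version.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()   # drop the classifier head, keep the 2048-d features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_feature(image_path: str) -> torch.Tensor:
    """Return one feature vector for the input face or body image."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).squeeze(0)
```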
Considering that an excessively complex network model reduces calculation speed, the calculation efficiency of the traditional convolutional neural network is optimized so that the accuracy of model calculation is guaranteed while the calculation speed is maintained, ensuring timely feedback and low latency. In a specific embodiment, the convolutional neural network model is therefore a lightweight network model. This embodiment can be applied to personalized-service fields with large foot traffic, a small amount of data to be processed per request, and a need for quick feedback, such as bank branches or automobile 4S stores.
In a further embodiment, the lightweight network model may be a ShuffleNet_v2 model. By adopting a different convolution computation scheme, the ShuffleNet_v2 model can effectively reduce network parameters, thereby simplifying the convolution computation of a conventional convolutional neural network model, improving its computational efficiency, and achieving the best model accuracy with limited computing resources.
Compared with other lightweight network models, the ShuffleNet_v2 model has lower computational complexity and higher accuracy on top of its high running speed, and is particularly suitable for running on mobile terminals with limited computing resources. For example, it can be installed on the mobile terminal of an operator or a salesperson, can be conveniently carried, and enables real-time observation and analysis.
In this embodiment, a lightweight network model is selected, so that the computation of the neural network model is further optimized while high accuracy is ensured: network parameters are reduced, computational complexity is lowered, computational efficiency is improved, latency is reduced, and the running speed is increased. Meanwhile, adopting the ShuffleNet_v2 model allows the neural network model to be deployed on a mobile terminal, which is convenient for operators to carry, enables user attributes and analysis to be viewed in real time, and improves the efficiency of providing personalized services for different users.
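The sketch below compares parameter counts of the two backbones, illustrating why ShuffleNet_v2 suits resource-limited mobile terminals. The exact counts depend on the torchvision version, so the printed numbers are only indicative.

```python
import torchvision.models as models

def n_params(model):
    return sum(p.numel() for p in model.parameters())

resnet = models.resnet50()
shufflenet = models.shufflenet_v2_x1_0()
print(f"ResNet50 parameters:        {n_params(resnet) / 1e6:.1f} M")
print(f"ShuffleNet_v2 x1.0 params:  {n_params(shufflenet) / 1e6:.1f} M")
```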
Referring to fig. 3, fig. 3 is a flowchart of neural network model identification according to an embodiment.
First, the model is trained with data sets of different face attribute information (including attributes such as gender information, expression information, face shape information, age information, whether glasses are worn, and whether a beard is present) and data sets of different body attribute information (including hair style and clothing type).
In the embodiment shown in fig. 3, the user face image and/or user body image is input into the trained feature extraction model for feature extraction, followed by a pooling operation, where the pooling operation refers to aggregating and summarizing the 512-dimensional vector data extracted by the feature extraction model. In a specific example, the pooling operation is an average pooling operation, that is, the average of each row (or column) of the output matrix is used to represent that row (or column), so that the dimensionality of the features extracted by the neural network model is effectively reduced while the overall data characteristics are retained; more informative features are obtained, the computational difficulty is reduced, and the computation speed is increased.
The pooled features are respectively input into a classifier (SoftMax) and a regularization model. From the pooled features, the classifier yields part of the face attribute information and the body attribute information of the user, such as gender information, expression information, face shape information, whether glasses are worn, whether a beard is present, hair style information and clothing information. In addition, the pooled features are passed through the regularization model, i.e. local response normalization, to obtain further face attribute information of the user, such as age information. The regularization model introduces additional information into the original model; by adopting it, the risk of overfitting can be effectively reduced and the generalization performance of the neural network model improved.
In a further embodiment, the regularization model adopts L1-norm regularization, that is, an L1 norm (the sum of the absolute values of the elements of the vector data) is added to the original model so that the output satisfies sparsity, which further reduces the risk of overfitting, improves the generalization performance of the neural network model, and facilitates feature extraction.
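Below is a sketch of the multi-head structure of Fig. 3: a shared backbone feature map, global average pooling, per-attribute classification heads (to which softmax or cross-entropy is applied at training or inference time), and an age head whose loss carries an L1 penalty for sparsity. The attribute names, class counts and the 512-dimensional feature size are assumptions taken from the description, not a verified implementation.

```python
import torch
import torch.nn as nn

class AttributeHeads(nn.Module):
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # average pooling over the feature map
        self.classifiers = nn.ModuleDict({
            "gender": nn.Linear(feat_dim, 2),
            "expression": nn.Linear(feat_dim, 7),
            "glasses": nn.Linear(feat_dim, 2),
            "beard": nn.Linear(feat_dim, 2),
            "hair_style": nn.Linear(feat_dim, 6),
            "clothing": nn.Linear(feat_dim, 5),
        })
        self.age_head = nn.Linear(feat_dim, 1)       # regression head for age

    def forward(self, feature_map: torch.Tensor):
        x = self.pool(feature_map).flatten(1)        # (N, feat_dim)
        logits = {name: head(x) for name, head in self.classifiers.items()}
        age = self.age_head(x).squeeze(1)
        return logits, age

def age_loss_with_l1(pred_age, true_age, model: AttributeHeads, lam: float = 1e-4):
    """MSE on age plus an L1 penalty on the age-head weights (the 'L1 regularization')."""
    l1 = model.age_head.weight.abs().sum()
    return nn.functional.mse_loss(pred_age, true_age) + lam * l1
```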
S103, obtaining a predicted value of the occupational information and/or personality information of the user according to the face attribute information and/or body attribute information of the user, and generating the user portrait, based on at least one of a preset mapping relation between face attribute information and occupational information, a mapping relation between face attribute information and personality information, a mapping relation between body attribute information and occupational information, and a mapping relation between body attribute information and personality information.
In one specific example, the personality information may include rational, skeptical, introverted, extroverted and mixed types; the occupational information may include business talent, high-salary talent, and salaried talent.
In a specific embodiment, the mapping relations between the face attribute information and body attribute information on one side and the occupational information and personality information of the user on the other may be constructed and stored in advance. A specific example includes the following steps:
Constructing an input value set X, i.e. face attribute information and body attribute information, for example including: gender (male, female), face shape (round face, square face, triangular face, oval face, heart-shaped face), glasses (glasses worn, no glasses), beard (with beard, without beard), hair style (straight, wavy, small curls, braid, coiled hair, short hair), clothing (formal wear, business wear, casual wear, sportswear, others).
Constructing an output value set Y, for example: personality (rational type, skeptical type, introverted type, extroverted type, mixed type), occupation (business talent, high-salary talent, salaried talent).
Setting a function f (denoted as f: X → Y) so that the input value set X and the output value set Y satisfy a many-to-many mapping relationship, and simultaneously satisfy the following condition: for each element x in the input value set X there is an element y in the output value set Y such that x and y are f-related.
For example, the mapping functions f1-f6 are as follows (see the code sketch after this list):
f1: X(beard) → Y(personality) = { with beard → rational type, introverted type, extroverted type; without beard → rational type, introverted type, extroverted type, mixed type };
f2: X(glasses) → Y(personality) = { glasses worn → rational type, skeptical type, introverted type, extroverted type; no glasses → rational type, introverted type, extroverted type, mixed type };
f3: X(hair style) → Y(personality) = { straight → rational type, skeptical type, introverted type; wavy → rational type, introverted type, extroverted type; small curls → rational type, skeptical type, introverted type, extroverted type, mixed type; braid → extroverted type, mixed type; coiled hair, short hair → rational type, skeptical type, introverted type, extroverted type, mixed type };
f4: X(hair style) → Y(occupation) = { straight → high-salary talent, salaried talent; wavy → business talent, high-salary talent; small curls → high-salary talent, salaried talent; braid → high-salary talent; short hair → business talent, high-salary talent, salaried talent };
f5: X(clothing) → Y(personality) = { formal wear → rational type, skeptical type, introverted type; business wear → rational type, introverted type, extroverted type, mixed type; casual wear → skeptical type, extroverted type, mixed type; sportswear → rational type, extroverted type, mixed type; others → rational type, skeptical type, introverted type, extroverted type, mixed type };
f6: X(clothing) → Y(occupation) = { formal wear → high-salary talent, salaried talent; business wear → high-salary talent, salaried talent; casual wear → high-salary talent, salaried talent; sportswear → salaried talent; others → salaried talent }.
According to any one of the mapping relations between the face attribute information or body attribute information and the occupational information or personality information of the user, a predicted value of the occupational information and/or personality information is obtained from the user's face attribute information and/or body attribute information. For example, if the body attribute information of the user is: hair style information is straight, then according to the mapping function f3, the predicted value of the personality information of the user is: rational type, skeptical type, introverted type.
In a specific embodiment, the occupational information and/or personality information of the user is predicted using multiple mapping relations among the face attribute information, the body attribute information, the occupational information and the personality information, and the final prediction result is obtained as the intersection of the results of the multiple mapping relations.
In a specific example, if the face attribute information of the user is: with beard and glasses worn, and the body attribute information is: hair style information is wavy and the clothing is business wear, then according to the mapping functions f1-f6, the predicted value of the personality information of the user is: rational type and introverted type, and the predicted value of the occupational information is: high-salary talent.
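A sketch of the intersection-based prediction described above follows: each available attribute contributes a candidate set, and the final prediction is the intersection of all candidate sets. It reuses the PERSONALITY_MAPS / OCCUPATION_MAPS dictionaries sketched after the mapping functions; with only that subset of entries, the personality intersection for the worked example comes out slightly wider than the result stated in the description.

```python
def predict(attributes: dict, maps: dict) -> set:
    """attributes, e.g. {"beard": "with beard", "glasses": "glasses",
    "hair_style": "wave", "clothing": "business"}."""
    result = None
    for attr, value in attributes.items():
        candidates = maps.get(attr, {}).get(value)
        if candidates is None:
            continue                      # attribute not covered by any mapping
        result = set(candidates) if result is None else result & candidates
    return result or set()

user = {"beard": "with beard", "glasses": "glasses",
        "hair_style": "wave", "clothing": "business"}
print(predict(user, PERSONALITY_MAPS))   # {'rational', 'introverted', 'extroverted'}
print(predict(user, OCCUPATION_MAPS))    # {'high-salary talent'}
```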
In a specific example, the predicted values of the occupational information and/or personality information of the user and the face attribute information and/or body attribute information of the user are output uniformly for attribute statistics. For example, for the users visiting each day, the gender distribution, age distribution, face shape distribution, appearance-score distribution, distribution of users with different hairstyles, distribution of users with different clothing, predicted occupation distribution, predicted personality type distribution, financial-risk type distribution, purchasing-power distribution, number of registered users and the like are counted, and the user information is analyzed, so that operators and marketing personnel can intuitively understand the users and effectively provide different services or products for users with different financial-risk profiles or different purchasing power. Meanwhile, the predicted values of the occupational information and/or personality information of the user, the face attribute information and/or body attribute information of the user, and the previously obtained original information of the user, the extracted face features and the assigned user identity (ID) are combined to form a complete user portrait, which improves the accuracy of user portrait construction, improves the accuracy of the services or products provided for the user, and facilitates the subsequent maintenance and management of user data.
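A small sketch of the attribute statistics step is given below: per-visitor predictions are aggregated into the distributions mentioned above. The record format is an assumption made for illustration.

```python
from collections import Counter

def daily_distributions(records):
    """records: iterable of dicts such as
    {"gender": "female", "age_group": "20-30",
     "occupation": "high-salary talent", "personality": "rational"}."""
    stats = {}
    for key in ("gender", "age_group", "occupation", "personality"):
        stats[key] = Counter(r[key] for r in records if key in r)
    return stats
```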
The user portrait generation method provided by this embodiment can, based on the face attribute information (such as gender and age) and/or body attribute information (such as hairstyle and clothing type) extracted by the neural network model and the preset mapping relations, predict and analyze hidden attributes of the user such as occupation, personality, and even financial risk preference and purchasing power, so that a complete user portrait and user data statistics are obtained, the accuracy of user portrait construction is improved, the accuracy of the services provided for the user is improved, and subsequent user data can be maintained and managed more effectively.
Referring to fig. 4, another embodiment of the present application provides a user representation generating apparatus 200, the embodiment of the user representation generating apparatus 200 corresponds to the embodiment of the user representation generating method shown in fig. 1, and the user representation generating apparatus 200 can be applied to various servers, computer devices and terminals. The user representation generating device 200 comprises an obtaining module 201 for obtaining a user image; the neural network model 202 is used for identifying an input user image to obtain face attribute information and/or body attribute information of the user; the prediction module 203 is configured to obtain a predicted value of the career information and/or the personality information of the user according to the face attribute information and/or the body attribute information of the user, and generate the user portrait based on at least one mapping relationship among a preset mapping relationship between the face attribute information and the career information, a mapping relationship between the face attribute information and the personality information, a mapping relationship between the body attribute information and the career information, and a mapping relationship between the body attribute information and the personality information.
In a specific embodiment, the user portrait generation apparatus 200 includes a front end and a cloud. The front end is used to collect the user images (such as the user face image and/or user body image), obtain the original information of the user, extract face features and assign a new user identity ID, and transmit this information to the cloud for statistics and analysis. The cloud is used to recognize the user face image and/or user body image to obtain the face attribute information and/or body attribute information of the user, to make predictions from the recognized face attribute information and/or body attribute information, and to compute statistics on the face attributes and/or body attributes and predicted values of the users, for example counting, for the users visiting each day, the gender distribution, age distribution, face shape distribution, appearance-score distribution, distribution of users with different hairstyles, distribution of users with different clothing, predicted occupation distribution, predicted personality type distribution, financial-risk type distribution, purchasing-power distribution, number of registered users and the like, and to analyze the user information. This information is combined with the original information of the user acquired by the front end, the extracted face features and the assigned new user identity ID to form a complete portrait of the user, and subsequent user data is maintained and managed.
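The following is a minimal sketch of the front-end to cloud hand-off described above. The endpoint URL and payload fields are hypothetical; they only illustrate the division of work (the front end captures and identifies, the cloud recognizes attributes and keeps statistics).

```python
import base64
import requests

def send_to_cloud(user_id: str, face_jpg: bytes, body_jpg: bytes,
                  endpoint: str = "https://cloud.example.com/api/portrait"):
    payload = {
        "user_id": user_id,
        "face_image": base64.b64encode(face_jpg).decode("ascii"),
        "body_image": base64.b64encode(body_jpg).decode("ascii"),
    }
    resp = requests.post(endpoint, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()   # e.g. predicted attributes and updated statistics
```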
In another specific example, the front end is used only to capture the user images (e.g., the user face image and/or user body image), while the cloud is further configured to obtain the original information of the user, extract face features, and assign a new user identity ID; that is, in this embodiment, the cloud constructs the complete user portrait of the user.
Through this cooperation of the front end and the cloud, this embodiment is suitable for user portrait systems in the field of intelligent finance and allows flexible deployment of the user portrait construction process.
In a particular embodiment, the neural network model 202 is a convolutional neural network model, and in a further embodiment, the convolutional neural network model is a lightweight network model.
In a specific embodiment, the obtaining module 201 is configured to obtain a face image and/or a body image of a user; the neural network model 202 is used for recognizing the face image of the user to obtain face attribute information of the user and/or recognizing the human body image of the user to obtain human body attribute information of the user.
In a particular embodiment, the user portrait generation apparatus 200 further comprises: a first image collector for collecting the face image of the user and/or at least one second image collector for collecting the body image of the user. The user face image and user body image can be pictures or video streams. In an example applied to the field of intelligent financial banking systems, the first image collector may be a camera provided on an ATM, and the second image collector may be at least one monitoring camera provided in a bank lobby. Alternatively, the user face image and the user body image can both be acquired through at least one monitoring camera arranged in the bank lobby, which reduces hardware facilities; provided the image definition meets the requirement, acquiring the user face image and the user body image from a single camera can effectively improve the processing efficiency.
In a specific embodiment, the face attribute information includes at least one of gender information, expression information, face shape information, age information, whether glasses are worn, and whether a beard is present; in one specific example, the face shape information may include: round face, square face, triangular face, oval face, heart-shaped face, and the like. In addition, the body attribute information includes hair style information and/or clothing information. In yet another specific example, the hair style information may include straight hair, wavy hair, small curls, braids, coiled hair or short hair, and the like; the clothing information may include formal wear, business wear, casual wear, sportswear, or others.
It should be noted that the principle and working flow of the user portrait generation apparatus 200 provided in this embodiment are similar to those of the user portrait generation method above; for relevant details, reference may be made to the above description, which is not repeated here.
As shown in fig. 5, another embodiment of the present application provides a computer device suitable for implementing the user portrait generation method provided by the above embodiments. It includes a central processing unit (CPU), a memory, and a computer program stored in the memory and executable on the CPU, and can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) or a program loaded from a storage section into a random access memory (RAM). The RAM also stores various programs and data required for the operation of the computer system. The CPU, the ROM, and the RAM are connected to one another via a bus, to which an input/output (I/O) interface is also connected.
The following components are connected to the I/O interface: an input section including a keyboard, a mouse, and the like; an output section including a display such as an organic light-emitting diode (OLED) display or a light-emitting diode (LED) display, and a speaker; a storage section including a hard disk and the like; and a communication section including a network interface card such as a LAN card or a modem. The communication section performs communication processing via a network such as the Internet. A drive is also connected to the I/O interface as needed. A removable medium such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive as necessary, so that a computer program read out from it is installed into the storage section as needed.
In particular, the processes described in the above flowcharts may be implemented as computer software programs according to the present embodiment. For example, the present embodiments include a computer program product comprising a computer program tangibly embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium.
The flowchart and schematic diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to the present embodiments. In this regard, each block in the flowchart or schematic diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the schematic and/or flowchart illustration, and combinations of blocks in the schematic and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the present embodiment may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes an acquisition module, a recognition module, and a prediction module. The names of these modules do not, in some cases, constitute a limitation of the modules themselves; for example, the recognition module may also be described as a "feature extraction module".
On the other hand, the present embodiment also provides a nonvolatile computer storage medium, which may be the nonvolatile computer storage medium included in the apparatus in the foregoing embodiment, or may be a nonvolatile computer storage medium that exists separately and is not assembled into a terminal. The non-volatile computer storage medium stores one or more programs that, when executed by a device, cause the device to implement the method of generating a user representation as provided in the above embodiments.
It should be understood that the above examples are given to illustrate the present application clearly and are not intended to limit it. Various other modifications and variations may be made by those skilled in the art in light of the above description; this description is not intended to be exhaustive or to limit the application to the precise forms disclosed.

Claims (13)

1. A user portrait generation method, comprising:
acquiring a user image;
inputting the user image into a neural network model for recognition to obtain face attribute information and/or body attribute information of the user;
and obtaining a predicted value of the occupational information and/or personality information of the user according to the face attribute information and/or body attribute information of the user, and generating the user portrait, based on at least one of a preset mapping relation between face attribute information and occupational information, a mapping relation between face attribute information and personality information, a mapping relation between body attribute information and occupational information, and a mapping relation between body attribute information and personality information.
2. The method of claim 1, wherein the neural network model is a convolutional neural network model.
3. The method of claim 2, wherein the convolutional neural network model is a lightweight network model.
4. The method of claim 1,
the acquiring of the user image includes: acquiring a face image and/or a human body image of a user;
the step of inputting the user image into a neural network model for recognition to obtain the face attribute information and/or the body attribute information of the user comprises the following steps: inputting the user face image into a neural network model for recognition to obtain the face attribute information of the user and/or inputting the user human body image into the neural network model for recognition to obtain the human body attribute information of the user.
5. The method of claim 1,
the face attribute information comprises at least one of gender information, expression information, age information, whether glasses are worn and whether a beard is present, and the body attribute information comprises hair style information and/or clothing information.
6. A user portrait generation apparatus, comprising:
the acquisition module is used for acquiring a user image;
the neural network model is used for identifying the input user image to obtain the face attribute information and/or the body attribute information of the user;
and the prediction module is used for obtaining a predicted value of the occupational information and/or personality information of the user according to the face attribute information and/or body attribute information of the user, and generating the user portrait, based on at least one of a preset mapping relation between face attribute information and occupational information, a mapping relation between face attribute information and personality information, a mapping relation between body attribute information and occupational information, and a mapping relation between body attribute information and personality information.
7. The apparatus of claim 6, wherein the neural network model is a convolutional neural network model.
8. The apparatus of claim 7, wherein the convolutional neural network model is a lightweight network model.
9. The apparatus of claim 6,
the acquisition module is used for acquiring a face image and/or a human body image of a user;
the neural network model is used for identifying the face image of the user to obtain face attribute information of the user and/or identifying the human body image of the user to obtain human body attribute information of the user.
10. The apparatus of claim 9, further comprising: the system comprises a first image collector for collecting a face image of a user and/or at least one second image collector for collecting a human body image of the user.
11. The apparatus of claim 6,
the face attribute information comprises at least one of gender information, expression information, age information, whether glasses are worn and whether a beard is present, and the body attribute information comprises hair style information and/or clothing information.
12. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1-5 when executing the program.
13. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN202011185852.4A 2020-10-30 2020-10-30 User portrait generation method and device, computer equipment and storage medium Pending CN112307110A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011185852.4A CN112307110A (en) 2020-10-30 2020-10-30 User portrait generation method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011185852.4A CN112307110A (en) 2020-10-30 2020-10-30 User portrait generation method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112307110A true CN112307110A (en) 2021-02-02

Family

ID=74332281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011185852.4A Pending CN112307110A (en) 2020-10-30 2020-10-30 User portrait generation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112307110A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313470A (en) * 2021-06-10 2021-08-27 郑州科技学院 Employment type evaluation method and system based on big data
CN113313470B (en) * 2021-06-10 2023-06-09 郑州科技学院 Employment type assessment method and system based on big data

Similar Documents

Publication Publication Date Title
Xie et al. Scut-fbp: A benchmark dataset for facial beauty perception
Yan et al. Multi-attributes gait identification by convolutional neural networks
CN108288051B (en) Pedestrian re-recognition model training method and device, electronic equipment and storage medium
CN109815826A (en) The generation method and device of face character model
CN110532996A (en) The method of visual classification, the method for information processing and server
CN111553419B (en) Image identification method, device, equipment and readable storage medium
Ahmed et al. An integrated real-time deep learning and belief rule base intelligent system to assess facial expression under uncertainty
CN110378208B (en) Behavior identification method based on deep residual error network
US20220391611A1 (en) Non-linear latent to latent model for multi-attribute face editing
CN113378706B (en) Drawing system for assisting children in observing plants and learning biological diversity
US20220130172A1 (en) Systems and methods for matching facial images to reference images
CN116547721A (en) Digital imaging and learning system and method for analyzing pixel data of an image of a hair region of a user's head to generate one or more user-specific recommendations
CN112733602A (en) Relation-guided pedestrian attribute identification method
Pikoulis et al. Leveraging semantic scene characteristics and multi-stream convolutional architectures in a contextual approach for video-based visual emotion recognition in the wild
Ma et al. Landmark-based facial feature construction and action unit intensity prediction
CN111860250A (en) Image identification method and device based on character fine-grained features
Das et al. Gradient-weighted class activation mapping for spatio temporal graph convolutional network
CN110598097A (en) Hair style recommendation system, method, equipment and storage medium based on CNN
Zhi et al. Dynamic facial expression feature learning based on sparse RNN
Azzakhnini et al. Combining facial parts for learning gender, ethnicity, and emotional state based on rgb-d information
CN112307110A (en) User portrait generation method and device, computer equipment and storage medium
CN114239754A (en) Pedestrian attribute identification method and system based on attribute feature learning decoupling
CN111950362A (en) Golden monkey face image identification method, device, equipment and storage medium
CN116311472A (en) Micro-expression recognition method and device based on multi-level graph convolution network
CN111325173A (en) Hair type identification method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination