CN115168745A - Virtual character image recreating method and system based on image technology - Google Patents

Publication number
CN115168745A
Authority
CN
China
Prior art keywords
image
user
character
virtual character
matrix
Prior art date
Legal status (assumed; not a legal conclusion)
Granted
Application number
CN202211069029.6A
Other languages
Chinese (zh)
Other versions
CN115168745B (en)
Inventor
张卫平
黄筱雨
丁烨
张思琪
张伟
李显阔
李蕙男
Current Assignee (listed assignee may be inaccurate)
Global Digital Group Co Ltd
Original Assignee
Global Digital Group Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Global Digital Group Co Ltd filed Critical Global Digital Group Co Ltd
Priority to CN202211069029.6A priority Critical patent/CN115168745B/en
Publication of CN115168745A publication Critical patent/CN115168745A/en
Application granted granted Critical
Publication of CN115168745B publication Critical patent/CN115168745B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F16/9536 Search customisation based on social or collaborative filtering
    • G06F16/535 Filtering based on additional data, e.g. user or group profiles
    • G06F16/538 Presentation of query results (still-image retrieval)
    • G06F16/9538 Presentation of query results (web retrieval)
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands


Abstract

The invention relates to a method and a system for re-creating the image of a virtual character based on image technology. The re-creation method takes a reference template of the virtual character as the original template for creating a new image. A reference image medium provided by a user is analyzed by image technology to obtain at least one reference image feature of the reference character image; a first image set is generated from those features, and a score set is collected from a plurality of users over the plurality of virtual character images in the first image set. A recommendation algorithm then produces a recommendation ranking over a list of virtual character images recommended for the current user. Finally, combined with keywords of the content to be displayed, a re-created image is generated, given to the virtual character, and displayed to the current user.

Description

Virtual character image recreating method and system based on image technology
Technical Field
The invention relates to the technical field of electronic digital data processing, and in particular to a method and a system for re-creating virtual character images based on image technology.
Background
An image is the overall impression of and evaluation by the public of an individual: a comprehensive reflection of a character's intrinsic qualities and outward appearance. With the rapid development of digital virtual technology, programs and application scenes presented by virtual humans are increasingly common in society, and people's expectations for the image of virtual humans are rising accordingly.
Existing virtual character images are generally generated by a program in a fixed form. Even where a variable image-setting algorithm is included, the image cannot be flexibly changed according to the theme to be displayed and the personalized requirements of the user, so the virtual character image has obvious limitations.
Among related published technical solutions, publication CN114303116A proposes a method for controlling virtual roles with a multi-modal model: internal models can be combined so that the virtual character makes credible responses such as emotions, and changes to a plurality of factors, including the character's personality, are generated through interactions with users. Publication US20200027271A1 proposes optimizing the subsequent images and related actions of a virtual human by analyzing the posture and actions of the current virtual character against a database of two-dimensional and three-dimensional images. KR102223444B1 proposes a display system for fashion matching using a virtual human: it captures physical data of a real user and generates a virtual character, so that the user can change elements of the virtual character in the virtual system, such as clothes, make-up and posture, thereby examining the matching effect of a series of image elements. All of the above solutions address avatar creation and generation, but few address methods and systems for adaptive avatar optimization.
The foregoing discussion of the background art is intended to facilitate an understanding of the present invention only. This discussion is not an acknowledgement or admission that any of the material referred to is part of the common general knowledge.
Disclosure of Invention
The invention aims to provide a method and a system for re-creating a virtual character image based on image technology. The re-creation method takes a reference template of a virtual character as the original template for creating a new image. A reference image medium provided by a user is analyzed by image technology to obtain at least one reference image feature of the reference character image, and a first image set is generated from those features; a score set is then collected from a plurality of users over the plurality of virtual character images in the first image set. A recommendation algorithm further produces a recommendation ranking over a list of virtual character images recommended for the current user; finally, combined with keywords of the content to be displayed, a re-created image is generated, given to the virtual character, and displayed to the current user.
The invention adopts the following technical scheme:
a virtual character image recreating system based on image technology is characterized in that the recreating system comprises an acquisition module, an image processing module, a database, an image generation module, an interaction module and a calculation module; wherein
The acquisition module is used for acquiring a reference template of a virtual character from a database; and a material for obtaining the related image of the specified characteristics from the image element library;
the image processing module is used for analyzing reference image media provided by a user and acquiring a plurality of characteristics of image characters in the reference image media;
the database is used for storing a reference template of a virtual character and the image element library;
the image generation module is used for generating the image of the virtual character, and comprises the steps of calling a specified image element from the image element library and synthesizing the image element into an virtual character image so as to generate a new virtual character image;
the interaction module is used for interacting with a user and receiving the selection and the grading of the virtual character image by the user; the method also comprises the steps of showing the virtual character to the user;
the calculating module is used for calculating the similarity of a plurality of users;
the re-creation system implements a virtual character image re-creation method based on image technology; the re-creation method comprises the following steps:
S100: acquiring a reference template of the image of a virtual character;
S200: on the basis of the reference template, generating a first image set in combination with a reference image medium provided by the user, the first image set comprising the first images of n virtual characters; randomly extracting m images from the first image set and randomly dividing them into h batches to be displayed to p users;
S300: each of the p users selecting, from the virtual character images displayed in each batch, the image of at least one virtual character of interest; the finally selected images of q virtual characters are defined as q second images and form a second image set;
S400: analyzing, by a collaborative filtering method, the second image set formed by the selections of the p users, generating a predicted score list of the current user over the n virtual characters of the first image set, and generating a recommended image list from the predicted score list;
S500: according to keywords of the content to be displayed, acquiring image elements related to the keywords and synthesizing them with a virtual character in the recommended image list, thereby obtaining the final virtual character image;
preferably, in step S200, the reference image medium provided by the user comprises a picture or a video clip containing a reference character; one or more image features of the reference character image in the reference image medium are analyzed by image technology; based on the one or more image features of the reference character image, image materials are retrieved from the image element library and synthesized onto the reference template, so as to generate the first images of the n virtual characters;
preferably, in step S300, the user scores each of the selected second images;
preferably, step S400 further comprises the following steps:
S401: generating an original scoring matrix Y based on the scores given by the p users to the q second images of the virtual characters;
S402: extracting a user-feature matrix U by singular value decomposition of the original scoring matrix.
Preferably, in step S402, the singular value decomposition of the original scoring matrix to extract the user-feature matrix U is calculated as follows:
Y ≈ U · S · C^T    (Formula 1)
In Formula 1, Y is the original scoring matrix; U is the user-feature matrix, a matrix-vector description between the users and the latent features; C is the image-feature matrix, a matrix-vector description between the virtual character images and the latent features; S is a diagonal matrix, the z × z singular-value matrix after dimensionality reduction, where z is the reduced dimension;
wherein the latent features are the element types contained in each image of the virtual character;
preferably, step S402 further comprises calculating the similarities of the p users by using the user-feature matrix U, and assembling the set of calculated similarities into a user similarity matrix SM; the user similarity matrix SM describes the similarity of the preferences of the p users over the n first images;
preferably, after the user similarity matrix SM is obtained, the method further comprises:
calculating a neighbour set {Neib} of the current user with respect to a preset range value r, based on the user similarity matrix SM;
predicting the current user's score for each of the n first images by using the neighbour set {Neib} and the original scoring matrix Y, and generating the predicted score list of the current user;
and sorting the n first images of the first image set based on the predicted score list of the current user, the top v images with the highest scores forming the recommended image list.
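As a minimal numerical sketch of step S402 and Formula 1, the decomposition and truncation can be written as follows; the matrix values and the choice z = 2 are illustrative only, not taken from the patent:

```python
import numpy as np

# Hypothetical 3-user x 4-image scoring matrix (values illustrative only).
Y = np.array([
    [4.0, 8.0, 7.0, 5.0],
    [2.0, 3.0, 6.0, 8.0],
    [7.0, 3.0, 5.0, 8.0],
])

# Full SVD: Y = U_full @ diag(s) @ Vt_full.
U_full, s, Vt_full = np.linalg.svd(Y, full_matrices=False)

# Truncate to z latent features, as in Formula 1 (Y ~ U S C^T).
z = 2
U = U_full[:, :z]           # user-feature matrix (p x z)
S = np.diag(s[:z])          # z x z singular-value matrix
C = Vt_full[:z, :].T        # image-feature matrix (n x z)

Y_approx = U @ S @ C.T      # rank-z approximation of the original scores
```

Each row of U is one user's coordinates in the z-dimensional latent-feature space, which is what the later similarity calculation operates on.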
The beneficial effects obtained by the invention are as follows:
1. the re-creation method analyzes a reference image of interest to the user by image technology, thereby obtaining the features of the image the user is interested in, and generates the images of a plurality of virtual characters for the user to evaluate based on those features;
2. the re-creation method uses a recommendation algorithm to generate an image list of images the current user is likely to favour, as a candidate library for re-creation, and can regenerate a fresh image suited to the theme content based on keywords of the theme content to be displayed, so that the new image better fits the theme the virtual character is to present;
3. unlike algorithms that obtain the user's preference features only after the user has evaluated a very large number of virtual character images, the re-creation method applies an optimized analysis algorithm to the scoring matrix of the user's random evaluations, analyzing user preference and user similarity so as to predict scores for the images the current user has not scored; it thus obtains the current user's preferences for the virtual character while minimizing the user's operations and keeping the images fresh, and greatly improves the accuracy and computational efficiency of re-creating images that fit the user's preferences;
4. the re-creation system adopts a modular design with cooperating units, so software and hardware can be flexibly optimized and replaced later, saving a large amount of later maintenance and upgrade cost.
Drawings
The invention will be further understood from the following description in conjunction with the accompanying drawings. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the embodiments. Like reference numerals designate corresponding parts throughout the different views.
FIG. 1 is a schematic representation of the steps of the inventive re-creation process;
FIG. 2 is a diagram illustrating the screening of the first image set to obtain the second image set according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a virtual character reference template according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an interaction module according to an embodiment of the invention;
fig. 5 is a schematic diagram of the final adjustment of the final virtual character by the user in the embodiment of the present invention.
The drawings illustrate schematically: 400-an interaction module; 410-auxiliary components; 420-a sensor; 430-a display; 440-a processor.
Detailed Description
In order to make the technical solution and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the embodiments thereof; it should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. Other systems, methods, and/or features of the present embodiments will become apparent to one with skill in the art upon examination of the following detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the invention, and be protected by the accompanying claims. Additional features of the disclosed embodiments are described in, and will be apparent from, the detailed description that follows.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components. In the description of the present invention, it should be understood that orientation terms such as "upper", "lower", "left" and "right" are based on the orientations or positional relationships shown in the drawings; they are used only for convenience and simplification of the description, and do not indicate or imply that the device or component referred to must have a specific orientation.
Embodiment 1:
The method of the invention is set forth below in conjunction with specific embodiments.
The image of a virtual character has a high degree of freedom, and the factors users weigh in judging it are subjective and vary widely. To satisfy a user's preferences well, current practice generally includes:
(1) the user describes the preferred image features in detail, and the system classifies, quantifies and re-creates a new image accordingly; however, for users with limited descriptive ability or weak subjective awareness, this method is inefficient;
(2) a large number of virtual character images are generated and judged by the user so as to obtain the user's specific preferences, either as a binary like/dislike decision or as a numeric score; however, the user must perform a large number of operations, and implementation efficiency is low.
To address these defects, the invention combines image analysis technology, matrix singular value decomposition and a collaborative filtering algorithm, and can predict the user's evaluation of a newly re-created virtual character image from the user's judgments of a limited number of virtual character images.
a virtual character image recreating system based on image technology comprises an acquisition module, an image processing module, a database, an image generation module, an interaction module and a calculation module; wherein
The acquisition module is used for acquiring a reference template of a virtual character from a database; and a material for obtaining the relevant image of the specified characteristics from the image element library;
the image processing module is used for analyzing reference image media provided by a user and acquiring a plurality of characteristics of image characters in the reference image media;
the database is used for storing a reference template of a virtual character and the image element library;
the image generation module is used for generating the image of the virtual character, and comprises the steps of calling a specified image element from the image element library and synthesizing the image element into an virtual character image so as to generate a new virtual character image;
the interaction module is used for interacting with a user and receiving selection and grading of the user on the virtual character image; the method also comprises the steps of showing the virtual character to the user;
the calculation module is used for calculating the similarity of a plurality of users;
the re-creation system implements a virtual character image re-creation method based on image technology; the re-creation method comprises the following steps:
S100: acquiring a reference template of the image of a virtual character;
S200: on the basis of the reference template, generating a first image set in combination with a reference image medium provided by the user, the first image set comprising the first images of n virtual characters; randomly extracting m images from the first image set and randomly dividing them into h batches to be displayed to p users;
S300: each of the p users selecting, from the virtual character images displayed in each batch, the image of at least one virtual character of interest; the finally selected images of q virtual characters are defined as q second images and form a second image set;
S400: analyzing, by a collaborative filtering method, the second image set formed by the selections of the p users, generating a predicted score list of the current user over the n virtual characters of the first image set, and generating a recommended image list from the predicted score list;
S500: according to keywords of the content to be displayed, acquiring image elements related to the keywords and synthesizing them with a virtual character in the recommended image list, thereby obtaining the final virtual character image;
preferably, in step S200, the reference image medium provided by the user comprises a picture or a video clip containing a reference character; one or more image features of the reference character image in the reference image medium are analyzed by image technology; based on the one or more image features of the reference character image, image materials are retrieved from the image element library and synthesized onto the reference template, so as to generate the first images of the n virtual characters;
preferably, in step S300, the user scores each of the selected second images;
preferably, step S400 further comprises the following steps:
S401: generating an original scoring matrix Y based on the scores given by the p users to the q second images of the virtual characters;
S402: extracting a user-feature matrix U by singular value decomposition of the original scoring matrix.
Preferably, in step S402, the singular value decomposition is performed on the raw scoring matrix to extract a user-feature matrix U, which includes the following calculation methods:
Figure 746136DEST_PATH_IMAGE002
formula 1;
in formula 1, Y is the original scoring matrix, U is a user-feature matrix representing matrix vector description between a user and potential features; c is an image-feature matrix which represents matrix vector description between the image and the potential features of the virtual character; s is a diagonal matrix and represents a singular value matrix z x z after dimensionality reduction, wherein z is dimensionality reduction;
wherein the latent features are the element types contained in each image of the virtual character;
preferably, step S402 further comprises calculating the similarities of the p users by using the user-feature matrix U, and assembling the set of calculated similarities into a user similarity matrix SM; the user similarity matrix SM describes the similarity of the preferences of the p users over the n first images;
preferably, after the user similarity matrix SM is obtained, the method further comprises:
calculating a neighbour set {Neib} of the current user with respect to a preset range value r, based on the user similarity matrix SM;
predicting the current user's score for each of the n first images by using the neighbour set {Neib} and the original scoring matrix Y, and generating the predicted score list of the current user;
sorting the n first images of the first image set based on the predicted score list of the current user, the top v images with the highest scores forming the recommended image list;
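The random extraction and batching of step S200 can be sketched as follows; the function name, parameter names and the concrete numbers (n, m, h, p) are illustrative only:

```python
import random

def assign_batches(n, m, h, p, seed=0):
    # For each of the p users, draw m distinct images out of the n first
    # images and split the draw into h near-equal batches for display.
    # (Names and the fixed seed are illustrative, not from the patent.)
    rng = random.Random(seed)
    assignments = {}
    for user in range(p):
        drawn = rng.sample(range(n), m)                      # m distinct indices
        assignments[user] = [drawn[i::h] for i in range(h)]  # h batches
    return assignments

demo = assign_batches(n=8, m=6, h=3, p=3)
```

Because each user sees only m of the n images, the resulting scoring matrix is incomplete, which is exactly what steps S300 and S400 then compensate for.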
In step S100, the reference template of the virtual character means a virtual character having only basic elements, as shown in fig. 3; for example, an avatar with only a basic skeleton and no attached muscle or contour design, or a three-dimensional design with only wireframes and no colouring, or an avatar composed only of several character modules such as head, torso and limbs, without a complete overall composition, or an avatar with only a character body and no decorative design such as clothing, hairstyle or skin tone;
the reference template exists so that the virtual character can take on a new image formed by combination with other image elements; therefore, the elements that the reference template does or does not contain can be set according to the actual application requirements of the virtual character;
in step S200, the user (including the controller and the designer of the virtual character) can provide a guiding direction or theme for the subsequent image re-creation by providing a plurality of the reference media to the re-creation system; people frequently see large numbers of interesting image design elements through many channels, for example image elements of the clothing of the Chinese Tang dynasty, or image elements on a beach-wear theme; however, most people without image-design training cannot accurately describe the concrete representation of their favourite elements, because these elements may involve many details of colour, pattern and shape;
Therefore, a large amount of character image information recorded in the reference media can be acquired by the relevant image technology. In some embodiments, the image analysis of the character images in the reference media comprises the following steps:
(1) preprocessing the reference media, including adjusting the contrast and sharpness of pictures, decomposing videos frame by frame, cropping images and the like, so as to decompose the reference media into a plurality of images and obtain better source data for image feature analysis; where the reference media contain a plurality of characters, the reference character is the character specified by the user, and the user may also specify which image elements of the character are to be referenced;
(2) describing the images digitally and extracting the features of the image elements; many algorithms exist for describing images, the most common including SIFT descriptors, fingerprint hash functions, bundling-features algorithms and other hash functions; different algorithms can be chosen for different images and image elements, for example extracting image features by local N-order moments of the image;
(3) encoding the image feature information and building a lookup table over the encoded mass of images; higher-resolution images are pooled and down-sampled to reduce the amount of computation before their features are extracted and encoded;
(4) similarity matching: using the encoded value of an image, performing a global or local similarity calculation against the data of a search engine over the image element library of the database; a threshold is set according to the required robustness, and image elements with high similarity to the target features or elements are cached;
(5) combining or decomposing these image elements in preparation for synthesis onto the reference template.
The algorithms and theory involved in the above image processing steps are well known and applied by persons skilled in the relevant image processing field, and are not described in detail here.
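Steps (2) to (4) above can be sketched with a deliberately simple fingerprint. The patent names SIFT and hash-based descriptors; the block-mean hash below is a substitute chosen purely for illustration, not the patent's method:

```python
import numpy as np

def average_hash(gray, grid=8):
    # Coarse fingerprint of a grayscale image (2-D float array): pool the
    # image down to grid x grid block means, then threshold at the global
    # mean. A simplified stand-in for the descriptor/encoding steps.
    h, w = gray.shape
    gray = gray[: h - h % grid, : w - w % grid]   # crop to an even grid
    bh, bw = gray.shape[0] // grid, gray.shape[1] // grid
    pooled = gray.reshape(grid, bh, grid, bw).mean(axis=(1, 3))
    return (pooled > pooled.mean()).astype(np.uint8).ravel()

def similarity(code_a, code_b):
    # Fraction of matching bits, in [0, 1]; in step (4) this would be
    # compared against a threshold chosen for the required robustness.
    return float(np.mean(code_a == code_b))
```

The short binary code is what would be stored in the lookup table of step (3); a global brightness shift leaves the code unchanged, since thresholding is relative to the image's own mean.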
for example, in one implementation method, a user provides a woman image of a Tang dynasty figure as the reference figure medium, and a large number of Tang dynasty clothing, makeup, jewelry, postures, headwear and other figure elements can be called from the figure element library through image analysis processing; synthesizing the image elements into the reference template by adopting random combination, thereby generating a large number of character images as the first image and forming the first image set;
further, as shown in fig. 2, the first image set is randomly assigned for scoring so as to obtain the scores of p users;
the following table illustrates the case p = 3, n = 8; under the random allocation mechanism of step S200, the p users do not score all n characters of the first image set, which is why the subsequent steps S300 and S400 are needed; to balance the efficiency and accuracy of the algorithm, the number m of extracted images and the number h of batches need to be set reasonably by the skilled person according to the specific application, and no fixed rule is imposed here;
user scoring Image a Image b Image c Image d Image e Image f Image g Image h
User A 4 8 7 5
User B 2 3 6 8
User C 7 3 5 8 8
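Scores like those above can be held in a sparse scoring matrix with np.nan marking unscored cells; the column positions of the blanks below are assumed for illustration only, since the table does not fix them:

```python
import numpy as np

nan = np.nan
# p = 3 users, n = 8 images; blank-cell positions are illustrative assumptions
Y = np.array([
    [4,   8,   7,   5,   nan, nan, nan, nan],  # User A
    [nan, nan, 2,   3,   6,   8,   nan, nan],  # User B
    [7,   nan, 3,   nan, 5,   nan, 8,   8  ],  # User C
])
filled = np.count_nonzero(~np.isnan(Y))
print(filled, Y.size)  # 13 scored cells out of 24
```

Over half the cells are empty, which is exactly the sparsity problem that steps S300 and S400 address.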
Further, since the scores given by the p users to the n first images are partially missing, steps S300 and S400 address the following problems:
(1) Problem of potential feature extraction for users:
the latent features are the types of elements contained in the avatar of the virtual character, for example the type of clothing (western suit, evening dress, casual wear; or Chinese, Western, Indian style, etc.), or similarly the type of makeup;
extracting a user's latent features is the precondition for analyzing that user's preferences; accurate and complete extraction of individual latent features greatly improves the final recommendation precision and helps improve user experience and satisfaction; the prior art commonly uses a label-based slope one algorithm, which compensates to some extent for the lack of personalized reference data, but since scoring a set of images is a subjective process, labels usually cannot fully represent the main characteristics of a user or an item (e.g. a movie in classical recommender systems); the latent feature matrix generated by SVD, in contrast, represents the user's latent feature vector well;
(2) Analyzing the user similarity according to the user characteristics:
user similarity analysis is widely applied in collaborative-filtering recommendation systems; it is used to identify users with the same preferences and characteristics, i.e. the same expressed behaviors or latent common interests; by extracting the group of users with similar latent features and running the user-feature analysis and recommendation algorithm within that group, the influence of dissimilar users on the algorithm is effectively avoided, improving both recommendation precision and computation speed;
(3) Problem of score matrix too sparse:
a sparse scoring matrix is one of the long-standing difficulties in user preference recommendation, since user-feature analysis and recommendation algorithms perform better on dense matrices; one effective solution is to interpolate the missing values of the matrix, another is to reduce the dimensionality of the matrix; this embodiment computes a suitable decomposition dimensionality from the data obtained in step S300, which handles the matrix sparsity problem well;
further, in formula 1, SVD denotes singular value decomposition; Y is the original scoring matrix; U is the user-feature matrix, a matrix-vector description of the relationship between users and latent features, whose element values represent the degree of a user's preference for a latent feature; C is the image-feature matrix, a matrix-vector description of the relationship between the virtual character images and the latent features; S is a diagonal z × z matrix of singular values after dimensionality reduction, where z is the reduced dimension; in this embodiment an optimal value of z can be found by trial, trading off the time taken by the dimensionality reduction against the final recommendation accuracy; after the decomposition, the users' scores for the latent features present in the images can be obtained;
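A sketch of the decomposition described above using numpy's SVD; the scoring values are illustrative, and missing entries are assumed to have been filled in beforehand (e.g. with user mean scores):

```python
import numpy as np

# hypothetical dense scoring matrix Y: rows = users, columns = images
Y = np.array([[4., 8., 7., 5.],
              [2., 3., 6., 8.],
              [7., 3., 5., 8.]])

U, s, Ct = np.linalg.svd(Y, full_matrices=False)
z = 2                                    # reduced dimension, tuned by trial
Uz, Sz, Cz = U[:, :z], np.diag(s[:z]), Ct[:z, :]   # U, S, C of formula 1
Y_approx = Uz @ Sz @ Cz                  # rank-z approximation of Y
err = np.linalg.norm(Y - Y_approx)
print(Y_approx.shape, round(float(err), 3))
```

Each row of Uz is one user's latent-feature preference vector, which the similarity computation below operates on.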
further, in step S402, the similarity of the p users and the user similarity matrix SM are calculated from the user-feature matrix U; the user similarity matrix SM is calculated by the following formula:
first, the similarity sim(a, b) is calculated, where a and b are any two distinct latent features:

$$\mathrm{sim}(a,b)=\frac{\sum_{u\in U_{ab}}\left(u_a-\bar{u}_a\right)\left(u_b-\bar{u}_b\right)}{\sqrt{\sum_{u\in U_{ab}}\left(u_a-\bar{u}_a\right)^2}\,\sqrt{\sum_{u\in U_{ab}}\left(u_b-\bar{u}_b\right)^2}}$$

formula 2;

in formula 2, $U_{ab}$ is the set of users who have preferences for both latent features a and b; $\bar{u}_a$ is the users' average score for latent feature a; $\bar{u}_b$ is the users' average score for latent feature b; $u_a$ is the score of user u for latent feature a, and $u_b$ is the score of user u for latent feature b; user u is one of the p users;
by the above calculation, user similarity is computed over every pair of distinct latent features, and combining all the results yields the user similarity matrix SM; each element of the user similarity matrix SM is the similarity of two users with respect to a latent feature;
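The pairwise similarity computation and the assembly of SM can be sketched as follows, assuming a complete user-feature matrix U from the SVD step; the values and function names are illustrative:

```python
import numpy as np

def pearson_sim(a, b):
    """Formula 2: Pearson-style correlation of two score vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    da, db = a - a.mean(), b - b.mean()
    denom = np.sqrt((da ** 2).sum() * (db ** 2).sum())
    return float((da * db).sum() / denom) if denom else 0.0

def similarity_matrix(U):
    """SM[i, j] = similarity between user i's and user j's feature preferences."""
    p = U.shape[0]
    SM = np.eye(p)                       # a user is fully similar to itself
    for i in range(p):
        for j in range(i + 1, p):
            SM[i, j] = SM[j, i] = pearson_sim(U[i], U[j])
    return SM

# hypothetical user-feature matrix: 3 users x 3 latent features
U = np.array([[0.9, 0.1, 0.4],
              [0.8, 0.2, 0.5],
              [0.1, 0.9, 0.3]])
SM = similarity_matrix(U)
print(SM.round(2))
```

Users 0 and 1 have nearly identical preference profiles and so score much more similar to each other than either does to user 2, which is what the neighbor-set selection below relies on.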
further, in an embodiment, a range value r is set; based on the user similarity matrix SM, the preferences of any two users for each latent feature are compared, and the r users most similar to the current user are collected to form the neighbor set {Neib};
further, the current user's score for each of the n first images is predicted based on the neighbor set {Neib} and the original scoring matrix Y; since similar users have similar degrees of preference for similar latent features, a user's score for an image can be estimated by analyzing the degree of difference in latent features between any two images and using the scores that similar users gave to those two images; this method is used to fill in the unscored images in the original scoring matrix Y from the scores of any one user;
a preferred implementation first calculates the degree of difference between two images as follows:

$$\mathrm{dev}_{e,f}=\frac{\sum_{u\in S_{e,f}(x)}\left(u_e-u_f\right)}{\mathrm{card}\!\left(S_{e,f}(x)\right)}$$

formula 3;

in formula 3, $\mathrm{dev}_{e,f}$ is the difference index between image e and image f, where image e and image f are any two first images in the first image set; $S_{e,f}(x)$ is the set of users who have scored both image e and image f, and it is required that $S_{e,f}(x)\subseteq\{\mathrm{Neib}\}$; card() is the number of elements in a set, so $\mathrm{card}(S_{e,f}(x))$ is the number of such users; $u_e$ and $u_f$ are the scores of user u for image e and image f respectively;
further, a prediction score for each character is obtained using the following calculation formula;
$$Pd_{w,e}=\frac{\sum_{i\in S(w)}\left(\mathrm{dev}_{e,i}+W_i\right)}{\mathrm{card}\!\left(S(w)\right)}$$

formula 4;

in formula 4, w is a user who has not scored image e; $Pd_{w,e}$ is the predicted score of user w for image e; S(w) is the set of images in the first image set that user w has scored; i is the i-th image in S(w); $W_i$ is user w's score for the i-th image; $\mathrm{dev}_{e,i}$ is the difference index between image e and the i-th image; and card(S(w)) is the number of first images that user w has scored.
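A sketch of formulas 3 and 4 (a slope-one style prediction), assuming numpy and a small scoring matrix with np.nan for missing scores; for simplicity the restriction of S_{e,f}(x) to the neighbor set is not applied here, and the data are illustrative:

```python
import numpy as np

def slope_one_predict(Y, w, e):
    """Predict user w's score for image e from a p x n matrix Y
    that uses np.nan for unscored cells (formulas 3 and 4)."""
    scored = [i for i in range(Y.shape[1]) if i != e and not np.isnan(Y[w, i])]
    preds = []
    for i in scored:
        both = ~np.isnan(Y[:, e]) & ~np.isnan(Y[:, i])  # users who scored e and i
        if not both.any():
            continue
        dev = float((Y[both, e] - Y[both, i]).mean())   # formula 3: dev_{e,i}
        preds.append(dev + Y[w, i])                     # formula 4 summand
    return sum(preds) / len(preds) if preds else np.nan

nan = np.nan
Y = np.array([[5., 3., 2.],
              [3., 4., nan],
              [nan, 2., 5.]])
pred = slope_one_predict(Y, w=1, e=2)
print(round(pred, 3))  # 2.5
```

User 1's missing score for image 2 is estimated from the average deviations between image 2 and the images user 1 did score, exactly the averaging of formula 4.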
Example two:
this embodiment should be understood to include at least all of the features of any of the foregoing embodiments and further modifications thereon;
further, the recommended character list obtained through the above steps contains a plurality of character images based on the user's personal preferences, but these have not yet been reasonably adjusted to the actual application scenario of the virtual human;
for example, for a news broadcast or an entertainment program presented by a virtual character, the desired character image should be given a further preference setting; generally, a virtual character for a news broadcast program requires a dignified and serious image, whereas the image of a virtual character for an entertainment program may be more exaggerated or playful;
as mentioned above, the main tone of the character image is defined by several keywords such as "serious" or "dignified", which can serve as the starting point of the character design;
therefore, in some preferred embodiments, the image generation module needs to obtain one or more keywords, perform semantic analysis on them to generate a plurality of feature labels, and then obtain from the database at least one image element meeting these feature criteria;
a method of obtaining keywords, in some embodiments, includes active input by a user through the interaction module;
in some embodiments, a related system running the virtual character performs topic analysis according to the display content to be implemented by the virtual character, so as to obtain keywords;
in some embodiments, the re-creation system obtains currently widely discussed topics from the internet as keywords, in combination with recent current affairs, weather, social media trending information, and the like.
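A minimal sketch of the keyword-to-feature-label step described above; the fixed mapping table stands in for the semantic analysis, and all keywords and labels are hypothetical:

```python
# hypothetical mapping from design keywords to feature labels; a real system
# would derive these via semantic analysis rather than a fixed lookup table
KEYWORD_LABELS = {
    "serious":   ["dark-suit", "neutral-makeup", "upright-posture"],
    "dignified": ["formal-dress", "subtle-jewelry"],
    "playful":   ["bright-colors", "exaggerated-hairstyle"],
}

def labels_for(keywords):
    """Collect the feature labels implied by one or more keywords."""
    labels = []
    for kw in keywords:
        labels.extend(KEYWORD_LABELS.get(kw.lower(), []))
    return sorted(set(labels))

tags = labels_for(["serious", "dignified"])
print(tags)
```

The resulting labels would then be used to query the image element library for elements meeting the feature criteria.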
Example three:
this embodiment should be understood to include at least all of the features of any of the foregoing embodiments and further modifications thereon;
the user can receive the finally generated virtual character through the interaction module and browse, check and modify the virtual character;
FIG. 4 is a diagram illustrating an exemplary composition of the interactive module; as shown, the interactive module 400 may include auxiliary components 410, sensors 420, a display 430, and a processor 440;
optionally, the user can use, fix and wear the interactive module through the auxiliary component 410; in some embodiments, the auxiliary component 410 may be worn on the face or head of the user; by way of example only, the auxiliary component 410 may include glasses, helmets, goggles, masks, contact lenses, and the like, or any combination thereof;
optionally, the sensor 420 may be configured to collect user information related to the user and the environmental conditions of the interactive module; in some embodiments, the user information may include physiological information of the user and information input by the user; for example, the user information may include the user's heart rate, blood pressure, brain activity, biometric data, facial images, expressions, actions performed, or audio uttered by the user; as another example, the user information may include information entered through an input device such as a keyboard, mouse or microphone, or any combination thereof; in some embodiments, the environmental conditions may be data about the surroundings in which the user uses the interactive module, such as ambient temperature, humidity, pressure, the user's geographic location, and the user's orientation and head position;
in some embodiments, the sensor 420 may include an image sensor (e.g., a camera), an audio sensor (e.g., a microphone), a location sensor, a humidity sensor, a biosensor, an ambient light sensor, or the like, or any combination thereof; for example only, the image sensor may be configured to collect facial images of the user, and the microphone may be configured to collect audio messages uttered by the user; further, the sensor 420 sends the collected information to the processor 440;
optionally, the display 430 is used to display information; the display 430 may display the virtual character as well as one or more text or pattern messages, enabling interaction between the virtual character and the user through these messages; in some embodiments, display 430 may be a physical screen; in some embodiments, display 430 may be a transparent display, for example in the visor or face mask of a helmet; in some embodiments, display 430 may be a display lens separate from the visor or face mask of the helmet;
optionally, processor 440 supports the running of the final virtual character generated by the image generation module; processor 440 may run one or more applications and run the virtual character in a suitable manner, including running the necessary software environment and data links, and running a suitable display interface for fully presenting the virtual character, such as a three-dimensional customized environment controllable by the user or an environment that can display the virtual character enlarged;
further, the interaction module comprises a plurality of parameters for displaying the final virtual character image; such as color codes, size codes, proportion of five sense organs, garment systems, makeup systems, etc.;
further, the interaction module provides operation methods for the user to customize the parameters of the final virtual character, including selecting a parameter or a character part by mouse, touch or keyboard, and changing one or more parameters by sliding, rotating, clicking, text input and the like, so as to re-optimize the final virtual character image;
as shown in fig. 5, by adjusting the parameters of the virtual character for a plurality of times, the user can perform active optimization again on the final virtual character image, so as to generate a series of multiple changes based on the same virtual character image.
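The parameter set and the generation of variants by repeated adjustment can be sketched as follows; the parameter names and default values are hypothetical:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class AvatarParams:
    """Hypothetical display parameters for the final virtual character."""
    color_code: str = "#C8A25A"
    size_code: int = 3
    face_ratio: float = 1.0      # proportion of the five facial features
    outfit: str = "tang-robe"
    makeup: str = "classical"

base = AvatarParams()
# each user adjustment yields a new variant of the same virtual character
variants = [replace(base, makeup=m) for m in ("classical", "stage", "subtle")]
print(len(variants), variants[1].makeup)
```

Keeping the parameter object immutable means every adjustment produces a distinct variant while the original character image is preserved, matching the "series of multiple changes based on the same virtual character image" above.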
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Although the invention has been described above with reference to various embodiments, it should be understood that many changes and modifications can be made without departing from the scope of the invention. That is, the methods, systems, and devices discussed above are examples. Various configurations may omit, substitute, or add various procedures or components as appropriate. For example, in alternative configurations, the methods may be performed in an order different than described, and/or various components may be added, omitted, and/or combined. Moreover, features described with respect to certain configurations may be combined in various other configurations, as different aspects and elements of the configurations may be combined in a similar manner. Further, elements therein may be updated as technology evolves, i.e., many of the elements are examples and do not limit the scope of the disclosure or claims.
Specific details are given in the description to provide a thorough understanding of the exemplary configurations including implementations. However, configurations may be practiced without these specific details, for example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configurations. This description provides example configurations only, and does not limit the scope, applicability, or configuration of the claims. Rather, the foregoing description of the configurations will provide those skilled in the art with an enabling description for implementing the described techniques. Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure.
In conclusion, the foregoing detailed description is intended to be illustrative rather than limiting and does not limit the scope of the invention. After reading this description, the skilled person can make various changes or modifications to the invention, and such equivalent changes and modifications likewise fall within the scope of the invention as defined by the claims.

Claims (9)

1. A virtual character image recreating system based on image technology is characterized in that the recreating system comprises an acquisition module, an image processing module, a database, an image generation module, an interaction module and a calculation module; wherein
The acquisition module is used for acquiring a reference template of a virtual character from a database; and a material for obtaining the relevant image of the specified characteristics from the image element library;
the image processing module is used for analyzing reference image media provided by a user and acquiring a plurality of characteristics of image characters in the reference image media;
the database is used for storing a reference template of the virtual character and the image element library;
the image generation module is used for generating the image of the virtual character, including calling specified image elements from the image element library and synthesizing the image elements into a virtual character image, so as to generate a new virtual character image;
the interaction module is used for interacting with a user and receiving selection and grading of the user on the virtual character image; the method also comprises the steps of showing the virtual character to the user;
the calculating module is used for calculating the similarity of a plurality of users;
the recreating system comprises a virtual character recreating method based on image technology; the re-creation method comprises the following steps:
s100: acquiring a reference template of the image of the virtual character;
s200: generating a first image set by combining a reference image medium provided by a user on the basis of the reference template; the first image set comprises first images of n virtual characters; randomly extracting m characters in the first character set, and randomly dividing the m characters into h batches to be displayed to p users;
s300: selecting, by the p users, at least one interesting virtual character image from the virtual character images displayed in each batch; the q virtual character images finally selected are defined as q second images and form a second image set;
s400: analyzing the second image set formed by p users through selection by adopting a collaborative filtering method, generating a prediction scoring list of the current user for n virtual characters in the first image set, and generating a recommended image list through the prediction scoring list;
s500: and according to the keywords of the content to be displayed, acquiring image elements related to the keywords, and synthesizing the image elements with the virtual character in the recommended character list to obtain the final virtual character.
2. The system of claim 1, wherein in step S200, the reference character media provided by the user includes a picture of the reference character, a video clip; analyzing one or more avatar characteristics of a reference character avatar of the reference avatar media by using image techniques; searching image materials from an image element library and synthesizing the image materials to the reference template based on one or more image characteristics of the reference character image so as to generate a first image of n virtual characters.
3. The virtual character image recreating system based on image technology as claimed in claim 2, wherein step S300 further comprises the user scoring each of the selected second character images.
4. An avatar recreating system based on image technology as claimed in claim 3, further comprising the following steps in step S400:
s401: generating an original scoring matrix Y based on the scores of the p users on the second images of the q virtual characters;
s402: and extracting a user-feature matrix U by adopting singular value decomposition based on the original scoring matrix.
5. The virtual character image recreating system based on image technology as claimed in claim 4, wherein in step S402, the original scoring matrix is decomposed with singular values to extract a user-feature matrix U, which includes the following calculation methods:
$$Y \approx U\,S\,C^{\mathrm{T}}$$

formula 1;

in formula 1, Y is the original scoring matrix; U is the user-feature matrix, a matrix-vector description between users and latent features; C is the image-feature matrix, a matrix-vector description between the virtual character images and the latent features; S is a diagonal z × z matrix of singular values after dimensionality reduction, where z is the reduced dimension;
wherein the potential features are element types included in each avatar of the virtual character.
6. An avatar recreating system based on image technology as claimed in claim 5, further comprising in step S402, calculating the similarity of p users using said user-feature matrix U, and obtaining a user similarity matrix SM by the set of calculated similarities of p users; the user similarity matrix SM is used to describe the similarity of preference of p users for n first personas.
7. An avatar recreating system based on image technology as claimed in claim 6, further comprising after obtaining said user similarity matrix SM:
calculating a neighbor set { Neib } of the current user about r based on the user similarity matrix SM and a preset range value r;
predicting the grade of the current user on each of the n first characters by using the neighbor set { Neib } of the current user about r and the original grade matrix Y, and generating a predicted grade list of the current user;
and based on a prediction scoring list of the current user, sequencing n first images in the first image set, and forming the top v images with the highest scores into the recommended image list.
8. An electronic device, comprising: processor, memory and bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the re-creation method comprised in the re-creation system as claimed in any one of claims 1 to 7.
9. A readable storage medium, having stored thereon a computer program for performing, when being executed by a processor, the steps of the re-creation method comprised in the re-creation system as claimed in any one of the claims 1 to 7.
CN202211069029.6A 2022-09-02 2022-09-02 Virtual character image recreating method and system based on image technology Active CN115168745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211069029.6A CN115168745B (en) 2022-09-02 2022-09-02 Virtual character image recreating method and system based on image technology


Publications (2)

Publication Number Publication Date
CN115168745A true CN115168745A (en) 2022-10-11
CN115168745B CN115168745B (en) 2022-11-22

Family

ID=83480931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211069029.6A Active CN115168745B (en) 2022-09-02 2022-09-02 Virtual character image recreating method and system based on image technology

Country Status (1)

Country Link
CN (1) CN115168745B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115458128A (en) * 2022-11-10 2022-12-09 北方健康医疗大数据科技有限公司 Method, device and equipment for generating digital human body image based on key points
CN116091667A (en) * 2023-03-06 2023-05-09 环球数科集团有限公司 Character artistic image generation system based on AIGC technology
CN117150089A (en) * 2023-10-26 2023-12-01 环球数科集团有限公司 Character artistic image changing system based on AIGC technology

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064293A (en) * 2018-08-09 2018-12-21 平安科技(深圳)有限公司 Method of Commodity Recommendation, device, computer equipment and storage medium
JP2018206025A (en) * 2017-06-02 2018-12-27 キヤノン株式会社 Information processing device and information processing method
US10664903B1 (en) * 2017-04-27 2020-05-26 Amazon Technologies, Inc. Assessing clothing style and fit using 3D models of customers
US20200306640A1 (en) * 2019-03-27 2020-10-01 Electronic Arts Inc. Virtual character generation from image or video data
US20220108358A1 (en) * 2020-10-07 2022-04-07 Roblox Corporation Providing personalized recommendations of game items
CN114721572A (en) * 2022-03-01 2022-07-08 河北雄安三千科技有限责任公司 Visual display method, device, medium, equipment and system for dream


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yan, Zhou et al.: "Collaborative Filtering Recommendation Algorithm Based on User and Item Combination", 《电脑知识与技术》 (Computer Knowledge and Technology) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115458128A (en) * 2022-11-10 2022-12-09 北方健康医疗大数据科技有限公司 Method, device and equipment for generating digital human body image based on key points
CN115458128B (en) * 2022-11-10 2023-03-24 北方健康医疗大数据科技有限公司 Method, device and equipment for generating digital human body image based on key points
CN116091667A (en) * 2023-03-06 2023-05-09 环球数科集团有限公司 Character artistic image generation system based on AIGC technology
CN116091667B (en) * 2023-03-06 2023-06-20 环球数科集团有限公司 Character artistic image generation system based on AIGC technology
CN117150089A (en) * 2023-10-26 2023-12-01 环球数科集团有限公司 Character artistic image changing system based on AIGC technology
CN117150089B (en) * 2023-10-26 2023-12-22 环球数科集团有限公司 Character artistic image changing system based on AIGC technology

Also Published As

Publication number Publication date
CN115168745B (en) 2022-11-22

Similar Documents

Publication Publication Date Title
CN115168745B (en) Virtual character image recreating method and system based on image technology
US20190057723A1 (en) Visualization of image themes based on image content
Kim et al. Application of interactive genetic algorithm to fashion design
CN101055647B (en) Method and device for processing image
Adithya et al. Hand gestures for emergency situations: A video dataset based on words from Indian sign language
CN105118082A (en) Personalized video generation method and system
CN110110118A (en) Dressing recommended method, device, storage medium and mobile terminal
CN113393550B (en) Fashion garment design synthesis method guided by postures and textures
Liu et al. Texture-aware emotional color transfer between images
Xu et al. Saliency prediction on omnidirectional image with generative adversarial imitation learning
KR20180093632A (en) Method and apparatus of recognizing facial expression base on multi-modal
CN115905593A (en) Method and system for recommending existing clothes to be worn and put on based on current season style
CN115130493A (en) Face deformation recommendation method, device, equipment and medium based on image recognition
Zhang et al. Multi-view dimensionality reduction via canonical random correlation analysis
CN116701706B (en) Data processing method, device, equipment and medium based on artificial intelligence
CN113705301A (en) Image processing method and device
CN109359543B (en) Portrait retrieval method and device based on skeletonization
Liu et al. A3GAN: An attribute-aware attentive generative adversarial network for face aging
KR20000063344A (en) Facial Caricaturing method
He et al. Fa-gans: Facial attractiveness enhancement with generative adversarial networks on frontal faces
Nejati et al. A study on recognizing non-artistic face sketches
Valstar Timing is everything: A spatio-temporal approach to the analysis of facial actions
Kim et al. Fashion design using interactive genetic algorithm with knowledge-based encoding
Zhou et al. ULME-GAN: a generative adversarial network for micro-expression sequence generation
Kumar et al. Appearance based feature extraction and selection methods for facial expression recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant