CN115168745B - Virtual character image recreating method and system based on image technology - Google Patents

Virtual character image recreating method and system based on image technology

Info

Publication number
CN115168745B
CN115168745B (application CN202211069029.6A)
Authority
CN
China
Prior art keywords
image
user
character
matrix
virtual character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211069029.6A
Other languages
Chinese (zh)
Other versions
CN115168745A (en)
Inventor
张卫平
黄筱雨
丁烨
张思琪
张伟
李显阔
李蕙男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Global Digital Group Co Ltd
Original Assignee
Global Digital Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Global Digital Group Co Ltd
Priority to CN202211069029.6A
Publication of CN115168745A
Application granted
Publication of CN115168745B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/953: Querying, e.g. by the use of web search engines
    • G06F16/9536: Search customisation based on social or collaborative filtering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53: Querying
    • G06F16/535: Filtering based on additional data, e.g. user or group profiles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53: Querying
    • G06F16/538: Presentation of query results
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/953: Querying, e.g. by the use of web search engines
    • G06F16/9538: Presentation of query results
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/20: 3D [Three Dimensional] animation
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761: Proximity, similarity or dissimilarity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method and system for recreating a virtual character image based on image technology. The recreating method takes a reference template of a virtual character as the original template for creating a new image; it analyzes reference image media provided by a user through image technology to obtain at least one image characteristic of the reference character, generates a first image set from those characteristics, and obtains a set of scores given by a plurality of users to the virtual character images in the first image set. A recommendation algorithm then produces a recommendation ranking over a list of virtual character images recommended for the current user; finally, combined with the keywords of the content to be displayed, a recreated character image is generated, given to the virtual character, and displayed to the current user.

Description

Virtual character image recreating method and system based on image technology
Technical Field
The invention relates to the technical field of electronic digital data processing, and in particular to a method and system for recreating virtual character images based on image technology.
Background
An image is the overall impression and evaluation that the public forms of an individual; it is the comprehensive reflection of a character's intrinsic qualities and outward appearance. With the rapid development of digital virtual technology, programs and application scenes presented by virtual humans are increasingly used across society, and people's expectations for the images of virtual characters are rising accordingly.
Existing virtual character images are generally generated by a program in a fixed form; even when a variable image-setting algorithm is included, the image cannot be flexibly changed according to the theme to be displayed and the personalized requirements of the user, so existing virtual character images have obvious limitations.
Referring to related published technical solutions: CN114303116A proposes a method for controlling virtual roles with a multi-modal model; internal models can be combined so that the virtual character makes credible responses such as emotions, and changes in multiple factors, including the virtual character's personality, are produced through the various interactions generated by users. US20200027271A1 proposes analyzing the posture and actions of the current virtual human against a database of two-dimensional and three-dimensional images to optimize the virtual human's subsequent images and related actions. KR102223444B1 proposes a display system for fashion matching using a virtual human: it captures a real user's physical data and generates a virtual character, so that the user can change elements including clothes, makeup, and posture on the virtual character within the virtual system and thereby examine the matching effect of a series of image elements. All of the above solutions address creating and generating virtual character images; however, methods and systems for adaptive image optimization remain scarcely covered.
The foregoing discussion of the background art is intended to facilitate an understanding of the present invention only. This discussion is not an acknowledgement or admission that any of the material referred to is part of the common general knowledge.
Disclosure of Invention
The invention aims to provide a method and system for recreating a virtual character image based on image technology. The recreating method takes a reference template of a virtual character as the original template for creating a new image; it analyzes reference image media provided by a user through image technology to obtain at least one image characteristic of the reference character, and generates a first image set from those characteristics, which is used to obtain a set of scores given by a plurality of users to the virtual character images in the first image set. A recommendation algorithm then produces a recommendation ranking over a list of virtual character images recommended for the current user; finally, combined with the keywords of the content to be displayed, a recreated character image is generated, given to the virtual character, and displayed to the current user.
The invention adopts the following technical scheme:
a virtual character image recreating system based on image technology is characterized in that the recreating system comprises an acquisition module, an image processing module, a database, an image generation module, an interaction module and a calculation module; wherein
The acquisition module is used for acquiring a reference template of a virtual character from the database, and for obtaining image materials matching specified characteristics from the image element library;
the image processing module is used for analyzing the reference image media provided by a user and acquiring a number of characteristics of the characters depicted in those media;
the database is used for storing a reference template of a virtual character and the image element library;
the image generating module is used for generating the image of the virtual character, and comprises the steps of calling specified image elements from the image element library and synthesizing the image elements into an image of the virtual character so as to generate a new image of the virtual character;
the interaction module is used for interacting with the user, receiving the user's selections and scores of virtual character images, and presenting virtual characters to the user;
the calculating module is used for calculating the similarity of a plurality of users;
the recreating system implements a virtual character image recreating method based on image technology; the recreating method comprises the following steps:
s100: acquiring a reference template of the image of the virtual character;
s200: generating a first image set on the basis of the reference template, combined with reference image media provided by a user; the first image set comprises the first images of n virtual characters; m images are randomly extracted from the first image set and randomly divided into h batches to be displayed to p users;
s300: the p users select at least one image of an interesting virtual character from the virtual character images displayed in each batch; the finally selected images of q virtual characters are defined as q second images and form a second image set;
s400: analyzing, with a collaborative filtering method, the second image set formed by the p users' selections; generating the current user's predicted score list for the n virtual characters in the first image set, and generating a recommended image list from the predicted score list;
s500: according to the keywords of the content to be displayed, image elements related to the keywords are obtained and synthesized with the virtual characters in the recommended image list, thereby obtaining the final virtual character image;
preferably, in step S200, the reference image media provided by the user include a picture or a video clip of the reference character; one or more image characteristics of the reference character in the reference image media are analyzed using image technology; based on these characteristics, image materials are retrieved from the image element library and synthesized onto the reference template, generating the first images of n virtual characters;
preferably, in step S300, the user also scores each selected second image;
preferably, in step S400, the following steps are further included:
s401: generating an original scoring matrix Y based on the scores of the p users on the second images of the q virtual characters;
s402: extracting a user-feature matrix U by singular value decomposition of the original scoring matrix.
Preferably, in step S402, singular value decomposition is performed on the original scoring matrix to extract the user-feature matrix U, calculated as follows:
$$Y = U S C^{T} \quad \text{(formula 1)}$$
In formula 1, Y is the original scoring matrix; U is the user-feature matrix, a matrix-vector description of the relation between users and potential features; C is the image-feature matrix, a matrix-vector description of the relation between the virtual characters' images and the potential features; S is a diagonal z×z matrix of singular values after dimensionality reduction, where z is the reduced dimension;
wherein the potential features are element types contained in each image of the virtual character;
preferably, step S402 further includes calculating the similarities of the p users using the user-feature matrix U, the set of calculated similarities forming the user similarity matrix SM; the user similarity matrix SM describes how similar the p users' preferences for the n first images are;
preferably, after obtaining the user similarity matrix SM, the method further includes:
calculating a neighbor set { Neib } of the current user about r based on the user similarity matrix SM and a preset range value r;
predicting the current user's score for each of the n first images using the current user's neighbor set {Neib} about r and the original scoring matrix Y, and generating the current user's predicted score list;
and based on the current user's predicted score list, sorting the n first images in the first image set, the top v images with the highest scores forming the recommended image list.
The beneficial effects obtained by the invention are as follows:
1. the recreating method of the invention analyzes, with image technology, a reference image the user is interested in, thereby obtaining the characteristics of the image the user likes, and generates a plurality of virtual character images based on those characteristics for the user to evaluate;
2. the recreating method of the invention uses a recommendation algorithm to generate a list of images the current user is likely to favor as the candidate library for re-creation, and can regenerate a fresh image suited to the theme content based on the keywords of the theme content to be displayed, so that the new image better fits the theme the virtual character is to present;
3. unlike algorithms that obtain the user's preferred characteristics only after the user has evaluated a great number of virtual character images, the recreating method of the invention applies an optimized analysis algorithm to a scoring matrix of random user evaluations, analyzing user preference and user similarity and predicting the current user's scores for unscored images; it thus obtains the current user's preference attributes for virtual characters while keeping user operations to a minimum and keeping images fresh, greatly improving the accuracy and the computational efficiency of recreating images that fit the user's taste;
4. the recreating system of the invention adopts a modular design with cooperating units, so software and hardware can be flexibly optimized and replaced at a later stage, saving a large amount of later maintenance and upgrade cost.
Drawings
The invention will be further understood from the following description in conjunction with the accompanying drawings. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the embodiments. Like reference numerals designate corresponding parts throughout the different views.
FIG. 1 is a schematic representation of the steps of the inventive re-creation process;
FIG. 2 is a diagram illustrating a first image set screening to obtain a second image set according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a reference template of a virtual character according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an interaction module according to an embodiment of the invention;
fig. 5 is a schematic diagram of the final adjustment of the final virtual character by the user in the embodiment of the present invention.
The drawings illustrate schematically: 400-an interaction module; 410-auxiliary components; 420-a sensor; 430-a display; 440-a processor.
Detailed Description
In order to make the technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to embodiments thereof; it should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. Other systems, methods, and/or features of the present embodiments will become apparent to one with skill in the art upon examination of the following detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the invention, and be protected by the accompanying claims. Additional features of the disclosed embodiments are described in, and will be apparent from, the detailed description below.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by the terms "upper", "lower", "left", "right", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of the description, but not to indicate or imply that the device or component referred to must have a specific orientation.
The first embodiment is as follows:
the inventive method is set forth below in conjunction with the detailed description;
the image of a virtual character has a high degree of freedom, and users weigh the factors of a virtual character's image subjectively and very differently; to satisfy user preference well, current practice generally includes:
(1) the user describes the preferred image characteristics in detail, and the system classifies, quantifies, and recreates a new image; for users with poor descriptive ability or weak subjective awareness, however, this method is inefficient;
(2) generating a large number of virtual character images for the user to judge, thereby obtaining the user's specific preferences; the judgment may be a binary yes/no or a numeric score; however, the user must perform a large number of operations, so implementation efficiency is low;
to address these defects, the invention applies image analysis technology combined with matrix singular value decomposition and a collaborative filtering algorithm, and can predict the user's evaluation of a recreated virtual character's new image from the user's judgments of a limited number of virtual character images;
a virtual character image recreating system based on image technology comprises an acquisition module, an image processing module, a database, an image generation module, an interaction module and a calculation module; wherein
The acquisition module is used for acquiring a reference template of a virtual character from the database, and for obtaining image materials matching specified characteristics from the image element library;
the image processing module is used for analyzing the reference image media provided by a user and acquiring a number of characteristics of the characters depicted in those media;
the database is used for storing a reference template of the virtual character and the image element library;
the image generating module is used for generating the image of the virtual character, and comprises the steps of calling specified image elements from the image element library and synthesizing the image elements into an image of the virtual character so as to generate a new image of the virtual character;
the interaction module is used for interacting with the user, receiving the user's selections and scores of virtual character images, and presenting virtual characters to the user;
the calculating module is used for calculating the similarity of a plurality of users;
the recreating system implements a virtual character image recreating method based on image technology; the recreating method comprises the following steps:
s100: acquiring a reference template of the image of the virtual character;
s200: generating a first image set on the basis of the reference template, combined with reference image media provided by a user; the first image set comprises the first images of n virtual characters; m images are randomly extracted from the first image set and randomly divided into h batches to be displayed to p users;
s300: the p users select at least one image of an interesting virtual character from the virtual character images displayed in each batch; the finally selected images of q virtual characters are defined as q second images and form a second image set;
s400: analyzing, with a collaborative filtering method, the second image set formed by the p users' selections; generating the current user's predicted score list for the n virtual characters in the first image set, and generating a recommended image list from the predicted score list;
s500: according to the keywords of the content to be displayed, acquiring image elements related to the keywords and synthesizing them with the virtual characters in the recommended image list to obtain the final virtual character image;
preferably, in step S200, the reference image media provided by the user include a picture or a video clip of the reference character; one or more image characteristics of the reference character in the reference image media are analyzed using image technology; based on these characteristics, image materials are retrieved from the image element library and synthesized onto the reference template, generating the first images of n virtual characters;
preferably, in step S300, the user also scores each selected second image;
preferably, in step S400, the following steps are further included:
s401: generating an original scoring matrix Y based on the scoring of the p users on the second images of the q virtual characters;
s402: extracting a user-feature matrix U by singular value decomposition of the original scoring matrix.
Preferably, in step S402, singular value decomposition is performed on the original scoring matrix to extract the user-feature matrix U, calculated as follows:
$$Y = U S C^{T} \quad \text{(formula 1)}$$
In formula 1, Y is the original scoring matrix; U is the user-feature matrix, a matrix-vector description of the relation between users and potential features; C is the image-feature matrix, a matrix-vector description of the relation between the virtual characters' images and the potential features; S is a diagonal z×z matrix of singular values after dimensionality reduction, where z is the reduced dimension;
wherein the potential features are element types contained in each avatar of the virtual character;
preferably, step S402 further includes calculating the similarities of the p users using the user-feature matrix U, the set of calculated similarities forming the user similarity matrix SM; the user similarity matrix SM describes how similar the p users' preferences for the n first images are;
preferably, after obtaining the user similarity matrix SM, the method further includes:
calculating a neighbor set { Neib } of the current user about r based on the user similarity matrix SM and a preset range value r;
predicting the current user's score for each of the n first images using the current user's neighbor set {Neib} about r and the original scoring matrix Y, and generating the current user's predicted score list;
based on the current user's predicted score list, sorting the n first images in the first image set, the top v images with the highest scores forming the recommended image list;
in step S100, the reference template of the virtual character's image refers to an image having only basic elements, as shown in fig. 3; for example, an image with only a basic skeleton and no attached muscle-contour design, or a three-dimensional design with only wireframes and no coloring, or an image consisting only of several character modules such as head, torso, and limbs without a complete overall composition, or an image with only the character's body and no decorative design such as clothing, hairstyle, or skin tone;
the reference template exists so that the virtual character can be synthesized with other image elements into a new image; therefore, the elements that the reference template must at least contain, or must not contain, can be set according to the actual application requirements of the virtual character;
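By way of illustration only, the following Python sketch shows one possible data structure for such a reference template; the class name, fields, and example values are assumptions of this sketch and are not prescribed by the invention:

```python
from dataclasses import dataclass, field

@dataclass
class ReferenceTemplate:
    """A virtual character reduced to basic elements (cf. fig. 3)."""
    skeleton: str                                              # e.g. an uncolored wireframe rig
    modules: list[str] = field(default_factory=lambda: ["head", "torso", "limbs"])
    required_elements: set[str] = field(default_factory=set)   # must appear after synthesis
    excluded_elements: set[str] = field(default_factory=set)   # must never be synthesized in

    def accepts(self, element_type: str) -> bool:
        """An image element may be synthesized onto this template unless excluded."""
        return element_type not in self.excluded_elements

# Example: a base template for a news-broadcast character that forbids exaggerated props
template = ReferenceTemplate(skeleton="humanoid-wireframe",
                             excluded_elements={"fantasy-wings", "clown-makeup"})
```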
in step S200, the user, including the virtual character's controller or designer, can provide an instructive direction or theme for the subsequent image re-creation by supplying several reference media to the recreating system. People frequently encounter large numbers of appealing image-design elements through many channels: for example, image elements of the clothing of China's Tang dynasty, or image elements on a beachwear theme. However, most people without image-design training cannot accurately describe the concrete appearance of their favorite elements, because those elements may carry many details of color, pattern, and shape;
therefore, by using the related image technology, a large amount of character image information recorded in the reference medium can be acquired; in some embodiments, the image analysis step for the character representation in the reference media comprises:
(1) Preprocessing the reference media, including adjusting the contrast and sharpness of pictures, splitting videos into frames, and cropping images, so as to decompose the reference media into a number of images and obtain better source data for image-characteristic analysis; when the reference media contain several characters, the reference character is the character specified by the user, and the user may likewise specify which image elements of that character are to be referenced;
(2) Describing the images digitally and extracting the characteristics of image elements; many algorithms exist for describing images, to name a few: SIFT descriptors, fingerprint functions, bundling-features algorithms, hash functions, and the like; different algorithms can be chosen for different images and image elements, such as extracting image characteristics from the local N-th-order moments of the image;
(3) Encoding the image-characteristic information and encoding the mass of images into a lookup table; images of higher resolution are pooled and down-sampled to reduce the amount of computation before their characteristics are extracted and encoded;
(4) Similarity matching: using the encoded value of an image and the search-engine data of the database's image element library, computing global or local similarity; a threshold is set according to the required robustness, and image elements highly similar to the target characteristics or elements are then cached;
(5) Combining or decomposing these image elements in preparation for synthesis with the reference template;
the algorithms and theories involved in the above image-processing steps are well known to and applied by persons skilled in the relevant image-processing field, and are not detailed here;
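As one minimal, hedged example of the "fingerprint function" family named in step (2), the sketch below computes a difference hash with Pillow and a Hamming-distance similarity for the threshold matching of step (4); the patent does not prescribe this particular algorithm, and the threshold value is an assumption:

```python
from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    """Difference hash: one illustrative image fingerprint, not the patent's mandated one."""
    img = Image.open(path).convert("L").resize((size + 1, size))  # grayscale, 9x8 pixels
    px = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = px[row * (size + 1) + col]
            right = px[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)   # encode the horizontal gradient direction
    return bits

def similarity(hash_a: int, hash_b: int, size: int = 8) -> float:
    """Global similarity in [0, 1]; threshold it for the cache-recording of step (4)."""
    hamming = bin(hash_a ^ hash_b).count("1")
    return 1.0 - hamming / (size * size)

# e.g. cache candidate elements whose similarity to the target exceeds an assumed threshold:
# if similarity(dhash("target.png"), dhash("candidate.png")) > 0.85: cache(candidate)
```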
for example, in one implementation, the user provides an image of a Tang-dynasty woman as the reference image medium; through image analysis and processing, a large number of image elements of Tang-dynasty clothing, makeup, jewelry, postures, headwear, and so on can be called from the image element library; these image elements are synthesized into the reference template in random combinations, generating a large number of character images as the first images and forming the first image set;
further, as shown in fig. 2, the first image set is randomly distributed to obtain scores of p users;
as illustrated by the following table, where p = 3 and n = 8: under the random allocation mechanism of step S200, the p users do not score all n images in the first image set, which is why the subsequent steps S300 and S400 are needed; to balance the efficiency and accuracy of the algorithm, the number of extracted images m and the number of batches h must be set reasonably by technicians for the specific application, and no fixed rule is given here;
user scoring Image a Image b Image c Image d Image e Image f Image g Image h
User A 4 8 7 5
User B 2 3 6 8
User C 7 3 5 8 8
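For illustration, the table above can be held as a sparse scoring matrix with NaN marking unscored cells; since the table does not preserve which image received which score, the column placement below is an assumption:

```python
import numpy as np

# p = 3 users, n = 8 first images (a..h); np.nan = image never scored by that user.
# Column placement of each score is hypothetical; only the score lists come from the table.
Y = np.array([
    [4,      8,      np.nan, 7,      np.nan, 5,      np.nan, np.nan],  # User A
    [np.nan, 2,      3,      np.nan, 6,      np.nan, 8,      np.nan],  # User B
    [7,      np.nan, 3,      5,      np.nan, np.nan, 8,      8     ],  # User C
])
print(f"missing fraction: {np.isnan(Y).mean():.2f}")  # the sparsity problem discussed below
```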
Further, since the p users' scores of the n first images are partially missing, steps S300 and S400 address the following problems:
(1) Extracting the users' potential features:
the potential features are the kinds of elements contained in a virtual character's image, for example clothing types such as western suits, evening dress, or casual wear, or styles such as Chinese, Western, or Indian; the same applies to makeup types;
extracting a user's potential features is the precondition for analyzing that user's preferences; accurate and complete extraction of individual potential features greatly improves the final recommendation precision and helps improve user experience and satisfaction. The prior art commonly uses a label-based slope-one algorithm, which compensates to some extent for the lack of personalized reference data; but because scoring a set of images is a subjective process, labels usually cannot fully represent the main characteristics of a user or an item, whereas the potential-feature matrix generated by SVD decomposition represents the users' potential-feature vectors well;
(2) Analyzing user similarity from user features:
analysis of user similarity is widely applied in collaborative-filtering recommendation systems; user similarity is generally used to identify users with the same preferences and characteristics, the same expressed behaviors, or potential common interests. By extracting the group of users with similar potential features and running the user-feature analysis and recommendation algorithm within that group, the influence of dissimilar users on the algorithm can be effectively avoided, improving both recommendation precision and computation speed;
(3) The scoring matrix being too sparse:
a sparse scoring matrix is one of the long-standing difficulties in user-preference recommendation, since user-feature analysis and recommendation algorithms perform better on dense matrices; one effective remedy is to interpolate the missing values of the matrix, another is to reduce the matrix's dimensionality; in this embodiment, a suitable decomposition dimension is computed from the data obtained in step S300, which handles the matrix-sparsity problem well;
further, in formula 1, the decomposition is a singular value decomposition (SVD): Y is the original scoring matrix; U is the user-feature matrix, describing the relation between users and potential features, with each element of U representing a user's degree of preference for a potential feature; C is the image-feature matrix, describing the relation between the virtual characters' images and the potential features; S is the diagonal z×z matrix of singular values after dimensionality reduction, where z is the reduced dimension; in this embodiment, an optimal value of z can be found by trial, trading dimensionality-reduction time against the final recommendation accuracy; after the decomposition, the users' scoring of the potential features present in the images can be obtained;
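A minimal numpy sketch of the formula-1 decomposition, reusing the matrix Y above; mean-filling the missing scores is one of the interpolation options mentioned earlier and is an assumption of this sketch, as is the trial value z = 2:

```python
import numpy as np

def user_feature_matrix(Y: np.ndarray, z: int) -> np.ndarray:
    """Extract the user-feature matrix U of formula 1 via truncated SVD."""
    # Fill each user's unscored cells with that user's mean score (assumed interpolation).
    filled = np.where(np.isnan(Y), np.nanmean(Y, axis=1, keepdims=True), Y)
    U, s, Ct = np.linalg.svd(filled, full_matrices=False)   # Y = U S C^T
    return U[:, :z] * s[:z]   # keep z latent features, scaled by their singular values

U = user_feature_matrix(Y, z=2)  # z tuned by trial: reduction time vs. recommendation accuracy
```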
further, in step S402, the similarities of the p users and the user similarity matrix SM are calculated from the user-feature matrix U; the user similarity matrix SM is calculated as follows:
first, the user similarity sim(a,b) is calculated, where a and b denote any two distinct potential features:
$$sim(a,b) = \frac{\sum_{u \in U_{ab}} (u_a - \bar{u}_a)(u_b - \bar{u}_b)}{\sqrt{\sum_{u \in U_{ab}} (u_a - \bar{u}_a)^2}\,\sqrt{\sum_{u \in U_{ab}} (u_b - \bar{u}_b)^2}} \quad \text{(formula 2)}$$
In formula 2, $U_{ab}$ is the set of users having a preference for both potential features a and b; $\bar{u}_a$ and $\bar{u}_b$ are the users' average scores of potential features a and b; $u_a$ and $u_b$ are user u's scores of potential features a and b, where user u is one of the p users;
through this calculation, user similarity is computed for every pair of distinct potential features, and the set of all results forms the user similarity matrix SM; each element of the user similarity matrix SM is the similarity of a pair of users with respect to a potential feature;
further, in one embodiment, a range value r is set; based on the user similarity matrix SM, the preferences of any two users for any potential feature are compared, and the r users most similar to the current user are selected to form the neighbor set {Neib};
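Formula 2 is a Pearson correlation. The text applies it both to pairs of potential features and, through SM, to finding the r most similar users; the sketch below takes the second reading and correlates the rows of the user-feature matrix U (one row per user) to build the neighbor set {Neib}. This row-wise interpretation is an assumption of the sketch:

```python
import numpy as np

def pearson_sim(x: np.ndarray, y: np.ndarray) -> float:
    """Formula 2 as a Pearson correlation between two preference vectors."""
    xc, yc = x - x.mean(), y - y.mean()
    denom = np.sqrt((xc ** 2).sum() * (yc ** 2).sum())
    return float((xc * yc).sum() / denom) if denom else 0.0

def neighbor_set(U: np.ndarray, current: int, r: int) -> list[int]:
    """{Neib}: indices of the r users whose latent-feature rows best match the current user."""
    sims = [(pearson_sim(U[current], U[u]), u) for u in range(len(U)) if u != current]
    return [u for _, u in sorted(sims, reverse=True)[:r]]

neib = neighbor_set(U, current=0, r=2)  # r is the preset range value
```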
further, the current user's score for each of the n first images is predicted from the neighbor set {Neib} and the original scoring matrix Y; since similar users show similar degrees of preference for similar potential features, a user's score for any two images can be estimated from the scores similar users gave those images, by analyzing the degree of difference in potential features between the two images; this method is used to supplement the unscored images in the original scoring matrix Y from the scores any one user has given;
a preferred implementation first calculates the degree of difference between two images as follows:
$$dev_{e,f} = \frac{\sum_{u \in S_{e,f}} (u_e - u_f)}{card(S_{e,f})} \quad \text{(formula 3)}$$
In formula 3, $dev_{e,f}$ is the difference index of image e and image f, where image e and image f are any two first images in the first image set; $S_{e,f}$ is the set of users who have scored both image e and image f, with $S_{e,f} \subseteq \{Neib\}$; card() is the number of elements in a set, so $card(S_{e,f})$ counts the users in $S_{e,f}$; $u_e$ and $u_f$ are user u's scores of image e and image f respectively;
further, a prediction score for each image is obtained using the following formula:
$$Pd_{w,e} = \frac{\sum_{i \in S(w)} (dev_{e,i} + w_i)}{card(S(w))} \quad \text{(formula 4)}$$
In formula 4, w is a user who has not scored image e; $Pd_{w,e}$ is the predicted score of user w for image e; S(w) is the set of images in the first image set that user w has scored; i is the i-th image in S(w); $w_i$ is user w's score of the i-th image; $dev_{e,i}$ is the difference index of image e and the i-th image; card(S(w)) counts the first images that user w has scored.
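A sketch of formulas 3 and 4 in the slope-one style they describe, reusing Y and neib from the sketches above; returning a deviation of 0.0 when no neighbor scored both images is an assumption made for robustness:

```python
import numpy as np

def difference_index(Y: np.ndarray, e: int, f: int, neib: list[int]) -> float:
    """Formula 3: mean score difference dev_{e,f} over neighbors who scored both e and f."""
    both = [u for u in neib if not (np.isnan(Y[u, e]) or np.isnan(Y[u, f]))]
    if not both:
        return 0.0  # assumed fallback: no shared scorers, no measurable deviation
    return float(np.mean([Y[u, e] - Y[u, f] for u in both]))

def predict_score(Y: np.ndarray, w: int, e: int, neib: list[int]) -> float:
    """Formula 4: Pd_{w,e}, averaging (dev_{e,i} + w_i) over the images i scored by user w."""
    scored = [i for i in range(Y.shape[1]) if not np.isnan(Y[w, i])]
    return float(np.mean([difference_index(Y, e, i, neib) + Y[w, i] for i in scored]))

# Predicted scores for user A's (row 0) unscored images; ranking these yields the
# predicted score list and, after sorting, the top-v recommended image list.
preds = {e: predict_score(Y, 0, e, neib) for e in range(Y.shape[1]) if np.isnan(Y[0, e])}
```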
Example two:
this embodiment should be understood to include at least all of the features of any of the foregoing embodiments and further modifications thereon;
further, the recommended image list of virtual characters obtained through the above steps contains a number of character images based on the user's personal preferences, but not yet reasonably adjusted for the virtual human's actual application scene;
for example, for a news broadcast or an entertainment program presented by an avatar, the desired image should be biased accordingly: a virtual character for a news broadcast generally requires a modest and serious image, while the image of a virtual character for an entertainment program can be more exaggerated or playful;
as mentioned above, the main tone of the image is defined with several keywords, such as "serious", and these keywords can serve as the starting point of the image design;
therefore, in some preferred embodiments, the image generation module needs to obtain one or more keywords, perform semantic analysis on them to generate a number of feature labels, and then obtain from the database at least one image element meeting these feature criteria; a minimal sketch of this step follows the examples below;
a method of obtaining keywords, in some embodiments, includes active input by a user through the interaction module;
in some embodiments, a related system running the virtual character performs topic analysis according to the display content to be implemented by the virtual character, so as to obtain keywords;
in some embodiments, the recreating system obtains currently much-discussed topics from the internet as keywords, drawing on recent current affairs, weather, social-media popularity information, and the like.
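A minimal stand-in for the keyword-to-element step referenced above; the patent does not specify the semantic-analysis model, so a plain lookup table is used, and every keyword, label, and element id here is hypothetical:

```python
# Hypothetical keyword -> feature-label mapping standing in for semantic analysis.
KEYWORD_LABELS = {
    "news":          {"modest", "serious", "dark-suit"},
    "entertainment": {"exaggerated", "bright-colors", "playful"},
}

def feature_labels(keywords: list[str]) -> set[str]:
    labels: set[str] = set()
    for kw in keywords:
        labels |= KEYWORD_LABELS.get(kw.lower(), set())
    return labels

def select_elements(element_library: dict[str, set[str]], labels: set[str]) -> list[str]:
    """Return ids of image elements whose tags meet at least one required feature label."""
    return [eid for eid, tags in element_library.items() if tags & labels]

library = {"suit-03": {"dark-suit", "modest"}, "wig-12": {"bright-colors"}}
print(select_elements(library, feature_labels(["news"])))  # -> ['suit-03']
```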
Example three:
this embodiment should be understood to include at least all of the features of any of the foregoing embodiments and further modifications thereon;
the user can receive the finally generated virtual character through the interaction module and browse, inspect, and modify it;
FIG. 4 is a diagram illustrating an exemplary composition of the interactive module; as shown, the interactive module 400 may include auxiliary components 410, sensors 420, a display 430, and a processor 440;
optionally, the auxiliary component 410 lets the user hold, mount, or wear the interaction module; in some embodiments, the auxiliary component 410 may be worn on the user's face or head; by way of example only, the auxiliary component 410 may include glasses, helmets, goggles, masks, contact lenses, and the like, or any combination thereof;
optionally, the sensor 420 may be configured to collect user information related to the user and the environmental conditions of the interaction module; in some embodiments, the user information may include the user's physiological information and information input by the user; for example, the user information may include the user's heart rate, blood pressure, brain activity, biometric data, facial images, expressions, actions performed, or audio uttered; as another example, the user information may include information entered through an input device, including a keyboard, mouse, microphone, etc., or any combination thereof; in some embodiments, the environmental conditions may be data about the ambient environment in which the user uses the interaction module, such as ambient temperature, humidity, pressure, the user's geographic location, and the user's orientation and head position;
in some embodiments, the sensor 420 may include an image sensor (e.g., a camera), an audio sensor (e.g., a microphone), a location sensor, a humidity sensor, a biosensor, an ambient light sensor, or the like, or any combination thereof; for example only, the image sensor may be configured to collect an image of a user's face and the microphone may be configured to collect an audio message emitted by the user; further, sensor 420 includes sending the collected information to processor 440;
optionally, display 430 is used to display information; the display 430 may display a virtual character, may display one or more text messages or pattern messages, and may enable interaction between the virtual character and the user through the messages; in some embodiments, display 430 may be a screen of an entity; in some embodiments, display 430 may be a transparent display, for example in a visor or face mask of a helmet; in some embodiments, display 430 may be a different display lens than a visor or mask of a helmet;
optionally, processor 440 includes logic for supporting the operation of the final avatar generated from the avatar generation module; processor 440 may run one or more applications and run the avatar in a suitable manner, including running the necessary software environment, data links; running a suitable display interface for fully displaying the virtual character, such as a three-dimensional customized environment which can be controlled by a user or an environment for amplifying and displaying the virtual character;
further, the interaction module includes a number of parameters for displaying the final virtual character image, such as color codes, size codes, facial-feature proportions, clothing systems, and makeup systems;
further, the interaction module provides the user with operations for customizing the parameters of the final virtual character image, including selecting a parameter or image part with the mouse, touch, or keyboard, and changing one or more parameters by sliding, rotating, clicking, text input, and the like, so as to optimize the final virtual character image once more;
as shown in fig. 5, by repeatedly adjusting the virtual character's parameters, the user can actively re-optimize the final virtual character, generating a series of variations of the same virtual character.
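The parameter adjustment just described can be sketched as below; the parameter names and values are illustrative assumptions, not the patent's actual parameter codes:

```python
# Illustrative display parameters of a final virtual character image.
avatar_params = {"color_code": "#E8C4A0", "size_code": "M", "feature_ratio": 0.46,
                 "outfit": "suit-03", "makeup": "natural-01"}

def adjust(params: dict, **changes) -> dict:
    """Apply one slider/click/keyboard edit from the interaction module,
    returning a new variant and leaving the original untouched."""
    variant = dict(params)
    variant.update(changes)
    return variant

# A series of variations derived from the same virtual character (cf. fig. 5).
variants = [adjust(avatar_params, outfit="casual-07"),
            adjust(avatar_params, makeup="stage-02", feature_ratio=0.48)]
```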
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Although the invention has been described above with reference to various embodiments, it should be understood that many changes and modifications can be made without departing from the scope of the invention. That is, the methods, systems, and devices discussed above are examples. Various configurations may omit, substitute, or add various procedures or components as appropriate. For example, in alternative configurations, the methods may be performed in an order different than that described, and/or various components may be added, omitted, and/or combined. Moreover, features described with respect to certain configurations may be combined in various other configurations, as different aspects and elements of the configurations may be combined in a similar manner. Further, elements therein may be updated as technology evolves, i.e., many elements are examples and do not limit the scope of the disclosure or claims.
Specific details are given in the description to provide a thorough understanding of example configurations, including implementations. However, configurations may be practiced without these specific details, for example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configurations. This description provides example configurations only, and does not limit the scope, applicability, or configuration of the claims. Rather, the foregoing description of the configurations will provide those skilled in the art with an enabling description for implementing the described techniques. Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure.
In conclusion, it is intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that these examples are illustrative only and are not intended to limit the scope of the invention. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (3)

1. A virtual character image recreating system based on image technology is characterized in that the recreating system comprises an acquisition module, an image processing module, a database, an image generation module, an interaction module and a calculation module; wherein
The acquisition module is used for acquiring a reference template of a virtual character from a database; and a material for obtaining the related image of the specified characteristics from the image element library;
the image processing module is used for analyzing reference image media provided by a user and acquiring a plurality of characteristics of image characters in the reference image media;
the database is used for storing a reference template of a virtual character and the image element library;
the image generation module is used for generating the image of the virtual character, and comprises the steps of calling a specified image element from the image element library and synthesizing the image element into an virtual character image so as to generate a new virtual character image;
the interaction module is used for interacting with a user and receiving the selection and the grading of the virtual character image by the user; the method also comprises the steps of showing the virtual character to the user;
the calculating module is used for calculating the similarity of a plurality of users;
the recreating system comprises a virtual character recreating method based on image technology; the re-creation method comprises the following steps:
s100: acquiring a reference template of the image of the virtual character;
s200: generating a first image set by combining a reference image medium provided by a user on the basis of the reference template; the first image set comprises first images of n virtual characters; randomly extracting m images in the first image set, and randomly dividing the m images into h batches to be displayed to p users;
s300: the p users select at least one image of an interesting virtual character from the virtual character images displayed in each batch; the finally selected images of the q virtual characters are defined as q second images and form a second image set;
s400: analyzing the second image set formed by p users through selection by adopting a collaborative filtering method, generating a prediction scoring list of the current user for n virtual characters in the first image set, and generating a recommended image list through the prediction scoring list;
s500: according to the keywords of the content to be displayed, acquiring image elements related to the keywords, and synthesizing the image elements with the virtual character in the recommended character list to obtain a final virtual character;
in step S200, the reference character media provided by the user includes a picture of the reference character and a video clip; analyzing one or more avatar characteristics of a reference character avatar of the reference avatar media by using image techniques; searching image materials from an image element library and synthesizing the image materials to the reference template based on one or more image characteristics of the reference character image so as to generate first images of n virtual characters;
in step S300, the user is involved in scoring each selected second image;
in step S400, the following steps are further included:
s401: generating an original scoring matrix Y based on the scoring of the p users on the second images of the q virtual characters;
s402: extracting a user-feature matrix U by adopting singular value decomposition based on the original scoring matrix;
in step S402, extracting a user-feature matrix U from the raw scoring matrix by singular value decomposition, which includes the following calculation methods:
$$Y = U S C^{T} \quad \text{(formula 1)}$$
In formula 1, Y is the original scoring matrix; U is the user-feature matrix, a matrix-vector description of the relation between users and potential features; C is the image-feature matrix, a matrix-vector description of the relation between the virtual characters' images and the potential features; S is a diagonal z×z matrix of singular values after dimensionality reduction, where z is the reduced dimension;
wherein the potential features are element types contained in each avatar of the virtual character;
in step S402, the method further includes calculating similarities of p users using the user-feature matrix U, and obtaining a user similarity matrix SM by using the set of calculated similarities of p users; the user similarity matrix SM is used for describing similarity of preference of p users to n first images;
after obtaining the user similarity matrix SM, the method further includes:
based on the user similarity matrix SM and a preset range value r, calculating a neighbor set { Neib } of the current user about r;
predicting the score of the current user on each of the n first characters by using the neighbor set { Neib } of the current user about r and the original score matrix Y, and generating a predicted score list of the current user;
and based on a prediction scoring list of the current user, sequencing n first images in the first image set, and forming the top v images with the highest scores into the recommended image list.
2. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the re-creation method comprised in the re-creation system of claim 1.
3. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which, when being executed by a processor, performs the steps of the re-creation method comprised in the re-creation system as claimed in claim 1.
CN202211069029.6A 2022-09-02 2022-09-02 Virtual character image recreating method and system based on image technology Active CN115168745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211069029.6A CN115168745B (en) 2022-09-02 2022-09-02 Virtual character image recreating method and system based on image technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211069029.6A CN115168745B (en) 2022-09-02 2022-09-02 Virtual character image recreating method and system based on image technology

Publications (2)

Publication Number Publication Date
CN115168745A CN115168745A (en) 2022-10-11
CN115168745B (en) 2022-11-22

Family

ID=83480931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211069029.6A Active CN115168745B (en) 2022-09-02 2022-09-02 Virtual character image recreating method and system based on image technology

Country Status (1)

Country Link
CN (1) CN115168745B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115458128B (en) * 2022-11-10 2023-03-24 北方健康医疗大数据科技有限公司 Method, device and equipment for generating digital human body image based on key points
CN116091667B (en) * 2023-03-06 2023-06-20 环球数科集团有限公司 Character artistic image generation system based on AIGC technology
CN117150089B (en) * 2023-10-26 2023-12-22 环球数科集团有限公司 Character artistic image changing system based on AIGC technology

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064293A (en) * 2018-08-09 2018-12-21 平安科技(深圳)有限公司 Method of Commodity Recommendation, device, computer equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10776861B1 (en) * 2017-04-27 2020-09-15 Amazon Technologies, Inc. Displaying garments on 3D models of customers
JP7042561B2 (en) * 2017-06-02 2022-03-28 キヤノン株式会社 Information processing equipment, information processing method
US10953334B2 (en) * 2019-03-27 2021-03-23 Electronic Arts Inc. Virtual character generation from image or video data
US20220108358A1 (en) * 2020-10-07 2022-04-07 Roblox Corporation Providing personalized recommendations of game items
CN114721572A (en) * 2022-03-01 2022-07-08 河北雄安三千科技有限责任公司 Visual display method, device, medium, equipment and system for dream

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064293A (en) * 2018-08-09 2018-12-21 平安科技(深圳)有限公司 Method of Commodity Recommendation, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN115168745A (en) 2022-10-11

Similar Documents

Publication Publication Date Title
CN115168745B (en) Virtual character image recreating method and system based on image technology
CN105426850B (en) Associated information pushing device and method based on face recognition
Kim et al. Application of interactive genetic algorithm to fashion design
CN101055647B (en) Method and device for processing image
Adithya et al. Hand gestures for emergency situations: A video dataset based on words from Indian sign language
CN110021061A (en) Collocation model building method, dress ornament recommended method, device, medium and terminal
CN105118082A (en) Personalized video generation method and system
CN110110118A (en) Dressing recommended method, device, storage medium and mobile terminal
KR101893554B1 (en) Method and apparatus of recognizing facial expression base on multi-modal
CN108182232A (en) Personage's methods of exhibiting, electronic equipment and computer storage media based on e-book
Davis et al. An expressive three-mode principal components model for gender recognition
CN110351580B (en) Television program topic recommendation method and system based on non-negative matrix factorization
Xu et al. Saliency prediction on omnidirectional image with generative adversarial imitation learning
CN113393550A (en) Fashion garment design synthesis method guided by postures and textures
CN115905593A (en) Method and system for recommending existing clothes to be worn and put on based on current season style
Cui et al. Multi-source learning for skeleton-based action recognition using deep LSTM networks
Zhang et al. Multi-view dimensionality reduction via canonical random correlation analysis
Kwolek et al. Recognition of JSL fingerspelling using deep convolutional neural networks
CN110413818A (en) Paster recommended method, device, computer readable storage medium and computer equipment
CN113947798A (en) Background replacing method, device and equipment of application program and storage medium
He et al. FA-GANs: Facial attractiveness enhancement with generative adversarial networks on frontal faces
Nejati et al. A study on recognizing non-artistic face sketches
Valstar Timing is everything: A spatio-temporal approach to the analysis of facial actions
CN115130493A (en) Face deformation recommendation method, device, equipment and medium based on image recognition
CN111768334A (en) Human face cartoon image design method

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant