CN116664727B - Game animation model identification method and processing system - Google Patents

Game animation model identification method and processing system

Info

Publication number
CN116664727B
CN116664727B CN202310930524.XA
Authority
CN
China
Prior art keywords
frame
identified
game image
game
texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310930524.XA
Other languages
Chinese (zh)
Other versions
CN116664727A (en)
Inventor
肖健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen China Mobile Games Network Technology Co ltd
Original Assignee
Shenzhen China Mobile Games Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen China Mobile Games Network Technology Co ltd filed Critical Shenzhen China Mobile Games Network Technology Co ltd
Priority to CN202310930524.XA priority Critical patent/CN116664727B/en
Publication of CN116664727A publication Critical patent/CN116664727A/en
Application granted granted Critical
Publication of CN116664727B publication Critical patent/CN116664727B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/16Image acquisition using multiple overlapping images; Image stitching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/54Extraction of image or video features relating to texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to the technical field of computers, and provides a game animation model identification method and a processing system. The method comprises the following steps: analyzing the game animation file to be identified to obtain each frame of game image to be identified; extracting the feature point cloud to be processed of each frame of game image to be identified; rejecting mismatched point clouds from the feature point cloud to be processed of each frame to obtain the target feature point cloud, and performing point cloud splicing to obtain an object skeleton curve; performing texture recognition on each frame of game image to be identified to obtain its target texture features; and fusing the object skeleton curve of each frame with its target texture features, and inputting the obtained animation model to be matched into a model identification classifier to obtain the game animation model of each frame of game image to be identified. The application can identify an unknown game animation model in a game file from the object skeleton curve and its texture features alone, thereby realizing real-time identification of unknown game animation models.

Description

Game animation model identification method and processing system
Technical Field
The application relates to the technical field of computers, in particular to an animation model identification technology, and specifically relates to a game animation model identification method and a processing system.
Background
The existing game animation model identification method mainly uses a deep neural network (such as a convolutional neural network) to train a recognition model and performs end-to-end classification and recognition of the game animation model through that recognition model. However, the training data set of such a recognition model inevitably lags behind newly released game content, so an unknown game animation model cannot be recognized in real time.
Disclosure of Invention
The embodiment of the application provides a game animation model identification method and a processing system, aiming at realizing the identification of an unknown game animation model in real time.
In a first aspect, an embodiment of the present application provides a game animation model recognition method, including:
analyzing the obtained game animation file to be identified to obtain each frame of game image to be identified of the game animation file to be identified;
inputting each frame of game image to be identified into the feature extraction model to obtain feature point cloud to be processed of each frame of game image to be identified; the feature extraction model is obtained by training each frame of image sample and a corresponding feature point cloud sample;
performing mismatching elimination on the feature point clouds to be processed of each frame of game images to be identified to obtain target feature point clouds of each frame of game images to be identified, and splicing the target feature point clouds of each frame of game images to be identified to obtain an object skeleton curve of each frame of game images to be identified;
Performing texture recognition on each frame of game image to be recognized to obtain target texture characteristics of each frame of game image to be recognized; the target texture features are texture features of each frame of game image to be identified under different rotation angles, different scales and/or different illumination conditions;
fusing object skeleton curves and target texture features of each frame of game image to be identified to obtain an animation model to be matched of each frame of game image to be identified;
and inputting the animation model to be matched of each frame of game image to be identified into a model identification classifier to obtain the game animation model of each frame of game image to be identified.
In one embodiment, the performing texture recognition on each frame of game image to be recognized to obtain the target texture feature of each frame of game image to be recognized includes:
performing texture recognition on each frame of game image to be recognized through each texture recognizer of the texture recognizer group to obtain a first texture feature of each frame of game image to be recognized; wherein each texture recognizer is composed of a two-dimensional Gaussian function D_xx and a two-dimensional complex sine function D_yy, and the expression of the texture recognizer is a formula combining D_xx and D_yy (not reproduced in the source text);
Acquiring a texture file from a texture library according to the file type of the game animation file to be identified; wherein the texture file comprises a plurality of second texture features;
And determining the target texture characteristic of each frame of game image to be identified according to the first texture characteristic and the plurality of second texture characteristics of each frame of game image to be identified.
In one embodiment, the determining the target texture feature of each frame of the game image to be identified according to the first texture feature and the plurality of second texture features of each frame of the game image to be identified includes:
calculating the feature distance between the first texture feature and the second texture feature of each frame of game image to be identified, and sorting the feature distances of each frame of game image to be identified in a descending order to obtain the feature distance of each frame of game image to be identified after sorting;
carrying out average value calculation on a first characteristic distance in the characteristic distances after sequencing of each frame of game images to be identified to obtain an average value characteristic distance of each frame of game images to be identified; the first characteristic distances are the preset number of characteristic distances before the sequenced characteristic distances;
performing difference calculation on each second characteristic distance and the average characteristic distance in the characteristic distances of each frame of game image to be identified to obtain a distance difference between each second characteristic distance and the average characteristic distance in the characteristic distances of each frame of game image to be identified; the second characteristic distance is the rest characteristic distance except the first characteristic distance in the ordered characteristic distances;
And determining the target texture feature of each frame of game image to be identified according to the distance difference between each second feature distance and the mean feature distance in the feature distances of each frame of game image to be identified.
In one embodiment, the determining the target texture feature of each frame of the game image to be identified according to the distance difference between each second feature distance and the mean feature distance in the feature distances of each frame of the game image to be identified includes:
determining the target characteristic distance of each frame of game image to be identified by determining the second characteristic distance of which the distance difference value is smaller than or equal to the preset difference value in each frame of game image to be identified;
and eliminating the target feature distance with the largest numerical value and the target feature distance with the smallest numerical value in each frame of game image to be identified, and carrying out average value on the second texture features corresponding to the rest target feature distances to obtain the target texture features of each frame of game image to be identified.
In one embodiment, the fusing the object skeleton curve of each frame of the game image to be identified and the target texture feature thereof to obtain the animation model to be matched of each frame of the game image to be identified includes:
convolving the object skeleton curve of each frame of game image to be identified and the target texture characteristic thereof in the x-axis direction, the y-axis direction and the z-axis direction to obtain a first animation model characteristic of the object skeleton curve of each frame of game image to be identified in the x-axis direction, a second animation model characteristic of each frame of game image to be identified in the y-axis direction and a third animation model characteristic of each frame of game image to be identified in the z-axis direction, a fourth animation model characteristic of the target texture characteristic of each frame of game image to be identified in the x-axis direction, a fifth animation model characteristic of each frame of game image to be identified in the y-axis direction and a sixth animation model characteristic of each frame of game image to be identified in the z-axis direction;
And carrying out feature fusion on the first animation model feature and the fourth animation model feature of each frame of game image to be identified, carrying out feature fusion on the second animation model feature and the fifth animation model feature, and carrying out feature fusion on the third animation model feature and the sixth animation model feature to obtain an animation model to be matched of each frame of game image to be identified.
In one embodiment, the inputting the animation model to be matched of each frame of the game image to be identified into the model identification classifier, to obtain the game animation model of each frame of the game image to be identified, includes:
inputting the animation model to be matched of each frame of game image to be identified into a model identification classifier to obtain an enhanced animation model of each frame of game image to be identified;
determining resolution loss, color loss and texture loss based on differences between the animation model to be matched and the enhanced animation model of each frame of game image to be identified;
and obtaining a game animation model of each frame of game image to be identified based on the resolution loss, the color loss and the texture loss of each frame of game image to be identified.
In one embodiment, the obtaining the game animation model of each frame of the game image to be identified based on the resolution loss, the color loss and the texture loss of each frame of the game image to be identified includes:
If the resolution loss of each frame of game image to be identified is less than or equal to the first preset loss, the color loss is less than or equal to the second preset loss, and the texture loss is less than or equal to the third preset loss, determining the to-be-matched animation model or the enhanced animation model of each frame of game image to be identified as the game animation model of each frame of game image to be identified;
if any one of the resolution loss, the color loss and the texture loss of each frame of game image to be identified is determined to be larger than the preset loss, carrying out average value processing on the animation model to be matched and the enhanced animation model of each frame of game image to be identified, and determining the obtained average value model as the game animation model of each frame of game image to be identified;
if the resolution loss of each frame of game image to be identified is determined to be larger than the first preset loss, the color loss is larger than the second preset loss, and the texture loss is larger than the third preset loss, determining the enhanced animation model of each frame of game image to be identified as the game animation model of each frame of game image to be identified.
In one embodiment, the resolution loss, the color loss and the texture loss of each frame of game image to be identified are each determined based on a corresponding formula (the three formulas appear as images in the original publication and are not reproduced in this text). The symbols in these formulas denote, respectively, the resolution loss, the exponential function, the animation model to be matched, the enhanced animation model, the mean model of the animation model to be matched and the enhanced animation model, the network hyperparameter, the color loss, and the texture loss.
In a second aspect, an embodiment of the present application provides a game animation model recognition system, including:
the analysis module is used for analyzing the acquired game animation file to be identified to obtain each frame of game image to be identified of the game animation file to be identified;
the feature extraction module is used for inputting each frame of game image to be identified into the feature extraction model to obtain feature point cloud to be processed of each frame of game image to be identified; the feature extraction model is obtained by training each frame of image sample and a corresponding feature point cloud sample;
the framework curve extraction module is used for carrying out mismatching elimination on the feature point clouds to be processed of each frame of game image to be identified to obtain target feature point clouds of each frame of game image to be identified, and splicing the target feature point clouds of each frame of game image to be identified to obtain an object framework curve of each frame of game image to be identified;
The texture recognition module is used for carrying out texture recognition on each frame of game image to be recognized to obtain target texture characteristics of each frame of game image to be recognized; the target texture features are texture features of each frame of game image to be identified under different rotation angles, different scales and/or different illumination conditions;
the texture feature fusion module is used for fusing the object skeleton curve of each frame of game image to be identified and the target texture features of the object skeleton curve to obtain an animation model to be matched of each frame of game image to be identified;
the game animation model identification module is used for inputting the animation model to be matched of each frame of game image to be identified into the model identification classifier to obtain the game animation model of each frame of game image to be identified.
In a third aspect, an embodiment of the present application provides an electronic device, where the electronic device includes a memory, a processor, and a computer program stored on the memory and capable of running on the processor, and the processor implements the game animation model identification method according to the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present application provide a non-transitory computer readable storage medium, the non-transitory computer readable storage medium including a computer program which, when executed by a processor, implements the game animation model identification method of the first aspect.
The game animation model identification method and the processing system provided by the embodiment of the application analyze the acquired game animation file to be identified to obtain each frame of game image to be identified of the game animation file to be identified; inputting each frame of game image to be identified into the feature extraction model to obtain feature point cloud to be processed of each frame of game image to be identified; performing mismatching elimination on the feature point clouds to be processed of each frame of game images to be identified to obtain target feature point clouds of each frame of game images to be identified, and splicing the target feature point clouds of each frame of game images to be identified to obtain an object skeleton curve of each frame of game images to be identified; performing texture recognition on each frame of game image to be recognized to obtain target texture characteristics of each frame of game image to be recognized; fusing object skeleton curves and target texture features of each frame of game image to be identified to obtain an animation model to be matched of each frame of game image to be identified; and inputting the animation model to be matched of each frame of game image to be identified into a model identification classifier to obtain the game animation model of each frame of game image to be identified. In the game animation model identification process, an unknown game animation model in a game file can be identified only according to the object skeleton curve and the texture characteristics thereof, and the unknown game animation model is identified in real time.
Drawings
In order to more clearly illustrate the application or the technical solutions of the prior art, the drawings that are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings described below are some embodiments of the application and that other drawings can be obtained from them without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a game animation model recognition method provided by an embodiment of the present application;
FIG. 2 is a block diagram of a game animation model recognition system provided by an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings of embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
FIG. 1 is a flow chart of a game animation model recognition method provided by an embodiment of the application. Referring to fig. 1, an embodiment of the present application provides a game animation model recognition method, including:
step 100, analyzing the obtained game animation file to be identified to obtain each frame of game image to be identified of the game animation file to be identified;
step 200, inputting each frame of game image to be identified into a feature extraction model to obtain feature point cloud to be processed of each frame of game image to be identified;
step 300, carrying out mismatching elimination on the feature point clouds to be processed of each frame of game images to be identified to obtain target feature point clouds of each frame of game images to be identified, and splicing the target feature point clouds of each frame of game images to be identified to obtain an object skeleton curve of each frame of game images to be identified;
step 400, carrying out texture recognition on each frame of game image to be recognized to obtain target texture characteristics of each frame of game image to be recognized;
step 500, fusing object skeleton curves and target texture features of each frame of game image to be identified to obtain an animation model to be matched of each frame of game image to be identified;
Step 600, inputting the animation model to be matched of each frame of game image to be identified into the model identification classifier to obtain the game animation model of each frame of game image to be identified.
It should be noted that the game animation model recognition method of the embodiment of the present application is described below as being executed by a game animation model recognition system. When a user needs to identify a game animation model in a game animation file, the game animation file to be identified is input into the game animation model recognition system.
Therefore, after the game animation model recognition system obtains the game animation file to be recognized, the game animation file to be recognized is analyzed, and each frame of game image to be recognized of the game animation file to be recognized is obtained. Further, the game animation model recognition system inputs each frame of game image to be recognized into a feature extraction model to obtain feature point clouds to be processed of each frame of game image to be recognized, wherein the feature extraction model is obtained based on training of each frame of image sample and corresponding feature point cloud samples.
Further, the game animation model recognition system performs mismatching elimination on the feature point clouds to be processed of each frame of game images to be recognized, namely eliminates mismatching point clouds of the feature point clouds to be processed of each frame of game images to be recognized, and obtains target feature point clouds of each frame of game images to be recognized. Further, the game animation model recognition system splices the target feature point clouds of each frame of game images to be recognized to obtain an object skeleton curve of each frame of game images to be recognized.
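As a rough illustration of this rejection-and-splicing step, which the patent does not specify in detail, the following Python sketch treats mismatched points as statistical outliers around the frame centroid and concatenates the surviving per-frame clouds; the 3-sigma rule, the function names and the random test data are assumptions, not the claimed algorithm.

```python
import numpy as np

def reject_mismatches(points, k=3.0):
    # Drop points whose distance to the frame centroid is a k-sigma outlier.
    # This simple rule only stands in for the patent's unspecified
    # mismatching-elimination step.
    centroid = points.mean(axis=0)
    dists = np.linalg.norm(points - centroid, axis=1)
    keep = dists <= dists.mean() + k * dists.std()
    return points[keep]

def splice_skeleton(per_frame_point_clouds):
    # Concatenate the cleaned (target) point clouds of consecutive frames
    # into a single ordered set approximating the object skeleton curve.
    return np.vstack([reject_mismatches(pc) for pc in per_frame_point_clouds])

# toy usage with two frames of random 3-D feature points
frames = [np.random.rand(100, 3), np.random.rand(120, 3)]
skeleton_curve = splice_skeleton(frames)
```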
Further, the game animation model recognition system performs texture recognition on each frame of game image to be recognized to obtain target texture features of each frame of game image to be recognized, wherein the target texture features of each frame of game image to be recognized are texture features of each frame of game image to be recognized under different rotation angles, different scales and/or different illumination conditions.
Further, the game animation model recognition system fuses the object skeleton curve and the target texture characteristic of each frame of game image to be recognized to obtain an animation model to be matched of each frame of game image to be recognized. And finally, the game animation model identification system inputs the animation model to be matched of each frame of game image to be identified into the model identification classifier to obtain the game animation model of each frame of game image to be identified.
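Taken together, the flow above can be pictured as the schematic orchestration below; every callable is a placeholder for a component the patent only names (the feature extraction model, the texture recognizer group, the fusion step, the model identification classifier), so this is a sketch rather than the claimed implementation.

```python
from typing import Any, Callable, Iterable, List

def identify_game_animation_models(
    frames: Iterable[Any],          # step 100: frames parsed from the animation file
    extract_points: Callable,       # step 200: feature extraction model
    clean_and_splice: Callable,     # step 300: mismatch rejection + point cloud splicing
    recognize_texture: Callable,    # step 400: texture recognizer group + texture library
    fuse: Callable,                 # step 500: skeleton / texture feature fusion
    classify: Callable,             # step 600: model identification classifier
) -> List[Any]:
    models = []
    for frame in frames:
        points = extract_points(frame)
        skeleton = clean_and_splice(points)
        texture = recognize_texture(frame)
        candidate = fuse(skeleton, texture)   # animation model to be matched
        models.append(classify(candidate))    # game animation model of the frame
    return models
```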
According to the game animation model identification method provided by the embodiment of the application, the acquired game animation file to be identified is analyzed to obtain each frame of game image to be identified of the game animation file to be identified; inputting each frame of game image to be identified into the feature extraction model to obtain feature point cloud to be processed of each frame of game image to be identified; performing mismatching elimination on the feature point clouds to be processed of each frame of game images to be identified to obtain target feature point clouds of each frame of game images to be identified, and splicing the target feature point clouds of each frame of game images to be identified to obtain an object skeleton curve of each frame of game images to be identified; performing texture recognition on each frame of game image to be recognized to obtain target texture characteristics of each frame of game image to be recognized; fusing object skeleton curves and target texture features of each frame of game image to be identified to obtain an animation model to be matched of each frame of game image to be identified; and inputting the animation model to be matched of each frame of game image to be identified into a model identification classifier to obtain the game animation model of each frame of game image to be identified. In the game animation model identification process, an unknown game animation model in a game file can be identified only according to the object skeleton curve and the texture characteristics thereof, and the unknown game animation model is identified in real time.
Based on the above embodiment, the performing texture recognition on each frame of game image to be recognized in step 400 to obtain the target texture feature of each frame of game image to be recognized includes:
performing texture recognition on each frame of game image to be recognized through each texture recognizer of the texture recognizer group to obtain a first texture feature of each frame of game image to be recognized; wherein each texture recognizer is composed of a two-dimensional Gaussian function D_xx and a two-dimensional complex sine function D_yy, and the expression of the texture recognizer is a formula combining D_xx and D_yy (not reproduced in the source text);
Acquiring a texture file from a texture library according to the file type of the game animation file to be identified; wherein the texture file comprises a plurality of second texture features;
and determining the target texture characteristic of each frame of game image to be identified according to the first texture characteristic and the plurality of second texture characteristics of each frame of game image to be identified.
Specifically, the game animation model recognition system performs texture recognition on each frame of game image to be recognized through each texture recognizer of a texture recognizer group to obtain a first texture feature of each frame of game image to be recognized, wherein each texture recognizer of the texture recognizer group is composed of a two-dimensional Gaussian function D_xx and a two-dimensional complex sine function D_yy; the expression of the texture recognizer is therefore a formula combining D_xx and D_yy (not reproduced in the source text).
Further, the game animation model recognition system determines the file type of the game animation file to be recognized, and acquires a texture file from a texture library according to the file type of the game animation file to be recognized, wherein the texture file contains a plurality of second texture features.
Further, the game animation model recognition system determines a target texture feature of each frame of the game image to be recognized according to the first texture feature and the plurality of second texture features of each frame of the game image to be recognized.
The embodiment of the application provides the target texture features for the follow-up, so that the unknown game animation model in the game file can be identified only according to the object skeleton curve and the target texture features thereof, and the unknown game animation model is identified in real time.
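For concreteness, a texture recognizer built from a two-dimensional Gaussian envelope and a two-dimensional complex sinusoid can be sketched as a Gabor-style filter, as below; the exact expression, filter parameters and the mean-magnitude statistic are assumptions, since the patent's formula is not reproduced in the text.

```python
import numpy as np
from scipy.signal import fftconvolve

def texture_recognizer(size=31, sigma=4.0, freq=0.2, theta=0.0):
    # 2-D Gaussian (stand-in for D_xx) multiplied by a 2-D complex sinusoid
    # (stand-in for D_yy); the patent's exact expression is not published.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    gaussian = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return gaussian * np.exp(2j * np.pi * freq * x_rot)

def first_texture_feature(gray_frame, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    # Apply the recognizer group at several orientations and keep one
    # magnitude statistic per recognizer as the first texture feature.
    feats = []
    for theta in thetas:
        response = fftconvolve(gray_frame, texture_recognizer(theta=theta), mode="same")
        feats.append(np.abs(response).mean())
    return np.asarray(feats)

# toy usage on a random grayscale frame
frame = np.random.rand(240, 320)
feature = first_texture_feature(frame)
```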
Further, determining the target texture feature of each frame of the game image to be identified according to the first texture feature and the plurality of second texture features of each frame of the game image to be identified, including:
calculating the feature distance between the first texture feature and the second texture feature of each frame of game image to be identified, and sorting the feature distances of each frame of game image to be identified in a descending order to obtain the feature distance of each frame of game image to be identified after sorting;
Carrying out average value calculation on a first characteristic distance in the characteristic distances after sequencing of each frame of game images to be identified to obtain an average value characteristic distance of each frame of game images to be identified; the first characteristic distances are the preset number of characteristic distances before the sequenced characteristic distances;
performing difference calculation on each second characteristic distance and the average characteristic distance in the characteristic distances of each frame of game image to be identified to obtain a distance difference between each second characteristic distance and the average characteristic distance in the characteristic distances of each frame of game image to be identified; the second characteristic distance is the rest characteristic distance except the first characteristic distance in the ordered characteristic distances;
and determining the target texture feature of each frame of game image to be identified according to the distance difference between each second feature distance and the mean feature distance in the feature distances of each frame of game image to be identified.
Specifically, the game animation model recognition system calculates feature distances between a first texture feature and a second texture feature of each frame of game images to be recognized, and sorts the feature distances of each frame of game images to be recognized in a descending order to obtain feature distances of each frame of game images to be recognized after sorting.
Further, the game animation model recognition system carries out mean value calculation on a first characteristic distance in the characteristic distances after the sequence of each frame of game images to be recognized to obtain a mean value characteristic distance of each frame of game images to be recognized, wherein the first characteristic distance is a preset number of characteristic distances before the sequence of the characteristic distances, and the preset number is set according to the actual situation.
Further, the game animation model recognition system calculates the difference value between each second characteristic distance and the mean characteristic distance in the characteristic distances of each frame of game images to be recognized to obtain a distance difference value between each second characteristic distance and the mean characteristic distance in the characteristic distances of each frame of game images to be recognized, wherein the second characteristic distances are the rest characteristic distances except the first characteristic distance in the sequenced characteristic distances;
further, the game animation model recognition system determines the target texture feature of each frame of game image to be recognized according to the distance difference between each second feature distance and the mean feature distance in the feature distances of each frame of game image to be recognized.
The embodiment of the application provides the target texture features for the follow-up, so that the unknown game animation model in the game file can be identified only according to the object skeleton curve and the target texture features thereof, and the unknown game animation model is identified in real time.
Further, the determining the target texture feature of each frame of the game image to be identified according to the distance difference between the average feature distance and each second feature distance in the feature distances of each frame of the game image to be identified includes:
determining the target characteristic distance of each frame of game image to be identified by determining the second characteristic distance of which the distance difference value is smaller than or equal to the preset difference value in each frame of game image to be identified;
and eliminating the target feature distance with the largest numerical value and the target feature distance with the smallest numerical value in each frame of game image to be identified, and carrying out average value on the second texture features corresponding to the rest target feature distances to obtain the target texture features of each frame of game image to be identified.
Specifically, the game animation model recognition system determines the target feature distance of each frame of game image to be recognized by using the second feature distance of which the distance difference value is smaller than or equal to the preset difference value in each frame of game image to be recognized, wherein the preset difference value is set according to the actual situation.
Further, the game animation model recognition system eliminates the target feature distance with the largest numerical value and the target feature distance with the smallest numerical value in each frame of game image to be recognized, averages the second texture features corresponding to the rest target feature distances, and determines the average features obtained after the averaging as the target texture features of each frame of game image to be recognized.
The embodiment of the application provides the target texture features for the follow-up, so that the unknown game animation model in the game file can be identified only according to the object skeleton curve and the target texture features thereof, and the unknown game animation model is identified in real time.
In one embodiment, the preset difference is 0.5 and the sorted feature distances are A1, A2, A3, A4, A5, A6, A7, A8 and A9. The first 4 feature distances, A1, A2, A3 and A4, are determined as the first feature distances, and A5, A6, A7, A8 and A9 as the second feature distances. Averaging A1, A2, A3 and A4 gives a mean feature distance B; the distance differences between A5, A6, A7, A8, A9 and B are 0.1, 0.3, 0.7, 0.75 and 0.5, respectively, so A5, A6 and A9 are determined as the target feature distances.
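Under the numbers of this example, the selection rule described above can be sketched as follows; `top_k` stands for the preset number, `max_diff` for the preset difference, and the use of an absolute distance difference is an assumption.

```python
import numpy as np

def select_target_texture_feature(first_feature, second_features, top_k=4, max_diff=0.5):
    second_features = np.asarray(second_features, dtype=float)
    # feature distance between the first texture feature and every second texture feature
    dists = np.linalg.norm(second_features - np.asarray(first_feature, dtype=float), axis=1)

    order = np.argsort(dists)[::-1]                 # descending sort
    first_idx, second_idx = order[:top_k], order[top_k:]
    mean_dist = dists[first_idx].mean()             # mean of the first feature distances

    diffs = np.abs(dists[second_idx] - mean_dist)   # distance differences
    target_idx = second_idx[diffs <= max_diff]      # target feature distances

    # drop the largest and smallest target feature distance, then average the
    # corresponding second texture features
    if len(target_idx) > 2:
        target_idx = target_idx[np.argsort(dists[target_idx])[1:-1]]
    return second_features[target_idx].mean(axis=0)

# toy usage: a 4-D first feature against nine candidate second features
rng = np.random.default_rng(0)
target = select_target_texture_feature(rng.random(4), rng.random((9, 4)))
```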
Based on the above embodiment, the fusing of the object skeleton curve and the target texture feature of each frame of the game image to be identified described in step 500 to obtain the animation model to be matched of each frame of the game image to be identified includes:
convolving the object skeleton curve of each frame of game image to be identified and the target texture characteristic thereof in the x-axis direction, the y-axis direction and the z-axis direction to obtain a first animation model characteristic of the object skeleton curve of each frame of game image to be identified in the x-axis direction, a second animation model characteristic of each frame of game image to be identified in the y-axis direction and a third animation model characteristic of each frame of game image to be identified in the z-axis direction, a fourth animation model characteristic of the target texture characteristic of each frame of game image to be identified in the x-axis direction, a fifth animation model characteristic of each frame of game image to be identified in the y-axis direction and a sixth animation model characteristic of each frame of game image to be identified in the z-axis direction;
And carrying out feature fusion on the first animation model feature and the fourth animation model feature of each frame of game image to be identified, carrying out feature fusion on the second animation model feature and the fifth animation model feature, and carrying out feature fusion on the third animation model feature and the sixth animation model feature to obtain an animation model to be matched of each frame of game image to be identified.
Specifically, the game animation model recognition system convolves an object skeleton curve of each frame of game image to be recognized and a target texture feature thereof in an x-axis direction, a y-axis direction and a z-axis direction to obtain a first animation model feature of the object skeleton curve of each frame of game image to be recognized in the x-axis direction, a second animation model feature of each frame of game image to be recognized in the y-axis direction and a third animation model feature of each frame of game image to be recognized in the z-axis direction, and a fourth animation model feature of the target texture feature of each frame of game image to be recognized in the x-axis direction, a fifth animation model feature of each frame of game image to be recognized in the y-axis direction and a sixth animation model feature of each frame of game image to be recognized in the z-axis direction. Further, the game animation model recognition system performs feature fusion on the first animation model feature and the fourth animation model feature of each frame of game image to be recognized, performs feature fusion on the second animation model feature and the fifth animation model feature, and performs feature fusion on the third animation model feature and the sixth animation model feature to obtain an animation model to be matched of each frame of game image to be recognized.
According to the embodiment of the application, the unknown game animation model in the game file can be identified only according to the object skeleton curve and the target texture characteristics thereof, so that the unknown game animation model is identified in real time.
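The directional convolution and per-axis fusion can be pictured with the sketch below; the smoothing kernel, the tiling of the texture feature into three channels and fusion by concatenation are assumptions, since the patent does not fix these details.

```python
import numpy as np

def directional_features(sequence_3d, kernel=None):
    # Convolve an (N, 3) sequence independently along its x-, y- and z-channels,
    # yielding one animation-model feature per axis direction.
    kernel = np.array([1.0, 2.0, 1.0]) / 4.0 if kernel is None else kernel
    return [np.convolve(sequence_3d[:, axis], kernel, mode="same") for axis in range(3)]

def fuse_skeleton_and_texture(skeleton_curve, texture_feature):
    # Broadcast the texture feature to three channels so both inputs can be
    # convolved along the same three directions, then fuse pairwise per axis
    # (first with fourth, second with fifth, third with sixth model feature).
    texture_3d = np.tile(np.asarray(texture_feature, dtype=float)[:, None], (1, 3))
    skel = directional_features(np.asarray(skeleton_curve, dtype=float))
    tex = directional_features(texture_3d)
    return [np.concatenate([s, t]) for s, t in zip(skel, tex)]

# toy usage: 50 skeleton points and a 4-D target texture feature
fused = fuse_skeleton_and_texture(np.random.rand(50, 3), np.random.rand(4))
```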
Based on the above embodiment, the inputting the animation model to be matched of each frame of the game image to be identified in step 600 into the model identification classifier, to obtain the game animation model of each frame of the game image to be identified, includes:
inputting the animation model to be matched of each frame of game image to be identified into a model identification classifier to obtain an enhanced animation model of each frame of game image to be identified;
determining resolution loss, color loss and texture loss based on differences between the animation model to be matched and the enhanced animation model of each frame of game image to be identified;
and obtaining a game animation model of each frame of game image to be identified based on the resolution loss, the color loss and the texture loss of each frame of game image to be identified.
Specifically, the game animation model recognition system inputs to-be-matched animation models of each frame of to-be-recognized game image into a model recognition classifier to obtain enhanced animation models of each frame of to-be-recognized game image, wherein the model recognition classifier is trained according to the labeled original animation models and the labeled corresponding enhanced animation models thereof.
Further, the game animation model recognition system calculates the resolution loss, the color loss and the texture loss of each frame of game images to be recognized according to the differences between the animation models to be matched and the enhanced animation models of each frame of game images to be recognized.
Further, the game animation model recognition system obtains a game animation model of each frame of the game image to be recognized according to the resolution loss, the color loss and the texture loss of each frame of the game image to be recognized.
The resolution loss, the color loss and the texture loss of each frame of game image to be identified are each determined based on a corresponding formula (the three formulas appear as images in the original publication and are not reproduced in this text). The symbols in these formulas denote, respectively, the resolution loss, the exponential function, the animation model to be matched, the enhanced animation model, the mean model of the animation model to be matched and the enhanced animation model, the network hyperparameter, the color loss, and the texture loss.
According to the embodiment of the application, the unknown game animation model in the game file can be identified only according to the object skeleton curve and the target texture characteristics thereof, so that the unknown game animation model is identified in real time.
Further, based on the resolution loss, the color loss and the texture loss of each frame of game image to be identified, a game animation model of each frame of game image to be identified is obtained, which comprises the following steps:
if the resolution loss of each frame of game image to be identified is less than or equal to the first preset loss, the color loss is less than or equal to the second preset loss, and the texture loss is less than or equal to the third preset loss, determining the to-be-matched animation model or the enhanced animation model of each frame of game image to be identified as the game animation model of each frame of game image to be identified;
if any one of the resolution loss, the color loss and the texture loss of each frame of game image to be identified is determined to be larger than the preset loss, carrying out average value processing on the animation model to be matched and the enhanced animation model of each frame of game image to be identified, and determining the obtained average value model as the game animation model of each frame of game image to be identified;
if the resolution loss of each frame of game image to be identified is determined to be larger than the first preset loss, the color loss is larger than the second preset loss, and the texture loss is larger than the third preset loss, determining the enhanced animation model of each frame of game image to be identified as the game animation model of each frame of game image to be identified.
Specifically, if it is determined that the resolution loss of each frame of game image to be identified is less than or equal to a first preset loss, the color loss is less than or equal to a second preset loss, and the texture loss is less than or equal to a third preset loss, the game animation model identification system determines the to-be-matched animation model or the enhanced animation model of each frame of game image to be identified as the game animation model of each frame of game image to be identified, wherein the first preset loss, the second preset loss and the third preset loss are set according to actual requirements.
Further, if it is determined that any one of the resolution loss, the color loss and the texture loss of each frame of the game image to be identified is greater than its preset loss (but not all three), the game animation model identification system performs mean processing on the to-be-matched animation model and the enhanced animation model of each frame of the game image to be identified, and determines the obtained mean model as the game animation model of that frame. This situation can be understood as one of the following cases: 1. the resolution loss is less than or equal to the first preset loss, the color loss is greater than the second preset loss, and the texture loss is less than or equal to the third preset loss; 2. the resolution loss is less than or equal to the first preset loss, the color loss is greater than the second preset loss, and the texture loss is greater than the third preset loss; 3. the resolution loss is greater than the first preset loss, the color loss is less than or equal to the second preset loss, and the texture loss is less than or equal to the third preset loss; 4. the resolution loss is greater than the first preset loss, the color loss is less than or equal to the second preset loss, and the texture loss is greater than the third preset loss; 5. the resolution loss is less than or equal to the first preset loss, the color loss is less than or equal to the second preset loss, and the texture loss is greater than the third preset loss; or 6. the resolution loss is greater than the first preset loss, the color loss is greater than the second preset loss, and the texture loss is less than or equal to the third preset loss.
Further, if it is determined that the resolution loss of each frame of game image to be identified is greater than the first preset loss, the color loss is greater than the second preset loss, and the texture loss is greater than the third preset loss, the game animation model identification system determines the enhanced animation model of each frame of game image to be identified as the game animation model of each frame of game image to be identified.
According to the embodiment of the application, the unknown game animation model in the game file can be identified only according to the object skeleton curve and the target texture characteristics thereof, so that the unknown game animation model is identified in real time.
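The threshold logic of this embodiment can be summarized in the sketch below; the three preset loss values are placeholders, and the mean model is taken as a simple element-wise average, which is one possible reading of the mean processing described above.

```python
import numpy as np

def choose_game_animation_model(candidate, enhanced,
                                res_loss, color_loss, tex_loss,
                                max_res=0.1, max_color=0.1, max_tex=0.1):
    # candidate: animation model to be matched; enhanced: output of the classifier.
    within = (res_loss <= max_res, color_loss <= max_color, tex_loss <= max_tex)
    if all(within):
        return candidate                    # the text allows either model here
    if not any(within):
        return enhanced                     # every loss exceeds its preset loss
    return 0.5 * (np.asarray(candidate) + np.asarray(enhanced))   # mean model

# toy usage with models represented as flat feature vectors
model = choose_game_animation_model(np.zeros(8), np.ones(8), 0.05, 0.2, 0.05)
```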
The following describes a game animation model recognition system provided by an embodiment of the present application, and the game animation model recognition system described below and the game animation model recognition method described above may be referred to correspondingly.
Referring to fig. 2, fig. 2 is a block diagram of a game animation model recognition system according to an embodiment of the present application, where the game animation model recognition system according to the embodiment of the present application includes:
the parsing module 201 is configured to parse the obtained game animation file to be recognized, so as to obtain each frame of game image to be recognized of the game animation file to be recognized;
The feature extraction module 202 is configured to input each frame of game image to be identified into the feature extraction model, so as to obtain a feature point cloud to be processed of each frame of game image to be identified; the feature extraction model is obtained by training each frame of image sample and a corresponding feature point cloud sample;
the skeleton curve extraction module 203 is configured to perform mismatching elimination on feature point clouds to be processed of each frame of game images to be identified, obtain target feature point clouds of each frame of game images to be identified, and splice the target feature point clouds of each frame of game images to be identified, so as to obtain an object skeleton curve of each frame of game images to be identified;
the texture recognition module 204 is configured to perform texture recognition on each frame of game image to be recognized, so as to obtain a target texture feature of each frame of game image to be recognized; the target texture features are texture features of each frame of game image to be identified under different rotation angles, different scales and/or different illumination conditions;
the texture feature fusion module 205 is configured to fuse an object skeleton curve of each frame of the game image to be identified and a target texture feature thereof, so as to obtain an animation model to be matched of each frame of the game image to be identified;
The game animation model recognition module 206 is configured to input the to-be-matched animation model of each frame of to-be-recognized game image to the model recognition classifier, so as to obtain a game animation model of each frame of to-be-recognized game image.
The game animation model identification system provided by the embodiment of the application analyzes the acquired game animation file to be identified to obtain each frame of game image to be identified of the game animation file to be identified; inputting each frame of game image to be identified into the feature extraction model to obtain feature point cloud to be processed of each frame of game image to be identified; performing mismatching elimination on the feature point clouds to be processed of each frame of game images to be identified to obtain target feature point clouds of each frame of game images to be identified, and splicing the target feature point clouds of each frame of game images to be identified to obtain an object skeleton curve of each frame of game images to be identified; performing texture recognition on each frame of game image to be recognized to obtain target texture characteristics of each frame of game image to be recognized; fusing object skeleton curves and target texture features of each frame of game image to be identified to obtain an animation model to be matched of each frame of game image to be identified; and inputting the animation model to be matched of each frame of game image to be identified into a model identification classifier to obtain the game animation model of each frame of game image to be identified. In the game animation model identification process, an unknown game animation model in a game file can be identified only according to the object skeleton curve and the texture characteristics thereof, and the unknown game animation model is identified in real time.
In one embodiment, texture recognition module 204 is further to:
performing texture recognition on each frame of game image to be recognized through each texture recognizer of the texture recognizer group to obtain a first texture feature of each frame of game image to be recognized; wherein each texture recognizer is composed of a two-dimensional Gaussian function D_xx and a two-dimensional complex sine function D_yy, and the expression of the texture recognizer is a formula combining D_xx and D_yy (not reproduced in the source text);
Acquiring a texture file from a texture library according to the file type of the game animation file to be identified; wherein the texture file comprises a plurality of second texture features;
and determining the target texture characteristic of each frame of game image to be identified according to the first texture characteristic and the plurality of second texture characteristics of each frame of game image to be identified.
In one embodiment, texture recognition module 204 is further to:
calculating the feature distance between the first texture feature and the second texture feature of each frame of game image to be identified, and sorting the feature distances of each frame of game image to be identified in a descending order to obtain the feature distance of each frame of game image to be identified after sorting;
carrying out average value calculation on a first characteristic distance in the characteristic distances after sequencing of each frame of game images to be identified to obtain an average value characteristic distance of each frame of game images to be identified; the first characteristic distances are the preset number of characteristic distances before the sequenced characteristic distances;
Performing difference calculation on each second characteristic distance and the average characteristic distance in the characteristic distances of each frame of game image to be identified to obtain a distance difference between each second characteristic distance and the average characteristic distance in the characteristic distances of each frame of game image to be identified; the second characteristic distance is the rest characteristic distance except the first characteristic distance in the ordered characteristic distances;
and determining the target texture feature of each frame of game image to be identified according to the distance difference between each second feature distance and the mean feature distance in the feature distances of each frame of game image to be identified.
In one embodiment, texture recognition module 204 is further to:
determining the target characteristic distance of each frame of game image to be identified by determining the second characteristic distance of which the distance difference value is smaller than or equal to the preset difference value in each frame of game image to be identified;
and eliminating the target feature distance with the largest numerical value and the target feature distance with the smallest numerical value in each frame of game image to be identified, and carrying out average value on the second texture features corresponding to the rest target feature distances to obtain the target texture features of each frame of game image to be identified.
In one embodiment, texture feature fusion module 205 is further configured to:
convolving the object skeleton curve of each frame of game image to be identified and its target texture feature in the x-axis, y-axis and z-axis directions to obtain, for the object skeleton curve, a first animation model feature in the x-axis direction, a second animation model feature in the y-axis direction and a third animation model feature in the z-axis direction, and, for the target texture feature, a fourth animation model feature in the x-axis direction, a fifth animation model feature in the y-axis direction and a sixth animation model feature in the z-axis direction;
and carrying out feature fusion on the first animation model feature and the fourth animation model feature of each frame of game image to be identified, carrying out feature fusion on the second animation model feature and the fifth animation model feature, and carrying out feature fusion on the third animation model feature and the sixth animation model feature to obtain an animation model to be matched of each frame of game image to be identified.
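A minimal sketch of this fusion step, assuming the object skeleton curve and the target texture feature have each been rasterized into a three-dimensional array so that convolution along the x-, y- and z-axis directions is well defined; the smoothing kernel and the weighted-addition fusion rule are assumptions, since the embodiment does not fix either.

```python
import numpy as np
from scipy.ndimage import convolve1d

def axis_features(volume, kernel=np.array([1.0, 2.0, 1.0]) / 4.0):
    """Convolve a rasterized 3-D representation along the x, y and z axes,
    yielding one animation model feature per direction."""
    return [convolve1d(volume, kernel, axis=axis) for axis in range(3)]

def fuse_to_candidate_model(skeleton_vol, texture_vol, alpha=0.5):
    """Pair the skeleton features (first to third) with the texture features
    (fourth to sixth) axis by axis; weighted addition is an assumed fusion
    operator."""
    skeleton_feats = axis_features(skeleton_vol)   # features 1-3
    texture_feats = axis_features(texture_vol)     # features 4-6
    return [alpha * s + (1.0 - alpha) * t
            for s, t in zip(skeleton_feats, texture_feats)]
```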
In one embodiment, game animation model identification module 206 is further configured to:
inputting the animation model to be matched of each frame of game image to be identified into a model identification classifier to obtain an enhanced animation model of each frame of game image to be identified;
Determining resolution loss, color loss and texture loss based on differences between the animation model to be matched and the enhanced animation model of each frame of game image to be identified;
and obtaining a game animation model of each frame of game image to be identified based on the resolution loss, the color loss and the texture loss of each frame of game image to be identified.
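The exact loss formulas are given in the patent only as figures (see claim 8), so the sketch below substitutes simple proxies built from the same ingredients: the difference between the animation model to be matched and the enhanced animation model, an exponential function, and a network hyperparameter. Both models are assumed to be available as rendered H x W x 3 float images; the specific expressions are illustrative assumptions, not the patented formulas.

```python
import numpy as np

def model_losses(matched, enhanced, lam=0.1):
    """Proxy resolution, color and texture losses between the animation model
    to be matched and the enhanced animation model, both rendered as float
    images in [0, 1]; lam plays the role of the network hyperparameter."""
    res_loss = 1.0 - np.exp(-lam * np.mean((matched - enhanced) ** 2))
    col_loss = 1.0 - np.exp(-lam * np.mean(np.abs(
        matched.mean(axis=(0, 1)) - enhanced.mean(axis=(0, 1)))))
    grad_m = np.hypot(*np.gradient(matched.mean(axis=2)))   # edge magnitude, matched
    grad_e = np.hypot(*np.gradient(enhanced.mean(axis=2)))  # edge magnitude, enhanced
    tex_loss = 1.0 - np.exp(-lam * np.mean(np.abs(grad_m - grad_e)))
    return res_loss, col_loss, tex_loss
```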
In one embodiment, game animation model identification module 206 is further configured to:
if the resolution loss of each frame of game image to be identified is less than or equal to the first preset loss, the color loss is less than or equal to the second preset loss, and the texture loss is less than or equal to the third preset loss, determining the to-be-matched animation model or the enhanced animation model of each frame of game image to be identified as the game animation model of each frame of game image to be identified;
if any one of the resolution loss, the color loss and the texture loss of each frame of game image to be identified is determined to be greater than its corresponding preset loss, carrying out mean processing on the animation model to be matched and the enhanced animation model of each frame of game image to be identified, and determining the obtained mean model as the game animation model of each frame of game image to be identified;
If the resolution loss of each frame of game image to be identified is determined to be larger than the first preset loss, the color loss is larger than the second preset loss, and the texture loss is larger than the third preset loss, determining the enhanced animation model of each frame of game image to be identified as the game animation model of each frame of game image to be identified.
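The three-way rule of this embodiment translates directly into a small selection helper; the sketch below uses the proxy losses above, with placeholder values standing in for the first, second and third preset losses.

```python
def select_game_animation_model(matched, enhanced, losses,
                                thresholds=(0.1, 0.1, 0.1)):
    """losses = (resolution, color, texture); thresholds are the first,
    second and third preset losses (placeholder values)."""
    res, col, tex = losses
    t_res, t_col, t_tex = thresholds
    if res <= t_res and col <= t_col and tex <= t_tex:
        return matched                      # either candidate is acceptable
    if res > t_res and col > t_col and tex > t_tex:
        return enhanced                     # every loss exceeded: keep enhanced
    return 0.5 * (matched + enhanced)       # mixed case: mean of the two models
```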
Fig. 3 illustrates a schematic diagram of the physical structure of an electronic device. As shown in Fig. 3, the electronic device may include: a processor 310, a communication interface 320, a memory 330 and a communication bus 340, wherein the processor 310, the communication interface 320 and the memory 330 communicate with one another via the communication bus 340. The processor 310 may invoke a computer program stored in the memory 330 to perform the steps of the game animation model identification method, for example including:
analyzing the obtained game animation file to be identified to obtain each frame of game image to be identified of the game animation file to be identified;
inputting each frame of game image to be identified into the feature extraction model to obtain feature point cloud to be processed of each frame of game image to be identified; the feature extraction model is obtained by training each frame of image sample and a corresponding feature point cloud sample;
Performing mismatching elimination on the feature point clouds to be processed of each frame of game images to be identified to obtain target feature point clouds of each frame of game images to be identified, and splicing the target feature point clouds of each frame of game images to be identified to obtain an object skeleton curve of each frame of game images to be identified;
performing texture recognition on each frame of game image to be recognized to obtain target texture characteristics of each frame of game image to be recognized; the target texture features are texture features of each frame of game image to be identified under different rotation angles, different scales and/or different illumination conditions;
fusing object skeleton curves and target texture features of each frame of game image to be identified to obtain an animation model to be matched of each frame of game image to be identified;
and inputting the animation model to be matched of each frame of game image to be identified into a model identification classifier to obtain the game animation model of each frame of game image to be identified.
In addition, the logic instructions in the memory 330 may be implemented in the form of software functional units and, when sold or used as a standalone product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or the part contributing to the prior art, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
In another aspect, embodiments of the present application further provide a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the game animation model identification method provided in the foregoing embodiments, for example including:
analyzing the obtained game animation file to be identified to obtain each frame of game image to be identified of the game animation file to be identified;
inputting each frame of game image to be identified into the feature extraction model to obtain feature point cloud to be processed of each frame of game image to be identified; the feature extraction model is obtained by training each frame of image sample and a corresponding feature point cloud sample;
performing mismatching elimination on the feature point clouds to be processed of each frame of game images to be identified to obtain target feature point clouds of each frame of game images to be identified, and splicing the target feature point clouds of each frame of game images to be identified to obtain an object skeleton curve of each frame of game images to be identified;
Performing texture recognition on each frame of game image to be recognized to obtain target texture characteristics of each frame of game image to be recognized; the target texture features are texture features of each frame of game image to be identified under different rotation angles, different scales and/or different illumination conditions;
fusing object skeleton curves and target texture features of each frame of game image to be identified to obtain an animation model to be matched of each frame of game image to be identified;
and inputting the animation model to be matched of each frame of game image to be identified into a model identification classifier to obtain the game animation model of each frame of game image to be identified.
The system embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or, of course, by means of hardware. Based on this understanding, the foregoing technical solution, in essence or the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk or an optical disc, and which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method for identifying a game animation model, comprising:
analyzing the obtained game animation file to be identified to obtain each frame of game image to be identified of the game animation file to be identified;
inputting each frame of game image to be identified into the feature extraction model to obtain feature point cloud to be processed of each frame of game image to be identified; the feature extraction model is obtained by training each frame of image sample and a corresponding feature point cloud sample;
performing mismatching elimination on the feature point clouds to be processed of each frame of game images to be identified to obtain target feature point clouds of each frame of game images to be identified, and splicing the target feature point clouds of each frame of game images to be identified to obtain an object skeleton curve of each frame of game images to be identified;
performing texture recognition on each frame of game image to be recognized to obtain target texture characteristics of each frame of game image to be recognized; the target texture features are texture features of each frame of game image to be identified under different rotation angles, different scales and/or different illumination conditions;
fusing object skeleton curves and target texture features of each frame of game image to be identified to obtain an animation model to be matched of each frame of game image to be identified;
And inputting the animation model to be matched of each frame of game image to be identified into a model identification classifier to obtain the game animation model of each frame of game image to be identified.
2. The method for recognizing game animation model according to claim 1, wherein the performing texture recognition on each frame of game image to be recognized to obtain the target texture feature of each frame of game image to be recognized comprises:
performing texture recognition on each frame of game image to be recognized through each texture identifier of the texture identifier group to obtain a first texture feature of each frame of game image to be recognized; wherein each texture identifier is composed of a two-dimensional Gaussian function D_xx and a two-dimensional complex sine function D_yy;
Acquiring a texture file from a texture library according to the file type of the game animation file to be identified; wherein the texture file comprises a plurality of second texture features;
and determining the target texture characteristic of each frame of game image to be identified according to the first texture characteristic and the plurality of second texture characteristics of each frame of game image to be identified.
3. The method for recognizing a game animation model according to claim 2, wherein the determining the target texture feature of each frame of the game image to be recognized based on the first texture feature and the plurality of second texture features of each frame of the game image to be recognized comprises:
Calculating the feature distance between the first texture feature and the second texture feature of each frame of game image to be identified, and sorting the feature distances of each frame of game image to be identified in a descending order to obtain the feature distance of each frame of game image to be identified after sorting;
carrying out average value calculation on the first characteristic distances in the sorted characteristic distances of each frame of game image to be identified to obtain a mean characteristic distance of each frame of game image to be identified; wherein the first characteristic distances are the first preset number of characteristic distances in the sorted characteristic distances;
performing difference calculation between each second characteristic distance and the mean characteristic distance in the characteristic distances of each frame of game image to be identified to obtain a distance difference between each second characteristic distance and the mean characteristic distance; wherein the second characteristic distances are the remaining characteristic distances in the sorted characteristic distances other than the first characteristic distances;
and determining the target texture feature of each frame of game image to be identified according to the distance difference between each second feature distance and the mean feature distance in the feature distances of each frame of game image to be identified.
4. A game animation model recognition method according to claim 3, wherein the determining the target texture feature of each frame of the game image to be recognized according to the distance difference between each second feature distance and the mean feature distance in the feature distances of each frame of the game image to be recognized comprises:
determining the second characteristic distance of which the distance difference value is smaller than or equal to a preset difference value in each frame of game image to be identified as the target characteristic distance of each frame of game image to be identified;
and eliminating the target feature distance with the largest value and the target feature distance with the smallest value in each frame of game image to be identified, and averaging the second texture features corresponding to the remaining target feature distances to obtain the target texture feature of each frame of game image to be identified.
5. The method for identifying a game animation model according to claim 1, wherein the fusing the object skeleton curve of each frame of the game image to be identified and the target texture feature thereof to obtain the animation model to be matched of each frame of the game image to be identified comprises:
convolving the object skeleton curve of each frame of game image to be identified and its target texture feature in the x-axis, y-axis and z-axis directions to obtain, for the object skeleton curve, a first animation model feature in the x-axis direction, a second animation model feature in the y-axis direction and a third animation model feature in the z-axis direction, and, for the target texture feature, a fourth animation model feature in the x-axis direction, a fifth animation model feature in the y-axis direction and a sixth animation model feature in the z-axis direction;
And carrying out feature fusion on the first animation model feature and the fourth animation model feature of each frame of game image to be identified, carrying out feature fusion on the second animation model feature and the fifth animation model feature, and carrying out feature fusion on the third animation model feature and the sixth animation model feature to obtain an animation model to be matched of each frame of game image to be identified.
6. The method for recognizing game animation models according to claim 1, wherein the step of inputting the to-be-matched animation model of each frame of the to-be-recognized game image into the model recognition classifier to obtain the game animation model of each frame of the to-be-recognized game image comprises the steps of:
inputting the animation model to be matched of each frame of game image to be identified into a model identification classifier to obtain an enhanced animation model of each frame of game image to be identified;
determining resolution loss, color loss and texture loss based on differences between the animation model to be matched and the enhanced animation model of each frame of game image to be identified;
and obtaining a game animation model of each frame of game image to be identified based on the resolution loss, the color loss and the texture loss of each frame of game image to be identified.
7. The method for recognizing game animation model according to claim 6, wherein the obtaining the game animation model of each frame of the game image to be recognized based on the resolution loss, the color loss and the texture loss of each frame of the game image to be recognized comprises:
If the resolution loss of each frame of game image to be identified is less than or equal to the first preset loss, the color loss is less than or equal to the second preset loss, and the texture loss is less than or equal to the third preset loss, determining the to-be-matched animation model or the enhanced animation model of each frame of game image to be identified as the game animation model of each frame of game image to be identified;
if any one of the resolution loss, the color loss and the texture loss of each frame of game image to be identified is determined to be greater than its corresponding preset loss, carrying out mean processing on the animation model to be matched and the enhanced animation model of each frame of game image to be identified, and determining the obtained mean model as the game animation model of each frame of game image to be identified;
if the resolution loss of each frame of game image to be identified is determined to be larger than the first preset loss, the color loss is larger than the second preset loss, and the texture loss is larger than the third preset loss, determining the enhanced animation model of each frame of game image to be identified as the game animation model of each frame of game image to be identified.
8. The method for recognizing a game animation model according to claim 6, wherein the resolution loss, the color loss and the texture loss of each frame of game image to be identified are each determined based on a corresponding formula whose terms comprise an exponential function, the animation model to be matched, the enhanced animation model, the mean model of the animation model to be matched and the enhanced animation model, and a network hyperparameter.
9. A game animation model recognition system, comprising:
the analysis module is used for analyzing the acquired game animation file to be identified to obtain each frame of game image to be identified of the game animation file to be identified;
the feature extraction module is used for inputting each frame of game image to be identified into the feature extraction model to obtain feature point cloud to be processed of each frame of game image to be identified; the feature extraction model is obtained by training each frame of image sample and a corresponding feature point cloud sample;
the framework curve extraction module is used for carrying out mismatching elimination on the feature point clouds to be processed of each frame of game image to be identified to obtain target feature point clouds of each frame of game image to be identified, and splicing the target feature point clouds of each frame of game image to be identified to obtain an object framework curve of each frame of game image to be identified;
The texture recognition module is used for carrying out texture recognition on each frame of game image to be recognized to obtain target texture characteristics of each frame of game image to be recognized; the target texture features are texture features of each frame of game image to be identified under different rotation angles, different scales and/or different illumination conditions;
the texture feature fusion module is used for fusing the object skeleton curve of each frame of game image to be identified and the target texture features of the object skeleton curve to obtain an animation model to be matched of each frame of game image to be identified;
the game animation model identification module is used for inputting the animation model to be matched of each frame of game image to be identified into the model identification classifier to obtain the game animation model of each frame of game image to be identified.
10. A non-transitory computer readable storage medium comprising a computer program, characterized in that the computer program when executed by a processor implements the game animation model identification method of any of claims 1 to 8.
CN202310930524.XA 2023-07-27 2023-07-27 Game animation model identification method and processing system Active CN116664727B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310930524.XA CN116664727B (en) 2023-07-27 2023-07-27 Game animation model identification method and processing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310930524.XA CN116664727B (en) 2023-07-27 2023-07-27 Game animation model identification method and processing system

Publications (2)

Publication Number Publication Date
CN116664727A CN116664727A (en) 2023-08-29
CN116664727B true CN116664727B (en) 2023-12-08

Family

ID=87710015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310930524.XA Active CN116664727B (en) 2023-07-27 2023-07-27 Game animation model identification method and processing system

Country Status (1)

Country Link
CN (1) CN116664727B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102934144A (en) * 2010-06-09 2013-02-13 微软公司 Real-time animation of facial expressions
CN114241119A (en) * 2020-09-07 2022-03-25 深圳荆虹科技有限公司 Game model generation method, device and system and computer storage medium
CN114307153A (en) * 2021-12-23 2022-04-12 网易(杭州)网络有限公司 Game asset processing method and device, computer storage medium and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110634177A (en) * 2018-06-21 2019-12-31 华为技术有限公司 Object modeling movement method, device and equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102934144A (en) * 2010-06-09 2013-02-13 微软公司 Real-time animation of facial expressions
CN114241119A (en) * 2020-09-07 2022-03-25 深圳荆虹科技有限公司 Game model generation method, device and system and computer storage medium
CN114307153A (en) * 2021-12-23 2022-04-12 网易(杭州)网络有限公司 Game asset processing method and device, computer storage medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Three-dimensional model retrieval method based on angle structure features of rendered images; Liu Zhi; Pan Xiaobin; Computer Science (S2); full text *

Also Published As

Publication number Publication date
CN116664727A (en) 2023-08-29

Similar Documents

Publication Publication Date Title
CN107944020B (en) Face image searching method and device, computer device and storage medium
CN110020592B (en) Object detection model training method, device, computer equipment and storage medium
CN108701216B (en) Face recognition method and device and intelligent terminal
CN110197146B (en) Face image analysis method based on deep learning, electronic device and storage medium
CN109145766B (en) Model training method and device, recognition method, electronic device and storage medium
CN112966742A (en) Model training method, target detection method and device and electronic equipment
CN110378278A (en) Training method, object search method, apparatus and the electronic equipment of neural network
CN113313053B (en) Image processing method, device, apparatus, medium, and program product
CN112381837A (en) Image processing method and electronic equipment
CN111401339B (en) Method and device for identifying age of person in face image and electronic equipment
CN111915580A (en) Tobacco leaf grading method, system, terminal equipment and storage medium
CN114490998B (en) Text information extraction method and device, electronic equipment and storage medium
CN110751069A (en) Face living body detection method and device
CN109117746A (en) Hand detection method and machine readable storage medium
CN117197904A (en) Training method of human face living body detection model, human face living body detection method and human face living body detection device
CN112966685A (en) Attack network training method and device for scene text recognition and related equipment
CN116994021A (en) Image detection method, device, computer readable medium and electronic equipment
CN106709490B (en) Character recognition method and device
CN116664727B (en) Game animation model identification method and processing system
CN112418089A (en) Gesture recognition method and device and terminal
CN114913330B (en) Point cloud component segmentation method and device, electronic equipment and storage medium
CN115713669B (en) Image classification method and device based on inter-class relationship, storage medium and terminal
CN113947801B (en) Face recognition method and device and electronic equipment
CN115547514A (en) Pathogenic gene sequencing method, pathogenic gene sequencing device, electronic equipment and medium
CN112818972B (en) Method and device for detecting interest point image, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant