CN115499635A - Data compression processing method and device

Info

Publication number: CN115499635A
Application number: CN202211148120.7A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 曹佳炯, 丁菁汀
Current Assignee: Alipay Hangzhou Information Technology Co Ltd
Original Assignee: Alipay Hangzhou Information Technology Co Ltd
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority: CN202211148120.7A
Legal status: Pending
Prior art keywords: data, compression, virtual, model, object data

Classifications

    • H04N 13/275: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • G06V 10/40: Extraction of image or video features
    • G06V 10/763: Clustering using non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V 10/764: Image or video recognition or understanding using classification, e.g. of video objects
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06V 20/20: Scene-specific elements in augmented reality scenes
    • H04N 13/161: Encoding, multiplexing or demultiplexing different image signal components
    • H04N 13/332: Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 19/149: Data rate or code amount at the encoder output, estimated by means of a model, e.g. mathematical or statistical model
    • H04N 19/85: Video coding using pre-processing or post-processing specially adapted for video compression

Abstract

An embodiment of the present specification provides a data compression processing method and apparatus, wherein the data compression processing method includes: inputting a virtual data set corresponding to a virtual object in a virtual world into an extraction model for object data extraction to obtain object data of the virtual object; performing visual characteristic identification on the virtual object based on the object data to obtain visual characteristics of the virtual object; and reading a compression model corresponding to the visual characteristics, and inputting the object data into the compression model to perform data compression processing to obtain compressed data.

Description

Data compression processing method and device
Technical Field
The present disclosure relates to the field of virtualization technologies, and in particular, to a data compression processing method and apparatus.
Background
The virtual world provides a simulation of the real world and can even provide scenes that are difficult to realize in the real world, so the virtual world is increasingly applied in a variety of scenarios. Because avatars and the various virtual items in the virtual world are rendered from three-dimensional data, they occupy considerable storage space, and transmitting the data corresponding to virtual objects in the virtual world consumes substantial bandwidth and transmission time.
Disclosure of Invention
One or more embodiments of the present specification provide a data compression processing method. The data compression processing method comprises the following steps: inputting a virtual data set corresponding to a virtual object in the virtual world into an extraction model for object data extraction to obtain the object data of the virtual object; performing visual feature recognition on the virtual object based on the object data to obtain the visual features of the virtual object; and reading a compression model corresponding to the visual features, and inputting the object data into the compression model for data compression processing to obtain compressed data.
One or more embodiments of the present specification provide a data compression processing apparatus, comprising: a data extraction module configured to input a virtual data set corresponding to a virtual object in the virtual world into an extraction model for object data extraction to obtain the object data of the virtual object; a visual feature recognition module configured to perform visual feature recognition on the virtual object based on the object data to obtain the visual features of the virtual object; and a data compression module configured to read a compression model corresponding to the visual features and input the object data into the compression model for data compression processing to obtain compressed data.
One or more embodiments of the present specification provide a data compression processing device, comprising: a processor; and a memory configured to store computer-executable instructions that, when executed, cause the processor to: input a virtual data set corresponding to a virtual object in the virtual world into an extraction model for object data extraction to obtain the object data of the virtual object; perform visual feature recognition on the virtual object based on the object data to obtain the visual features of the virtual object; and read a compression model corresponding to the visual features and input the object data into the compression model for data compression processing to obtain compressed data.
One or more embodiments of the present specification provide a storage medium storing computer-executable instructions that, when executed by a processor, implement the following procedure: inputting a virtual data set corresponding to a virtual object in the virtual world into an extraction model for object data extraction to obtain the object data of the virtual object; performing visual feature recognition on the virtual object based on the object data to obtain the visual features of the virtual object; and reading a compression model corresponding to the visual features, and inputting the object data into the compression model for data compression processing to obtain compressed data.
Drawings
In order to more clearly illustrate the technical solutions of one or more embodiments of the present specification or of the prior art, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. It is obvious that the drawings described below are only some of the embodiments described in the present specification, and that those skilled in the art may derive other drawings from them without inventive effort. In the drawings:
fig. 1 is a flowchart illustrating a data compression processing method according to one or more embodiments of the present disclosure;
fig. 2 is a processing flow diagram of a data compression processing method applied to a virtual compression scenario according to one or more embodiments of the present specification;
fig. 3 is a schematic diagram of a data compression processing apparatus according to one or more embodiments of the present disclosure;
fig. 4 is a schematic structural diagram of a data compression processing apparatus according to one or more embodiments of the present disclosure.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in one or more embodiments of the present disclosure, the technical solutions in one or more embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in one or more embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all embodiments. All other embodiments that can be derived by a person skilled in the art from one or more of the embodiments described herein without making any inventive step shall fall within the scope of protection of this document.
An embodiment of a data compression processing method provided in this specification:
In practice, three-dimensional data is generally compressed in one of two ways: planar lossy compression, or lossless compression of the three-dimensional data itself. In planar lossy compression, the three-dimensional data is sampled into two-dimensional views from multiple angles and the sampled two-dimensional data is then lossily compressed; when the compressed data is later restored to three-dimensional space, the data quality is poor. Lossless compression of three-dimensional data achieves only a low compression ratio, so the compressed data still occupies a large amount of storage and transmission bandwidth.
Based on this, the data compression processing method provided in this embodiment performs visual feature recognition on the object data of a virtual object in the virtual world to obtain the visual feature type of the virtual object, inputs the object data into the compression model corresponding to that visual feature type for data compression processing, and outputs the compressed data. Because the object data is compressed by a pre-trained compression model dedicated to its visual feature type, virtual objects are compressed by category: objects with different visual features are compressed by different compression models. This achieves a better trade-off between the data accuracy and the data volume of the virtual object and improves data compression quality.
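For concreteness, the overall flow of steps S102 to S106 can be sketched in Python as follows. This is an illustrative sketch only; every model, name, and signature in it is an assumption made for exposition, not part of this disclosure.

    # Illustrative end-to-end sketch of the classified-compression flow.
    import torch

    def compress_virtual_object(virtual_data_set: torch.Tensor,
                                extraction_model,        # object data extraction (step S102)
                                shape_recognizer,        # visual feature recognition (step S104)
                                compression_models: dict):
        """compression_models maps a visual feature type id to its compression model."""
        object_data = extraction_model(virtual_data_set)               # object data of the virtual object
        visual_feature = int(shape_recognizer(object_data).argmax())   # e.g. an object-shape class id
        compression_model = compression_models[visual_feature]         # read the matching compression model
        compressed_data = compression_model(object_data)               # data compression (step S106)
        return compressed_data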
Referring to fig. 1, the data compression processing method provided in this embodiment specifically includes step S102 to step S106.
Step S102, inputting a virtual data set corresponding to a virtual object in the virtual world into an extraction model for object data extraction, and obtaining object data of the virtual object.
The virtual world refers to a virtual simulation world that is realized based on decentralized collaboration and has an open economic system; in the virtual world, decentralized transactions are performed by generating non-homogeneous identifiers (non-fungible tokens), and ownership of virtual assets is traded through these transactions. Specifically, a user in the real world can access the virtual world through an access device to perform decentralized transactions and other behaviors in the virtual world, where the other behaviors include perception of virtual objects. The access device is a device used to access the Virtual world, such as a VR (Virtual Reality) device or an AR (Augmented Reality) device connected to the virtual world, for example a head-mounted VR device. Optionally, a decentralized transaction is performed in the virtual world by generating a non-homogeneous identifier, and ownership of the virtual asset is obtained through the transaction.
The virtual objects include objects displayed in visual form in the virtual world, for example an avatar representing a user in the virtual world, objects constituting the virtual environment, or items arranged in the virtual environment, such as a stone or a tree in the virtual world, or a building in the virtual world. Optionally, the virtual objects include objects in the virtual world that can undergo decentralized transactions and be configured with non-homogeneous identifiers.
The virtual data set is a data set composed of data representing a virtual object in the virtual world; the data constituting the virtual data set may be multi-dimensional data (e.g., three-dimensional data) or point cloud data. The object data includes the data in the virtual data set that characterizes the virtual object itself, for example the foreground data of the virtual object.
Because the virtual data set of a virtual object may include data of non-object content, such as data of the environment in which the virtual object is located, compressing the full set would reduce compression efficiency and degrade the compression result for the virtual object. Therefore, in this embodiment, the virtual data set corresponding to the virtual object is collected in the virtual world, and in the process of compressing the virtual object, object data extraction is first performed on that virtual data set to obtain the object data of the virtual object.
In this embodiment, inputting the virtual data set corresponding to the virtual object in the virtual world into the extraction model to obtain the object data includes inputting the virtual data set into a foreground/background recognition model for foreground-background recognition, and taking the foreground data output by the model as the object data of the virtual object.
In addition, the step of inputting the virtual data set corresponding to the virtual object in the virtual world into the extraction model to obtain the object data of the virtual object may be replaced with a step of directly performing object data extraction on the virtual data set corresponding to the virtual object; this replacement step, together with the other processing steps provided in this embodiment, forms a new implementation.
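As an illustrative sketch of such a foreground/background extraction model, assuming point cloud object data, a per-point classifier, and PyTorch (none of these choices is prescribed by this embodiment):

    import torch
    import torch.nn as nn

    class ForegroundBackgroundModel(nn.Module):
        """Scores each point of a point cloud as foreground (object) or background."""
        def __init__(self, in_dim: int = 3):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(in_dim, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, 1),  # per-point foreground logit
            )

        def forward(self, points: torch.Tensor) -> torch.Tensor:
            # points: (N, 3) point cloud forming the virtual data set
            return self.mlp(points).squeeze(-1)  # (N,) logits

    def extract_object_data(model: ForegroundBackgroundModel,
                            points: torch.Tensor) -> torch.Tensor:
        """Keeps only the points classified as foreground, i.e. the object data."""
        with torch.no_grad():
            mask = torch.sigmoid(model(points)) > 0.5
        return points[mask]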
Step S104, performing visual characteristic identification on the virtual object based on the object data to obtain the visual characteristic of the virtual object.
The visual features of the virtual object include observable characteristics of the virtual object, such as its shape and volume. This embodiment takes shape as an example to describe the process of classifying and compressing virtual objects.
In order to improve the effectiveness and accuracy of the obtained visual features of the virtual object, shape recognition is performed on the virtual object using a shape recognition model obtained by pre-training. In an optional implementation provided by this embodiment, visual feature recognition of the virtual object is implemented as follows:
inputting the object data into a shape recognition model for shape recognition to obtain the object shape of the virtual object;
the shape recognition model is obtained by training based on a labeled object data sample carrying shape labels.
Specifically, the object shape of the virtual object is obtained by performing shape recognition on the virtual object based on the object data and the shape recognition model. It should be noted that the data and samples in this embodiment are data in the virtual world.
In practice, the training of the shape recognition model may be completed in advance, for example on a cloud server. To improve recognition accuracy, the shape recognition model is trained on labeled object data samples carrying shape labels. In an optional implementation provided by this embodiment, the labeled object data samples are determined as follows:
extracting object data from each virtual data set sample to obtain an object data sample set;
inputting each object data sample in the object data sample set into a feature encoder for feature encoding to obtain object features corresponding to each object data sample;
carrying out shape clustering processing on the object data sample set based on the object features to obtain a plurality of shape types and type sample sets under the shape types;
and carrying out shape type marking on each object data sample in the type sample set to obtain the marked object data sample.
In this embodiment, to improve the accuracy and effectiveness of the features produced by the feature encoder, the feature encoder is trained as part of a data reconstruction network. During training of the data reconstruction network, a three-dimensional CNN (Convolutional Neural Network) serves as the backbone; the network consists of two parts, a feature encoder and a decoder. Each object data sample in the object data sample set is input into the feature encoder to obtain the object features it outputs, and these object features are input into the decoder for data reconstruction to obtain reconstructed object data. The data reconstruction network is trained with the Euclidean distance between each object data sample and its corresponding reconstructed object data as the loss function, until the network converges; the feature encoder of the trained data reconstruction network is then retained. The training of the data reconstruction network may also be completed in advance, for example on a cloud server.
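A minimal sketch of such a data reconstruction network, assuming voxelized object data of size 32x32x32 and PyTorch (the layer sizes are illustrative assumptions):

    import math
    import torch
    import torch.nn as nn

    class FeatureEncoder(nn.Module):
        """Three-dimensional CNN backbone that maps object data to object features."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Flatten(),  # (B, 32 * 8 * 8 * 8) for 32^3 input
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    class ReconstructionDecoder(nn.Module):
        """Reconstructs object data from the encoded object features."""
        def __init__(self, feat_dim: int = 32 * 8 * 8 * 8, out_shape=(1, 32, 32, 32)):
            super().__init__()
            self.out_shape = out_shape
            self.net = nn.Linear(feat_dim, math.prod(out_shape))

        def forward(self, z: torch.Tensor) -> torch.Tensor:
            return self.net(z).view(-1, *self.out_shape)

    def reconstruction_step(encoder, decoder, optimizer, sample):
        """One training step; sample is a (B, 1, 32, 32, 32) voxel grid."""
        features = encoder(sample)
        reconstruction = decoder(features)
        loss = torch.norm(reconstruction - sample, p=2)  # Euclidean distance loss
        optimizer.zero_grad(); loss.backward(); optimizer.step()
        return loss.item()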
The labeled object data samples are determined through unsupervised clustering. Specifically, because shapes cannot be classified accurately from the raw object data samples, after object data extraction is performed on each virtual data set sample to obtain the object data sample set, feature recognition is performed on each object data sample to obtain its object features, and the object data samples in the object data sample set are then clustered based on these object features to obtain a plurality of shape types and the type sample set under each shape type. Optionally, the shape clustering processing is performed on the object data sample set using a clustering algorithm based on the object features; the clustering algorithm includes a K-means clustering algorithm, as sketched below.
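A sketch of the shape clustering step, assuming the encoded object features are NumPy arrays and that scikit-learn's K-means is an acceptable stand-in (the number of clusters is illustrative):

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_shapes(object_features: np.ndarray, k: int = 10):
        """object_features: (num_samples, feat_dim) array from the feature encoder.
        Returns the shape type of each sample and the type sample set per shape type."""
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(object_features)
        type_sample_sets = {t: set(np.flatnonzero(labels == t).tolist()) for t in range(k)}
        return labels, type_sample_sets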
After the plurality of shape types and the type sample set under each shape type are obtained, the type sample sets are updated in order to correct clustering deviations of the object data samples introduced by the shape clustering processing. The updating includes, but is not limited to, merging type sample sets of similar types, transferring object data samples between type sample sets, and deleting noise data from type sample sets. In an optional implementation provided by this embodiment, the type sample sets are updated as follows:
according to a merging instruction for at least two shape types among the plurality of shape types, merging the type sample sets under the at least two shape types;
alternatively,
according to a shape type switching instruction for a target object data sample under any shape type, transferring the target object data sample from the type sample set under that shape type to the type sample set under the target shape type;
alternatively,
according to a deletion instruction for any object data sample under any shape type, deleting that object data sample from the type sample set under that shape type.
Specifically, after the plurality of shape types and the type sample set under each shape type are obtained, the type sample sets are updated, and shape labeling is performed on the object data samples in the corresponding type sample sets based on the updated shape types, so as to obtain the labeled object data samples. A minimal sketch of the three update operations follows.
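The sketch below assumes each type sample set is represented as a Python set of sample ids; this representation is an assumption, not part of the disclosure.

    def merge_types(type_sets: dict, keep: int, absorb: int) -> None:
        """Merge the type sample set of shape type `absorb` into that of `keep`."""
        type_sets[keep] |= type_sets.pop(absorb)

    def transfer_sample(type_sets: dict, sample_id: int, src: int, dst: int) -> None:
        """Transfer one object data sample from shape type src to shape type dst."""
        type_sets[src].discard(sample_id)
        type_sets[dst].add(sample_id)

    def delete_sample(type_sets: dict, sample_id: int, t: int) -> None:
        """Delete a noisy object data sample from the type sample set of shape type t."""
        type_sets[t].discard(sample_id)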
After the labeled object data samples are obtained, the shape recognition model is trained on them so that shape recognition can subsequently be performed on object data with the trained model. The shape recognition model is trained as a multi-class classifier with a ResNet18 structure and a multi-class softmax loss function until the model converges; its input is object data and its output is an object shape.
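An illustrative training sketch for the shape recognition model, using torchvision's 2D ResNet18 as a stand-in backbone; the embodiment's model operates on object data in the virtual world, so the input pipeline here is an assumption:

    import torch.nn as nn
    from torchvision.models import resnet18

    def build_shape_recognizer(num_shape_types: int) -> nn.Module:
        model = resnet18(weights=None)  # multi-classification ResNet18 structure
        model.fc = nn.Linear(model.fc.in_features, num_shape_types)
        return model

    def train_shape_recognizer(model, loader, optimizer):
        criterion = nn.CrossEntropyLoss()  # multi-class softmax loss function
        for object_data, shape_label in loader:  # labeled object data samples
            optimizer.zero_grad()
            loss = criterion(model(object_data), shape_label)
            loss.backward()
            optimizer.step()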
In the process of shape recognition, key data characterizing the outer region of the virtual object are extracted from the object data, external features of the virtual object are constructed based on the key data, and the object shape of the virtual object is determined based on the external features. For example, the key data of the outer region of the virtual object are extracted from the object data, the external contour of the virtual object is drawn based on the key data, and the object shape corresponding to that contour is determined as the object shape of the virtual object.
Step S106, reading a compression model corresponding to the visual features, and inputting the object data into the compression model for data compression processing to obtain compressed data.
First, object data extraction is performed on the virtual data set corresponding to the virtual object in the virtual world to obtain the object data of the virtual object, and visual feature recognition is then performed on the virtual object based on the object data to obtain its visual features. On this basis, the compression model corresponding to the visual features is read, and the object data is input into the compression model for data compression processing to obtain compressed data. Optionally, reading the compression model corresponding to the visual features includes reading it from a compression model set, where the compression model set includes the compression models corresponding to the respective visual features, and the compression model corresponding to a visual feature is composed of the compression encoder and decoder corresponding to that visual feature.
In this embodiment, a multi-network, multi-task compression model set is designed, and the object data of virtual objects with different visual features is compressed by this compression model set, which ensures the data compression effect and the quality of the compressed data. A compression model is trained for each visual feature. In practical applications, the training of each compression model in the compression model set may be completed in advance, for example on a cloud server.
In an optional implementation provided by this embodiment, the training process of a compression model is described by taking the visual features of the virtual object as an example. Specifically, the compression model corresponding to the visual features is obtained by training as follows:
inputting each object data sample in the type sample set under the visual features into a compression encoder in a model to be trained for compression encoding, and outputting a compressed sample of each object data sample;
inputting the compressed samples into a decoder in the model to be trained for data reconstruction, and outputting reconstructed data of each object data sample;
and calculating training loss based on the reconstructed data and the object data sample, adjusting parameters of the model to be trained based on the training loss, and obtaining the compression model after training is completed.
The model to be trained is, for example, a model with a UNET structure, comprising a compression encoder part and a decoder part. The input of the compression encoder is object data and its output is compressed encoded data; the input of the decoder is the compressed encoded data and its output is the compressed data reconstructed from the compressed encoded data.
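A minimal sketch of one compression model (compression encoder plus decoder) and its training step; plain convolutional stacks stand in for the UNET structure, and the L2 reconstruction loss is an assumption:

    import torch
    import torch.nn as nn

    class CompressionModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(  # object data -> compressed encoded data
                nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(8, 4, 3, stride=2, padding=1),
            )
            self.decoder = nn.Sequential(  # compressed encoded data -> compressed data
                nn.ConvTranspose3d(4, 8, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1),
            )

        def forward(self, object_data: torch.Tensor):
            encoded = self.encoder(object_data)   # compression encoding
            return self.decoder(encoded), encoded

    def compression_train_step(model, optimizer, object_data):
        reconstructed, _ = model(object_data)
        loss = nn.functional.mse_loss(reconstructed, object_data)  # training loss
        optimizer.zero_grad(); loss.backward(); optimizer.step()
        return loss.item()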
After the compression model corresponding to each visual feature has been trained in the above manner, the number of compression models in the compression model set can be reduced without sacrificing compression quality. In this embodiment, the compression model set is determined as follows:
performing model training based on the type sample set under each visual characteristic to obtain a compression model corresponding to each visual characteristic;
calculating gradient correlation of the compression model corresponding to each visual characteristic;
if at least two compression models with gradient correlation larger than a preset threshold exist, merging type sample sets under visual characteristics corresponding to the at least two compression models;
and performing model training based on a merging type sample set obtained by merging processing to obtain a compression model corresponding to the updated visual features.
Further, in an optional implementation provided by this embodiment, when model training is performed on the merged type sample set obtained by the merging processing to obtain the compression model corresponding to the updated visual features, a shared compression encoder is trained on the object data samples in the merged type sample set, and a decoder for each visual feature is trained on the object data samples of that visual feature in the merged set, yielding one compression encoder and at least two decoders.
Specifically, after the compression model corresponding to each visual feature is obtained through training, the gradient correlation of the compression models is calculated on the same batch of data, which may be the type sample sets under the respective visual features, in order to reduce the number of compression encoders in the compression model set. The type sample sets of visual features whose compression models have a gradient correlation greater than a preset threshold are merged; the merged type sample sets share one compression encoder, while the decoders remain independent per visual feature.
In the process of model training based on the merged type sample set, the same compression encoder is trained on the merged type sample set, and a decoder for each visual feature is trained on the object data samples of the different visual features in the merged set. For example, the gradient correlation between the compression model corresponding to the shape of a computer host and the compression model corresponding to the shape of a printer may be greater than 95%; likewise, the gradient correlation between the compression models corresponding to the shapes of a 1000 ml beverage bottle and a 500 ml beverage bottle may be greater than 95%.
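The embodiment does not define how gradient correlation is computed; one plausible sketch, assuming cosine similarity of the flattened parameter gradients produced by the same batch, is:

    import torch
    import torch.nn.functional as F

    def gradient_correlation(model_a, model_b, batch, loss_fn) -> float:
        """Runs the same batch through both compression models and compares gradients."""
        grads = []
        for model in (model_a, model_b):
            model.zero_grad()
            reconstructed, _ = model(batch)  # models as in CompressionModel above
            loss_fn(reconstructed, batch).backward()
            grads.append(torch.cat([p.grad.flatten() for p in model.parameters()]))
        return F.cosine_similarity(grads[0], grads[1], dim=0).item()

    # If gradient_correlation(m1, m2, batch, F.mse_loss) exceeds the preset
    # threshold, the two type sample sets are merged and retrained jointly.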
Taking as an example the case where the gradient correlation between the compression model corresponding to a first visual feature and the compression model corresponding to a second visual feature is greater than the preset threshold, the process of model training based on the merged type sample set is as follows (a training sketch follows the list):
training a compression encoder and a first decoder based on first object data samples in the merged type sample set and training the compression encoder and a second decoder based on second object data samples in the merged type sample set;
obtaining, through training, a compression encoder shared by the first visual feature and the second visual feature, a first decoder corresponding to the first visual feature, and a second decoder corresponding to the second visual feature;
the first object data sample is an object data sample under the first visual characteristic, and the second object data sample is an object data sample under the second visual characteristic.
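An illustrative sketch of the joint training after merging, with one shared compression encoder and one decoder per visual feature; the batch tagging and optimizer setup are assumptions:

    import torch.nn as nn

    def train_merged(encoder, first_decoder, second_decoder, loader, optimizer):
        """loader yields (object_data, feature_id) with feature_id in {1, 2};
        optimizer covers the parameters of the encoder and both decoders."""
        for object_data, feature_id in loader:
            decoder = first_decoder if feature_id == 1 else second_decoder
            reconstructed = decoder(encoder(object_data))
            loss = nn.functional.mse_loss(reconstructed, object_data)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()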
In a specific implementation, after the visual features of the virtual object are obtained, the compression model corresponding to the visual features is read, and the object data is input into the compression model for data compression processing to obtain the compressed data. Since the compression model set contains multiple compression encoders and multiple decoders, and their numbers are not necessarily equal, the compression model corresponding to the visual features must be read first. In an optional implementation provided by this embodiment, it is read as follows:
reading a compression encoder and a decoder corresponding to the visual features from a compression model set;
and constructing a compression model corresponding to the visual characteristics based on the read compression encoder and decoder.
Specifically, a compression encoder and a decoder corresponding to the visual features are read from the compression model set, a compression model composed of the read compression encoder and the read decoder is obtained, and the composed compression model is used as the compression model corresponding to the visual features.
Further, after the compression model corresponding to the visual features is read, data compression processing is carried out on the object data based on the compression model. In an optional implementation manner provided by this embodiment, inputting the object data into the compression model to perform data compression processing to obtain compressed data includes:
inputting the object data into the compression encoder for compression encoding to obtain encoded compressed data output by the compression encoder;
and inputting the coded compressed data into the decoder for data decoding to obtain the compressed data.
Specifically, in the process of inputting the object data into the compression model for data compression processing, the object data is input into the read compression encoder for compression encoding to obtain the encoded compressed data, and the encoded compressed data is input into the read decoder for data decoding to obtain the compressed data; the compressed data is the data obtained by compressing the object data.
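A sketch of this inference path, assuming the compression model set is a mapping from visual feature to a (compression encoder, decoder) pair:

    import torch

    def compress(object_data, visual_feature, compression_model_set: dict):
        """compression_model_set: visual feature -> (compression_encoder, decoder)."""
        encoder, decoder = compression_model_set[visual_feature]  # read the compression model
        with torch.no_grad():
            encoded_compressed_data = encoder(object_data)        # compression encoding
            compressed_data = decoder(encoded_compressed_data)    # data decoding
        return compressed_data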
In addition, step S106 of reading the compression model corresponding to the visual features and inputting the object data into it may be replaced with a step of inputting the object data into the compression model corresponding to the visual features for data compression processing to obtain the compressed data; this replacement step, together with the other processing steps provided in this embodiment, forms a new implementation.
In summary, in the data compression processing method provided in this embodiment, object data extraction is first performed on the virtual data set corresponding to the virtual object in the virtual world to obtain the object data of the virtual object; this improves the accuracy of the compressed virtual object and avoids the poor compressed-data quality that results from also compressing data of the background region. Further, to improve the compression effect and avoid the poor results of compressing the object data of all virtual objects in the same way, shape recognition is performed on the virtual object based on the object data to obtain the shape of the virtual object, and the object data is then input into the compression model corresponding to that shape for data compression to obtain the compressed data of the virtual object. This classified compression by shape improves both the compression effect and the quality of the compressed data.
The following takes an application of the data compression processing method provided in this embodiment to a virtual compression scenario as an example, and further describes the data compression processing method provided in this embodiment, with reference to fig. 2, the data compression processing method applied to the virtual compression scenario specifically includes the following steps.
Step S202, a virtual data set corresponding to a virtual object in the virtual world is acquired.
And step S204, screening out object data in the virtual data set based on the foreground and background classifier.
Specifically, the virtual data set is input into a foreground and background classifier for foreground and background classification, and object data output by the foreground and background classifier is obtained. The foreground and background classifier can also be a foreground and background recognition model.
Step S206, inputting the object data into the shape recognition model for shape recognition, and obtaining the object shape of the virtual object.
In step S208, the compression encoder and decoder corresponding to the object shape are read.
Step S210, the object data is input into a compression encoder for compression encoding, and encoded compressed data is obtained.
Step S212, inputting the coded compressed data into a decoder for data decoding, and obtaining the compressed data of the virtual object.
In addition, the above steps S208 to S212 may be replaced by inputting the object data into the compression model corresponding to the object shape for data compression processing to obtain the compressed data of the virtual object; this forms a new implementation together with the other processing steps provided in this embodiment.
An embodiment of a data compression processing apparatus provided in this specification is as follows:
in the foregoing embodiment, a data compression processing method is provided, and correspondingly, a data compression processing apparatus is also provided, which is described below with reference to the accompanying drawings.
Referring to fig. 3, a schematic diagram of a data compression processing apparatus provided in this embodiment is shown.
Since the device embodiments correspond to the method embodiments, the description is relatively simple, and the relevant portions may refer to the corresponding description of the method embodiments provided above. The device embodiments described below are merely illustrative.
The present embodiment provides a data compression processing apparatus, including:
a data extraction module 302 configured to input a virtual data set corresponding to a virtual object in a virtual world into an extraction model for object data extraction, so as to obtain object data of the virtual object;
a visual characteristic identification module 304 configured to perform visual characteristic identification on the virtual object based on the object data to obtain a visual characteristic of the virtual object;
and a data compression module 306 configured to read the compression model corresponding to the visual characteristic, and input the object data into the compression model to perform data compression processing to obtain compressed data.
An embodiment of a data compression processing apparatus provided in this specification is as follows:
corresponding to the above-described data compression processing method, based on the same technical concept, one or more embodiments of the present specification further provide a data compression processing apparatus, where the data compression processing apparatus is configured to execute the above-described data compression processing method, and fig. 4 is a schematic structural diagram of the data compression processing apparatus provided in one or more embodiments of the present specification.
The present embodiment provides a data compression processing apparatus, including:
as shown in fig. 4, the data compression processing apparatus may have a relatively large difference due to different configurations or performances, and may include one or more processors 401 and a memory 402, where one or more stored applications or data may be stored in the memory 402. Memory 402 may be, among other things, transient storage or persistent storage. The application program stored in memory 402 may include one or more modules (not shown), each of which may include a series of computer-executable instructions in a data compression processing device. Still further, the processor 401 may be configured to communicate with the memory 402 to execute a series of computer-executable instructions in the memory 402 on a data compression processing device. The data compression processing apparatus may also include one or more power supplies 403, one or more wired or wireless network interfaces 404, one or more input/output interfaces 405, one or more keyboards 406, and the like.
In one particular embodiment, a data compression processing apparatus includes a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions for the data compression processing apparatus, and the one or more programs configured to be executed by the one or more processors include computer-executable instructions for:
inputting a virtual data set corresponding to a virtual object in a virtual world into an extraction model for object data extraction to obtain object data of the virtual object;
performing visual characteristic identification on the virtual object based on the object data to obtain visual characteristics of the virtual object;
and reading a compression model corresponding to the visual characteristics, and inputting the object data into the compression model to perform data compression processing to obtain compressed data.
An embodiment of a storage medium provided in this specification is as follows:
on the basis of the same technical concept, one or more embodiments of the present specification further provide a storage medium corresponding to the data compression processing method described above.
The storage medium provided in this embodiment is used to store computer-executable instructions, and when the computer-executable instructions are executed by the processor, the following processes are implemented:
inputting a virtual data set corresponding to a virtual object in a virtual world into an extraction model to extract object data, and obtaining object data of the virtual object;
performing visual feature recognition on the virtual object based on the object data to obtain visual features of the virtual object;
and reading a compression model corresponding to the visual characteristics, and inputting the object data into the compression model to perform data compression processing to obtain compressed data.
It should be noted that the embodiment related to the storage medium in this specification and the embodiment related to the data compression processing method in this specification are based on the same inventive concept, and therefore, for specific implementation of this embodiment, reference may be made to implementation of the foregoing corresponding method, and repeated parts are not described again.
The foregoing description of specific embodiments has been presented for purposes of illustration and description. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
In the 1990s, an improvement in a technology could be clearly distinguished as an improvement in hardware (for example, an improvement in a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement in a method flow). However, as technology develops, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized with a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, such programming is now mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must be written in a specific programming language, called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can be readily obtained merely by slightly logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner; for example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer-readable program code, the same functionality can be implemented by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be considered a hardware component, and the means included in it for performing the various functions may also be considered structures within the hardware component; or even the means for performing the functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, respectively. Of course, the functions of the units may be implemented in the same software and/or hardware or in multiple software and/or hardware when implementing the embodiments of the present description.
One skilled in the art will recognize that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of this document and is not intended to limit this document. Various modifications and changes may occur to those skilled in the art from this document. Any modifications, equivalents, improvements, etc. which come within the spirit and principle of the disclosure are intended to be included within the scope of the claims of this document.

Claims (15)

1. A data compression processing method comprises the following steps:
inputting a virtual data set corresponding to a virtual object in a virtual world into an extraction model to extract object data, and obtaining object data of the virtual object;
performing visual feature recognition on the virtual object based on the object data to obtain visual features of the virtual object;
and reading a compression model corresponding to the visual characteristics, and inputting the object data into the compression model to perform data compression processing to obtain compressed data.
2. The data compression processing method according to claim 1, wherein the performing visual feature recognition on the virtual object based on the object data to obtain the visual feature of the virtual object comprises:
inputting the object data into a shape recognition model for shape recognition to obtain the object shape of the virtual object;
the shape recognition model is obtained by training based on a labeled object data sample carrying shape labels.
3. The data compression processing method according to claim 2, wherein the labeled object data samples are determined by:
extracting object data from each virtual data set sample to obtain an object data sample set;
inputting each object data sample in the object data sample set into a feature encoder for feature encoding, to obtain object features corresponding to each object data sample;
performing shape clustering processing on the object data sample set based on the object features, to obtain a plurality of shape types and a type sample set under each shape type;
and labeling each object data sample in each type sample set with its shape type, to obtain the labeled object data samples.
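
The labeling procedure of claim 3 can be pictured as follows. This sketch assumes the feature encoder is any mapping from object data to fixed-length vectors (a random projection stands in for it here) and uses k-means as the clustering algorithm, which claim 11 leaves open; all names and dimensions are illustrative.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    object_samples = rng.normal(size=(200, 64))    # stand-in object data sample set
    projection = rng.normal(size=(64, 16))
    object_features = object_samples @ projection  # stand-in feature encoder output

    # Shape clustering: each cluster index becomes a shape type.
    kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(object_features)
    shape_labels = kmeans.labels_

    # Group samples by shape type; these per-type sets are then labeled
    # (and optionally merged, moved, or pruned per claim 4) to yield the
    # labeled object data samples.
    type_sample_sets = {t: object_samples[shape_labels == t] for t in range(5)}
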
4. The data compression processing method according to claim 3, wherein after the shape clustering processing is performed on the object data sample set based on the object features to obtain the plurality of shape types and the type sample set under each shape type, and before each object data sample in each type sample set is labeled to obtain the labeled object data samples, the method further comprises:
merging, according to a merging instruction for at least two shape types among the plurality of shape types, the type sample sets under the at least two shape types;
and/or,
transferring, according to a shape type switching instruction for a target object data sample under any shape type, the target object data sample from the type sample set under that shape type to the type sample set under a target shape type;
and/or,
deleting, according to a deletion instruction for any object data sample under any shape type, that object data sample from the type sample set under that shape type.
5. The data compression processing method according to claim 1, wherein the compression model is trained as follows:
inputting each object data sample in the type sample set under the visual features into a compression encoder in a model to be trained for compression encoding, and outputting a compressed sample of each object data sample;
inputting the compressed samples into a decoder in the model to be trained for data reconstruction, and outputting reconstructed data of each object data sample;
and calculating a training loss based on the reconstructed data and the object data samples, adjusting parameters of the model to be trained based on the training loss, and obtaining the compression model after training is completed.
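
Read as an autoencoder, the training procedure of claim 5 reduces to a standard reconstruction loop: the compression encoder produces compressed samples, the decoder reconstructs them, and the reconstruction error serves as the training loss. The PyTorch sketch below makes that reading concrete; the architecture, dimensions, and choice of loss are assumptions, not what the specification prescribes.

    import torch
    import torch.nn as nn

    # Compression encoder and decoder of the model to be trained
    # (dimensions are illustrative).
    encoder = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 8))
    decoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 64))
    optimizer = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

    object_data_samples = torch.randn(256, 64)  # stand-in type sample set

    for epoch in range(10):
        compressed_samples = encoder(object_data_samples)  # compressed samples
        reconstructed = decoder(compressed_samples)        # reconstructed data
        loss = nn.functional.mse_loss(reconstructed, object_data_samples)
        optimizer.zero_grad()
        loss.backward()   # the training loss drives parameter adjustment
        optimizer.step()
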
6. The data compression processing method according to claim 1, wherein the reading of the compression model corresponding to the visual features comprises:
reading the compression model corresponding to the visual features from a compression model set;
wherein the compression model set comprises a compression model corresponding to each visual feature, and the compression model corresponding to the visual features is formed by a compression encoder and a compression decoder corresponding to the visual features.
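
One plausible realization of the compression model set of claim 6 is a mapping from visual feature to an (encoder, decoder) pair, as sketched below with toy callables in place of trained networks; the data structure is an illustrative assumption.

    # Toy compression model set: visual feature -> (encoder, decoder) pair.
    compression_model_set = {
        "humanoid": (lambda x: x / 2, lambda z: z * 2),
        "vehicle":  (lambda x: x - 1, lambda z: z + 1),
    }

    def read_compression_model(visual_feature):
        # Look up the encoder/decoder pair registered for the feature.
        return compression_model_set[visual_feature]

    enc, dec = read_compression_model("humanoid")
    print(dec(enc(10.0)))  # 10.0 -- the toy pair round-trips its input
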
7. The data compression processing method according to claim 6, wherein each compression model in the compression model set is determined as follows:
performing model training based on the type sample set under each visual feature, to obtain the compression model corresponding to each visual feature;
calculating gradient correlations between the compression models corresponding to the respective visual features;
if at least two compression models whose gradient correlation is greater than a preset threshold exist, merging the type sample sets under the visual features corresponding to the at least two compression models;
and performing model training based on the merged type sample set obtained by the merging processing, to obtain a compression model corresponding to the updated visual features.
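
Claim 7 does not define the gradient correlation metric; one natural reading is the cosine similarity between the flattened loss gradients of two compression models on a common batch, as in this sketch. The models, batch, and threshold value are all illustrative assumptions.

    import torch
    import torch.nn as nn

    def flat_grad(model, batch):
        # Loss gradient of a toy reconstruction objective, flattened
        # into a single vector.
        model.zero_grad()
        loss = nn.functional.mse_loss(model(batch), batch)
        loss.backward()
        return torch.cat([p.grad.reshape(-1) for p in model.parameters()])

    model_a, model_b = nn.Linear(32, 32), nn.Linear(32, 32)
    batch = torch.randn(64, 32)

    correlation = torch.cosine_similarity(
        flat_grad(model_a, batch), flat_grad(model_b, batch), dim=0)
    if correlation > 0.9:  # preset threshold (assumed value)
        print("merge the two type sample sets and retrain a shared model")
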
8. The data compression processing method according to claim 7, wherein the performing of model training based on the merged type sample set obtained by the merging processing to obtain the compression model corresponding to the updated visual features comprises:
training a decoder for each visual feature based on the object data samples corresponding to that visual feature in the merged type sample set, to obtain one compression encoder and at least two decoders.
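
The outcome of claim 8 can be pictured as one shared compression encoder serving the merged sample set while each original visual feature retains its own decoder. A minimal sketch, with assumed dimensions:

    import torch.nn as nn

    shared_encoder = nn.Linear(64, 8)       # one compression encoder
    decoders = {                            # one decoder per visual feature
        "feature_a": nn.Linear(8, 64),
        "feature_b": nn.Linear(8, 64),
    }
    # At inference time the object's visual feature selects the decoder:
    # reconstructed = decoders[feature](shared_encoder(object_data))
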
9. The data compression processing method according to claim 1, wherein the reading of the compression model corresponding to the visual features comprises:
reading a compression encoder and a decoder corresponding to the visual features from a compression model set;
and constructing the compression model corresponding to the visual features based on the read compression encoder and decoder.
10. The data compression processing method according to claim 9, wherein the inputting of the object data into the compression model for data compression processing to obtain compressed data comprises:
inputting the object data into the compression encoder for compression encoding, to obtain encoded compressed data output by the compression encoder;
and inputting the encoded compressed data into the decoder for data decoding, to obtain the compressed data.
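
Claims 9 and 10 recite constructing the compression model from the retrieved encoder/decoder pair and then running the object data through both stages. A minimal PyTorch sketch of that flow, with illustrative modules:

    import torch
    import torch.nn as nn

    class CompressionModel(nn.Module):
        # Compression model assembled from a retrieved encoder/decoder pair.
        def __init__(self, encoder, decoder):
            super().__init__()
            self.encoder, self.decoder = encoder, decoder

        def forward(self, object_data):
            encoded = self.encoder(object_data)  # compression encoding (claim 10, step 1)
            return self.decoder(encoded)         # decoding yields the compressed data

    model = CompressionModel(nn.Linear(64, 8), nn.Linear(8, 64))
    compressed = model(torch.randn(1, 64))
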
11. The data compression processing method according to claim 3, wherein the performing of shape clustering processing on the object data sample set based on the object features comprises:
performing, based on the object features, shape clustering processing on the object data sample set using a clustering algorithm.
12. The data compression processing method according to claim 1, wherein decentralized transactions are carried out in the virtual world by generating non-fungible identifiers, and ownership of virtual assets is acquired through the transactions;
wherein the virtual objects comprise objects in the virtual world that can be handled through decentralized transactions and are configured with non-fungible identifiers.
13. A data compression processing apparatus comprising:
a data extraction module configured to input a virtual data set corresponding to a virtual object in a virtual world into an extraction model for object data extraction, to obtain object data of the virtual object;
a visual feature recognition module configured to perform visual feature recognition on the virtual object based on the object data, to obtain visual features of the virtual object;
and a data compression module configured to read a compression model corresponding to the visual features, and to input the object data into the compression model for data compression processing, to obtain compressed data.
14. A data compression processing apparatus comprising:
a processor; and
a memory configured to store computer-executable instructions that, when executed, cause the processor to:
input a virtual data set corresponding to a virtual object in a virtual world into an extraction model for object data extraction, to obtain object data of the virtual object;
perform visual feature recognition on the virtual object based on the object data, to obtain visual features of the virtual object;
and read a compression model corresponding to the visual features, and input the object data into the compression model for data compression processing, to obtain compressed data.
15. A storage medium storing computer-executable instructions that, when executed by a processor, implement the following:
inputting a virtual data set corresponding to a virtual object in a virtual world into an extraction model for object data extraction, to obtain object data of the virtual object;
performing visual feature recognition on the virtual object based on the object data, to obtain visual features of the virtual object;
and reading a compression model corresponding to the visual features, and inputting the object data into the compression model for data compression processing, to obtain compressed data.
CN202211148120.7A 2022-09-20 2022-09-20 Data compression processing method and device Pending CN115499635A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211148120.7A CN115499635A (en) 2022-09-20 2022-09-20 Data compression processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211148120.7A CN115499635A (en) 2022-09-20 2022-09-20 Data compression processing method and device

Publications (1)

Publication Number Publication Date
CN115499635A 2022-12-20

Family

ID=84470776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211148120.7A Pending CN115499635A (en) 2022-09-20 2022-09-20 Data compression processing method and device

Country Status (1)

Country Link
CN (1) CN115499635A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020181792A1 (en) * 1999-12-20 2002-12-05 Shouichi Kojima Image data compressing method and restoring method
JP2003046999A (en) * 2001-07-30 2003-02-14 Toshiba Corp Image monitoring system, monitored image distributing method therefor and camera therefor using network
CN1629888A (en) * 2003-12-17 2005-06-22 中国科学院自动化研究所 A skeletonized object rebuild method
JP2006185354A (en) * 2004-12-28 2006-07-13 Nikon Corp Residual capacity management device, compression function-equipped memory card and external equipment
US20130024545A1 (en) * 2010-03-10 2013-01-24 Tangentix Limited Multimedia content delivery system
US20160173882A1 (en) * 2014-12-15 2016-06-16 Miovision Technologies Incorporated System and Method for Compressing Video Data
US20170251214A1 (en) * 2016-02-26 2017-08-31 Versitech Limited Shape-adaptive model-based codec for lossy and lossless compression of images
US20210192796A1 (en) * 2017-10-17 2021-06-24 Nokia Technologies Oy An Apparatus, A Method And A Computer Program For Volumetric Video
US20210297491A1 (en) * 2018-08-07 2021-09-23 Signify Holding B.V. Systems and methods for compressing sensor data using clustering and shape matching in edge nodes of distributed computing networks
US20200312015A1 (en) * 2019-04-01 2020-10-01 Microsoft Technology Licensing, Llc Depth-compressed representation for 3d virtual scene

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115953559A * 2023-01-09 2023-04-11 Alipay (Hangzhou) Information Technology Co., Ltd. Virtual object processing method and device
CN115953559B * 2023-01-09 2024-04-12 Alipay (Hangzhou) Information Technology Co., Ltd. Virtual object processing method and device

Similar Documents

Publication Publication Date Title
Zhang et al. Asymmetric two-stream architecture for accurate RGB-D saliency detection
CN109658455B (en) Image processing method and processing apparatus
CN107957989B (en) Cluster-based word vector processing method, device and equipment
CN111401273B (en) User feature extraction system and device for privacy protection
CN114238904B (en) Identity recognition method, and training method and device of dual-channel hyper-resolution model
CN110688897A (en) Pedestrian re-identification method and device based on joint judgment and generation learning
CN112308113A (en) Target identification method, device and medium based on semi-supervision
CN114581710A (en) Image recognition method, device, equipment, readable storage medium and program product
CN115499635A (en) Data compression processing method and device
CN114358243A (en) Universal feature extraction network training method and device and universal feature extraction network
CN110390015B (en) Data information processing method, device and system
CN115374298A (en) Index-based virtual image data processing method and device
CN116883737A (en) Classification method, computer device, and storage medium
CN115358777A (en) Advertisement putting processing method and device of virtual world
CN115393022A (en) Cross-domain recommendation processing method and device
CN115810073A (en) Virtual image generation method and device
CN115439912A (en) Method, device, equipment and medium for recognizing expression
CN115048661A (en) Model processing method, device and equipment
Karczmarek et al. Chain code-based local descriptor for face recognition
CN115953559B (en) Virtual object processing method and device
CN113298892A (en) Image coding method and device, and storage medium
CN115359219B (en) Virtual world virtual image processing method and device
CN111539520A (en) Method and device for enhancing robustness of deep learning model
CN116188731A (en) Virtual image adjusting method and device of virtual world
CN115827935B (en) Data processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination