CN113344778B - Imaging control method, device and equipment and computer readable storage medium - Google Patents

Publication number: CN113344778B (granted; other version: CN113344778A)
Application number: CN202110894408.8A
Authority: CN (China); original language: Chinese (zh)
Legal status: Active
Inventors: 唐浩 (Tang Hao), 陆豪放 (Lu Haofang)
Applicant / current assignee: Chengdu Tishi Technology Co., Ltd.
Classification: G06T3/18
Prior art keywords: target, control signal, imaging, model, imaged
Abstract

The invention discloses an imaging control method, an imaging control device, imaging control equipment and a computer readable storage medium, wherein the imaging control method comprises the following steps: collecting two-dimensional RGB data of a target to be imaged; determining target control signal characteristics corresponding to the two-dimensional RGB data by using a target control signal model; the target control signal characteristic is the characteristic of deformation imaging control or space change imaging control on a target to be imaged; and performing imaging control operation on the target to be imaged according to the two-dimensional RGB data and the target control signal characteristics by using the target imaging model to obtain a target image with corresponding dimensionality. The invention improves the imaging control convenience, reduces the research and development threshold and saves the cost.

Description

Imaging control method, device and equipment and computer readable storage medium
Technical Field
The present invention relates to the field of imaging technologies, and in particular, to an imaging control method, an imaging control apparatus, an imaging control device, and a computer-readable storage medium.
Background
With the rapid development of artificial intelligence technology, imaging is required in many business scenarios (such as face recognition) to realize the relevant intelligent control.
Most existing imaging-construction technologies rely on modeling a three-dimensional imaging target; in application, the difference between two-dimensional and three-dimensional presentation effects is realized by rotating or deforming the three-dimensional imaging target. This technical path carries a huge labor cost: the three-dimensional model of the imaging target must be built with very precise manual work; accurate matching of the imaging target's texture map to the three-dimensional model, and its projection into a two-dimensional effect, must be achieved through point matching, covering, rendering and similar techniques; property changes of the three-dimensional model such as deformation must also be realized; and every step demands a finely matched model. This not only brings a very large cost overhead to research and development, but also makes development very difficult because of the precision required of the model.
In summary, how to effectively solve problems such as the high research and development threshold and high labor cost caused by the existing imaging control methods' demand for high-precision models is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide an imaging control method, which improves the imaging control convenience, reduces the research and development threshold and saves the cost; another object of the present invention is to provide an imaging control apparatus, a device, and a computer-readable storage medium.
In order to solve the technical problems, the invention provides the following technical scheme:
an imaging control method comprising:
collecting two-dimensional RGB data of a target to be imaged;
determining target control signal characteristics corresponding to the two-dimensional RGB data by using a target control signal model; the target control signal characteristic is the characteristic of deformation imaging control or space change imaging control on the target to be imaged;
and carrying out imaging control operation on the target to be imaged according to the two-dimensional RGB data and the target control signal characteristics by using a target imaging model to obtain a target image with corresponding dimensionality.
In a specific embodiment of the present invention, the method further includes a training process of the target control signal model, where the training process of the target control signal model includes:
marking the change characteristics of the three-dimensional target to be imaged to obtain marking control signal characteristics;
determining form information of the characteristic of the labeling control signal;
selecting a depth model architecture according to the morphological information, and constructing an original control signal model according to the selected depth model architecture;
and performing iterative training on the original control signal model by using the characteristic of the labeled control signal to obtain the target control signal model.
In a specific embodiment of the present invention, labeling a variation characteristic of a three-dimensional target to be imaged to obtain a labeling control signal characteristic includes:
for the same change characteristic, carrying out preset times of marking on the change characteristic of the three-dimensional target to be imaged to obtain each characteristic marking result;
and carrying out mean value calculation on the characteristic labeling results to obtain the labeling control signal characteristics.
In a specific embodiment of the present invention, the method further includes a training process of the target imaging model, where the training process of the target imaging model includes:
collecting two-dimensional RGB sample data of the target to be imaged;
down sampling the two-dimensional RGB sample data by using a pre-constructed original imaging model, controlling the down-sampled two-dimensional RGB sample data to up-sample by using the target control signal characteristic, and outputting imaging information;
and performing countermeasure training on the original imaging model by using the imaging information to obtain the target imaging model.
In an embodiment of the present invention, after acquiring two-dimensional RGB sample data of the target to be imaged, the method further includes:
and carrying out filtering operation on the two-dimensional RGB sample data.
In one embodiment of the present invention, the method further comprises:
when the target control signal features are multi-dimensional features, constructing a multiple mapping model according to the target control signal features;
and performing imaging control operation on the target to be imaged according to the two-dimensional RGB data and the target control signal characteristics by using a target imaging model, wherein the imaging control operation comprises the following steps:
and utilizing the target imaging model to combine the two-dimensional RGB data, the target control signal characteristics and the multiple mapping model to carry out imaging control operation on the target to be imaged.
In a specific embodiment of the present invention, after determining a target control signal feature corresponding to the two-dimensional RGB data by using a target control signal model, before performing an imaging control operation on the target to be imaged according to the two-dimensional RGB data and the target control signal feature by using a target imaging model, the method further includes:
acquiring a control signal time interval corresponding to the target control signal characteristic;
uniformly sampling control signal characteristics in the control signal time interval to obtain sampling control signal characteristics;
performing frame interpolation operation on the target control signal characteristic by using the sampling control signal characteristic to obtain a control signal characteristic after frame interpolation;
and performing imaging control operation on the target to be imaged according to the two-dimensional RGB data and the target control signal characteristics by using a target imaging model, wherein the imaging control operation comprises the following steps:
and carrying out imaging control operation on the target to be imaged according to the two-dimensional RGB data and the control signal characteristics after frame interpolation by using the target imaging model.
An imaging control apparatus comprising:
the data acquisition module is used for acquiring two-dimensional RGB data of a target to be imaged;
the characteristic determining module is used for determining target control signal characteristics corresponding to the two-dimensional RGB data by using a target control signal model; the target control signal characteristic is the characteristic of deformation imaging control or space change imaging control on the target to be imaged;
and the imaging control module is used for performing imaging control operation on the target to be imaged according to the two-dimensional RGB data and the target control signal characteristics by using a target imaging model to obtain a target image with corresponding dimensionality.
An imaging control apparatus comprising:
a memory for storing a computer program;
a processor for implementing the steps of the imaging control method as described above when executing the computer program.
A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the imaging control method as set forth above.
The imaging control method provided by the invention comprises the steps of collecting two-dimensional RGB data of a target to be imaged; determining target control signal characteristics corresponding to the two-dimensional RGB data by using a target control signal model; the target control signal characteristic is the characteristic of deformation imaging control or space change imaging control on a target to be imaged; and performing imaging control operation on the target to be imaged according to the two-dimensional RGB data and the target control signal characteristics by using the target imaging model to obtain a target image with corresponding dimensionality.
According to the technical scheme, the target control signal model is trained in advance, and when the target to be imaged needs to be imaged, the target control signal model is used for determining the target control signal characteristics corresponding to the two-dimensional RGB data of the target to be imaged. And pre-training a target imaging model for imaging control by using the target control signal characteristics, and after extracting the target control signal characteristics, performing imaging control operation on the target to be imaged by using the target imaging model to obtain a target image with corresponding dimensionality. Therefore, two-dimensional data can be effectively utilized, the imaging effect control of corresponding dimensionality can be realized without constructing a three-dimensional model of an imaging target, the imaging control convenience is improved, the research and development threshold is reduced, and the cost is saved.
Correspondingly, the invention also provides an imaging control device, equipment and a computer readable storage medium corresponding to the imaging control method, which have the technical effects and are not described herein again.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flowchart illustrating an embodiment of an imaging control method according to the present invention;
FIG. 2 is a flowchart of another embodiment of an imaging control method according to the present invention;
FIG. 3 is a schematic diagram of an imaging model training network architecture according to an embodiment of the present invention;
FIG. 4 is a block diagram of an imaging control apparatus according to an embodiment of the present invention;
FIG. 5 is a block diagram of a configuration of an imaging control apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an imaging control apparatus according to this embodiment.
Detailed Description
In order that those skilled in the art will better understand the disclosure, the invention will be described in further detail with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of an implementation of an imaging control method according to an embodiment of the present invention, where the method may include the following steps:
s101: and collecting two-dimensional RGB data of the target to be imaged.
When imaging control needs to be performed on the target to be imaged, two-dimensional RGB data of the target to be imaged are collected, so that the characteristics of the target to be imaged are expressed through the two-dimensional RGB data. For example, if the target to be imaged is a human face, the two-dimensional RGB data can be RGB data samples of the face at 3 different angles, namely {front face, left profile, right profile}, where the left and right profiles correspond to an angle of 30° between the face and the camera, and the facial features are fully captured at all three angles.
The target to be imaged can be any target object which needs to be imaged, such as a human face.
S102: and determining target control signal characteristics corresponding to the two-dimensional RGB data by using the target control signal model.
The target control signal characteristic is the characteristic of deformation imaging control or space change imaging control on the target to be imaged.
A target control signal model for determining the target control signal characteristics corresponding to the two-dimensional RGB data is trained in advance. After the two-dimensional RGB data of the target to be imaged are acquired, the target control signal characteristics corresponding to the two-dimensional RGB data are determined using the target control signal model. The target control signal characteristics may include deformation and spatial-variation information: for a human body, for example, they may include the posture, imaging angle, illumination and the like; taking a human face as an example, they may include the facial expression and the face orientation.
The depth model architecture for constructing the target control signal model can be adaptively selected according to the actual application scene, for example, when the target to be imaged is a human face, the ResNet-18 model can be selected as a standard convolution network main model for training to obtain the target control signal model.
S103: and performing imaging control operation on the target to be imaged according to the two-dimensional RGB data and the target control signal characteristics by using the target imaging model to obtain a target image with corresponding dimensionality.
And pre-training a target imaging model for imaging control of the target to be imaged by adding target control signal characteristics. After the target control signal characteristics are determined, inputting the two-dimensional RGB data and the target control signal characteristics into a target imaging model, and performing imaging control operation on a target to be imaged by using the target imaging model according to the two-dimensional RGB data and the target control signal characteristics to obtain a target image with corresponding dimensionality. Still taking the target to be imaged as the face as an example, the imaging effect of the face can be controlled through the face expression and the face orientation, and the expression information of the target face at different angles is obtained.
The depth model architecture of the target imaging model can be adaptively selected according to the actual application scene, for example, when the target to be imaged is a human face, a Bruce-Young model, an interactive activation competition model and the like can be selected to train the human face imaging model, so that the target imaging model is obtained.
According to the technical scheme, the target control signal model is trained in advance, and when the target to be imaged needs to be imaged, the target control signal model is used for determining the target control signal characteristics corresponding to the two-dimensional RGB data of the target to be imaged. And pre-training a target imaging model for imaging control by using the target control signal characteristics, and after extracting the target control signal characteristics, performing imaging control operation on the target to be imaged by using the target imaging model to obtain a target image with corresponding dimensionality. Therefore, two-dimensional data can be effectively utilized, the imaging effect control of corresponding dimensionality can be realized without constructing a three-dimensional model of an imaging target, the imaging control convenience is improved, the research and development threshold is reduced, and the cost is saved.
It should be noted that, based on the above embodiments, the embodiments of the present invention also provide corresponding improvements. In the following embodiments, steps that are the same as or correspond to those in the above embodiments may be referred to one another, and corresponding advantageous effects may also be referred to one another, which is not described in detail in the following modified embodiments.
Referring to fig. 2, fig. 2 is a flowchart of another implementation of the imaging control method according to the embodiment of the present invention, where the method may include the following steps:
s201: and marking the change characteristics of the three-dimensional target to be imaged to obtain the characteristics of the marking control signal.
The standard for the target control signal characteristics is defined in advance from the deformation and spatial-change characteristics of the target to be imaged; it can be a continuously varying multi-dimensional characteristic, such as a pose rotation angle taking continuous values in (-180, 180), or a discretely varying category, such as a person's sex (male/female) as a discrete value.
When the target control signal model is trained, the change characteristics of the three-dimensional target to be imaged are labeled to obtain the labeled control signal characteristics. For example, the relevant deformation characteristics and spatial-change characteristics of the three-dimensional target to be imaged can be labeled manually, the change characteristics being recorded as C = {c_1, c_2, …, c_n}, the continuously varying labeled data being recorded as continuous values c_i ∈ R, and the discretely varying labeled data being recorded as discrete categories c_i ∈ {1, …, K}, where C denotes the set of change characteristics, c_i denotes the i-th change characteristic, and n denotes the total number of change characteristics.
In one embodiment of the present invention, step S201 may include the following steps:
the method comprises the following steps: for the same change characteristic, carrying out preset times of marking on the change characteristic of the three-dimensional target to be imaged to obtain each characteristic marking result;
step two: and carrying out mean value calculation on the characteristic labeling results to obtain the labeling control signal characteristics.
For convenience of description, the above two steps may be combined for illustration.
When labeling the change characteristics, the same change characteristic of the three-dimensional target to be imaged can be labeled a preset number of times to obtain the individual feature labeling results, and the mean value of these labeling results is calculated to obtain the labeled control signal characteristic; labeling by multiple annotators and taking the mean improves the accuracy of the representation.
A standard labeling dimension for the sampled data is thereby established. This dimension accurately describes the data labeling of a labeled sample, and can be a multi-dimensional continuous label or a single-dimensional discrete label. The labeling is carried out with a single-sample, multi-annotator scheme to obtain accuracy, or can be obtained directly through other existing methods or related projects in the field of the imaging target.
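As an illustrative, non-limiting sketch (not part of the patent text), the multi-annotator mean-value step described above might be implemented in Python as follows; the number of annotators and the 51-dimensional feature size are assumptions used only for the example.

import numpy as np

# Hypothetical example: 5 annotators each label the same change characteristic
# of one sample with a 51-dimensional continuous vector (e.g. blendshape weights).
annotations = np.random.rand(5, 51)                   # shape: (num_annotators, feature_dim)

# The labeled control signal characteristic is the per-dimension mean over all
# annotators, which smooths out individual labeling noise.
labeled_control_feature = annotations.mean(axis=0)    # shape: (51,)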
S202: and determining the morphological information of the characteristic of the labeling control signal.
After the labeled control signal characteristics are obtained, the form (modality) information of the labeled control signal characteristics is determined. The form information may include images, audio, text, and the like.
S203: and selecting a depth model architecture according to the morphological information, and constructing an original control signal model according to the selected depth model architecture.
After the form information of the labeled control signal characteristics is determined, a suitable depth model architecture is selected according to the form information, and an original control signal model is constructed according to the selected architecture. Adaptively selecting the depth model architecture of the original control signal model according to the form information of the labeled control signal characteristics speeds up model training and improves the validity of the model output.
S204: and performing iterative training on the original control signal model by using the characteristic of the labeled control signal to obtain a target control signal model.
After the original control signal model is constructed and obtained, iterative training is carried out on the original control signal model by using the characteristic of the labeled control signal, so that the original control signal model is optimized, and the target control signal model is obtained.
If the labeled control signal characteristic is a continuous characteristic, the optimization objective is a regression objective defined over the control signal model function F and the labeled control signal characteristic c (for example, a mean-squared-error objective between F(x) and c).
If the labeled signal is a discrete characteristic, the optimization objective is a classification objective defined over the control signal model function F and the labeled control signal characteristic c (for example, a cross-entropy objective between F(x) and c).
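As an illustrative, non-limiting sketch, the two kinds of optimization objective can be expressed in PyTorch roughly as follows; the stand-in model, tensor shapes and class count are assumptions, not the patent's actual architecture.

import torch
import torch.nn as nn

model = nn.Linear(128, 51)                  # stand-in for the control signal model F
x = torch.randn(8, 128)                     # a batch of input features
continuous_label = torch.randn(8, 51)       # continuously varying labeled features
discrete_label = torch.randint(0, 2, (8,))  # discretely varying labeled categories (e.g. sex)

# Continuous labeled characteristic -> regression objective (e.g. mean squared error).
loss_continuous = nn.MSELoss()(model(x), continuous_label)

# Discrete labeled characteristic -> classification objective (e.g. cross entropy),
# here with a separate two-class head as a stand-in.
classifier_head = nn.Linear(128, 2)
loss_discrete = nn.CrossEntropyLoss()(classifier_head(x), discrete_label)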
S205: collecting two-dimensional RGB sample data of a target to be imaged.
S206: and carrying out filtering operation on the two-dimensional RGB sample data.
After the two-dimensional RGB sample data of the target to be imaged are acquired, a filtering operation is performed on them. Randomly sampled two-dimensional RGB sample data contain items that are partially missing or severely damaged; filtering the sample data removes these items, which guarantees the validity of the data and improves the accuracy of imaging model training.
S207: and performing down-sampling on the two-dimensional RGB sample data by using the pre-constructed original imaging model, controlling the down-sampled two-dimensional RGB sample data to perform up-sampling by using the characteristics of the target control signal, and outputting imaging information.
After the filtering operation is performed on the two-dimensional RGB sample data, the two-dimensional RGB sample data are down-sampled by the pre-constructed original imaging model, the down-sampled data are up-sampled under the control of the target control signal characteristics, and the imaging information is output. Down-sampling the two-dimensional RGB sample data performs encoding: the dimensionality of the data is reduced and the main change characteristics are discarded. The encoded signal is then up-sampled to restore the signal, where the deformation characteristics or spatial transformation characteristics of the restored signal are controlled by additionally inputting the target control signal characteristics. The target control signal characteristics are injected into a Normalization structure, and the intermediate-layer features are scaled and shifted by the control signal parameters; the basic form is

    Normalize(x) = (W_s · c + b_s) · (x − μ(x)) / sqrt(σ²(x) + ε) + (W_t · c + b_t),

where Normalize(·) denotes the normalization function applied to the intermediate-layer feature x, c denotes the (up-sampled) control signal input, W_s and b_s denote the parameter weight and offset of the scaling transform, μ(x) denotes the mean of the input, σ²(x) denotes the variance of the input, W_t and b_t denote the parameter weight and offset of the shifting transform, and ε denotes a preset constant that keeps the denominator greater than 0.
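A minimal sketch of such a control-signal-conditioned normalization (the AddIN operator referred to later in the text) is given below for illustration only; the channel dimensions, the use of per-sample instance statistics and the linear scale/shift heads are assumptions rather than the patent's exact design.

import torch
import torch.nn as nn

class AddIN(nn.Module):
    # Scales and shifts a normalized intermediate-layer feature with parameters
    # derived from the control signal feature (an AdaIN-style operator).
    def __init__(self, feat_channels: int, ctrl_dim: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps                                    # keeps the denominator > 0
        self.scale = nn.Linear(ctrl_dim, feat_channels)   # scaling transform (weight + offset)
        self.shift = nn.Linear(ctrl_dim, feat_channels)   # shifting transform (weight + offset)

    def forward(self, x: torch.Tensor, ctrl: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) intermediate-layer feature; ctrl: (N, ctrl_dim) control signal
        mu = x.mean(dim=(2, 3), keepdim=True)             # mean of the input
        var = x.var(dim=(2, 3), keepdim=True)             # variance of the input
        x_norm = (x - mu) / torch.sqrt(var + self.eps)
        gamma = self.scale(ctrl).unsqueeze(-1).unsqueeze(-1)
        beta = self.shift(ctrl).unsqueeze(-1).unsqueeze(-1)
        return gamma * x_norm + beta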
S208: and performing countermeasure training on the original imaging model by using the imaging information to obtain a target imaging model.
Referring to fig. 3, fig. 3 is a schematic diagram of an imaging model training network architecture according to an embodiment of the present invention. After the imaging information is output, the imaging information is utilized to carry out countermeasure training on the original imaging model, and a target imaging model is obtained. The basic form of the objective function for optimizing the original imaging model by countertraining is:
    L_GAN(G, D) = E_y[ log D(y) ] + E_x[ log(1 − D(G(x))) ]

    L_perc(ŷ, y) = Σ_j (1 / (C_j · H_j · W_j)) · ‖ φ_j(ŷ) − φ_j(y) ‖

where L_GAN denotes the adversarial network loss function, D denotes the discriminant model function, G denotes the generative model function, L_perc denotes the perceptual loss (Perceptual loss) function, C_j denotes the number of channels of the j-th layer, H_j denotes the height of the j-th layer, W_j denotes the width of the j-th layer, ŷ denotes the input image of the j-th layer (the output of the target generation network), y denotes the real image, φ denotes the base VGG19 model architecture trained on the ILSVRC2012 dataset, and j indexes the intermediate output layers.
By retaining the scaling and shifting parameters in Normalize(x), training yields imaging forms under random control input, and the control signal characteristics s are obtained from the trained target control signal model. A mapping model M can be constructed that takes the constructed control signal characteristics (or labeling results sampled directly from the labeled data) as its input and produces the multi-layer input parameters of the generative model G. With the models G and D fixed, the mapping model M is trained; the optimization objective is defined over the mapping model loss function L_M, the number of control signal characteristics n, and the labeled real signal data s*, for example of the form

    L_M = (1/n) · Σ_i ‖ ŝ_i − s*_i ‖_1,

where ŝ_i denotes the control signal estimated by the control signal model from the generated image.
In the adversarial learning process, imaged pictures with consistent content are obtained by randomly sampling the input parameters of the AddIN operator. The training loss function of the generative model defines the content consistency of the generated target image through the perceptual loss, and describes the realism of the imaging target through the adversarial learning loss function. The control signal is up-sampled and input into the AddIN operator to control the imaging target, and the control effect on the imaging target is fed back through the control signal model. In this process, the generative model and the discriminant model of the adversarial learning may or may not participate in the learning. The training loss function of the control signal mapping model is generally defined by the L1 loss.
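For illustration, one possible training step of the control signal mapping model with G, D and the control signal model frozen might look as follows in PyTorch; the callable interfaces of the generator, mapper and control-signal model are assumptions made for this sketch.

import torch.nn as nn

l1_loss = nn.L1Loss()

def mapping_train_step(mapper, generator, control_model, src_image, ctrl_signal, optimizer):
    # mapper: control-signal mapping model M (trainable)
    # generator / control_model: frozen G and control-signal model (requires_grad=False)
    mapped = mapper(ctrl_signal)               # per-layer inputs for the AddIN operators
    generated = generator(src_image, mapped)   # controlled imaging output
    estimated = control_model(generated)       # control signal read back from the image
    loss = l1_loss(estimated, ctrl_signal)     # L1 mapping-model loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()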
S209: and collecting two-dimensional RGB data of the target to be imaged.
And sampling the two-dimensional RGB data of the target to be imaged in a random sampling mode. The sampled data may be derived from a variety of sources, including but not limited to capturing over a network, collecting via hardware devices, and the like.
S210: and determining target control signal characteristics corresponding to the two-dimensional RGB data by using the target control signal model.
The target control signal characteristic is the characteristic of deformation imaging control or space change imaging control on the target to be imaged.
S211: and acquiring a control signal time interval corresponding to the target control signal characteristic.
The model imaging principle is that the imaging presentation of the target is generated by abstracting the display form of the input object and passing it through the generative model; during time-sequential display, image distortion may therefore occur when the abstract semantics of the control signal characteristics change abruptly. For the object initialization form information input to the model, the control signal characteristics are extracted, and the time-sequential imaging is optimized by optimizing, over time-sequence intervals, the control signal characteristics output by the control signal model.
And acquiring a control signal time interval corresponding to the target control signal characteristic. In order to realize stable model output imaging, the model output frame is stabilized through interval time periods, so that strict frame rate control is carried out on data imaging. Taking each time interval as 1s as an example, if the desired model performs imaging output at a frame rate of 120fps, 120 control pictures are generated per second to achieve the desired effect.
S212: and uniformly sampling the control signal characteristics in the control signal time interval to obtain the sampling control signal characteristics.
And after the control signal time interval corresponding to the target control signal characteristic is obtained, uniformly sampling the control signal characteristic in the control signal time interval to obtain a sampling control signal characteristic.
Control signal interval division: take a time interval [t, t+T] as an example, where t is the current time node; time-sequential frame interpolation is performed on the control signal data within this interval. The control signal sampling frequency parameter sample_control_count is recorded as sc, and the sample set output by the control signal model is recorded as S = {s_1, s_2, …, s_sc}, where T is the time interval of the control model output. The control data signal is sampled by uniform sampling, i.e. s_k is taken at time t + k·Δt, where Δt = T/sc denotes the uniform sampling time interval and s_k denotes the sampled control signal characteristic.
S213: and performing linear frame interpolation operation on the target control signal characteristic by using the sampling control signal characteristic to obtain the control signal characteristic after frame interpolation.
After the sampling control signal characteristics are obtained, linear frame interpolation operation is carried out on the target control signal characteristics by using the sampling control signal characteristics, and control signal characteristics after frame interpolation are obtained.
The sampled control signal characteristics are used as the control signal data of the key frames of the model deformation timeline, and frame interpolation is performed on the data by linear interpolation. The control signal is interpolated according to the output frame rate; for an interpolation time τ the interpolated characteristic is

    s(τ) = s_a + (s_b − s_a) · (τ − t_a) / (t_b − t_a),

where s_a and s_b (at times t_a and t_b) denote the sampled control signal characteristics closest to the current interpolation time. The set of control signal characteristics output after frame interpolation is recorded as S′. The average single-frame image generation time of the generative model is recorded as t_g milliseconds; to ensure that the imaging output frame rate reaches the target frame rate f, a corresponding number of generative-model instances must be constructed (roughly the target frame rate multiplied by the single-frame generation time), the exact figure depending on the actual hardware device. Within each time interval, the control signal output of the model is combined with S′ to generate the control signal characteristics after frame interpolation.
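A minimal sketch of the uniform sampling plus linear frame interpolation of the control signal characteristics is given below for illustration; the key-frame count, feature dimensionality and 1-second interval are assumptions.

import numpy as np

def interpolate_control_signals(key_times, key_feats, target_fps, duration):
    # key_times: (sc,) uniformly sampled key-frame times within the interval
    # key_feats: (sc, feat_dim) sampled control signal characteristics
    key_times = np.asarray(key_times, dtype=float)
    key_feats = np.asarray(key_feats, dtype=float)
    frame_times = np.arange(0.0, duration, 1.0 / target_fps)
    # Linearly interpolate every feature dimension between the nearest key frames.
    interpolated = np.stack(
        [np.interp(frame_times, key_times, key_feats[:, d]) for d in range(key_feats.shape[1])],
        axis=1,
    )
    return frame_times, interpolated          # (n_frames,), (n_frames, feat_dim)

# Usage: 120 fps output over a 1 s interval from 8 uniformly sampled key frames.
times, feats = interpolate_control_signals(np.linspace(0.0, 1.0, 8),
                                            np.random.rand(8, 51),
                                            target_fps=120, duration=1.0)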
S214: and performing imaging control operation on the target to be imaged by using the target imaging model according to the two-dimensional RGB data and the control signal characteristics after frame interpolation to obtain a target image with corresponding dimensionality.
And after the characteristics of the control signal after the frame interpolation are obtained, inputting the two-dimensional RGB data and the characteristics of the control signal after the frame interpolation into the target imaging model.
The target imaging model then performs the imaging control operation on the target to be imaged to obtain a target image of the corresponding dimensionality. The controlled imaging data corresponding to the frame-interpolated time interval sequence are recorded as I′. By interpolating the target control signal characteristics per time-interval unit, the controlled imaging effect is optimized while computing resources are used efficiently: the target control signal characteristics within the control signal time interval are uniformly sampled and frame interpolation is applied to the sampled signals, which ensures that the control signal changes smoothly. The frame-interpolated control signal characteristics, which change smoothly within each unit time interval, are fed into the separately deployed generative-model instances, decoupling coherent imaging from the single-frame computation efficiency of the generative model. The generated images are ordered and displayed under this control to obtain a stable image-change effect.
In one embodiment of the present invention, the method may further comprise the steps of:
when the target control signal characteristics are multi-dimensional characteristics, constructing a multi-mapping model according to the target control signal characteristics;
accordingly, step S214 may include the steps of:
and carrying out imaging control operation on the target to be imaged by utilizing the target imaging model in combination with the two-dimensional RGB data, the target control signal characteristics and the multiple mapping model.
And when the target control signal characteristics are multi-dimensional characteristics, constructing a multiple mapping model according to the target control signal characteristics, and performing attribute control of different dimensions on the imaging effect through multiple model control. And performing imaging control operation on the target to be imaged by using the target imaging model in combination with the two-dimensional RGB data, the target control signal characteristics and the multiple mapping model to obtain a target image with corresponding dimensionality.
Multi-dimensional control models C_1, …, C_k are built, control signal sequences of the different dimensions s^(1), …, s^(k) are input, and multiple mapping models M_1, …, M_k are built and trained. Once this model training is completed, the complete imaging effect is produced through the model set {G, D, C_1…k, M_1…k}: a standard imaging effect is output from the imaging source picture, the control source signal is then added to obtain the encoded target control signal characteristics, and these target control signals are taken as input to complete the controlled output for the target imaging source picture.
Control of the generated imaging target picture by multiple control signals is realized by superposing the multiple control signals; in this process a given model can be trained independently, or a more precise per-dimension control effect can be achieved by weighting the mapping loss functions. Control of the imaging target is realized through the encoding and decoding of the generative model together with the control signal mapping models. The control target data are input to the encoding model while the control signals are input to the control signal mapping models, and image generation of the imaging effect is completed in the inference pass of the generative model and the control signal mapping models, thereby achieving controlled imaging.
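As a non-limiting sketch, superposing several per-dimension mapping models might look as follows; summation as the composition rule and the layer widths are assumptions, not the patent's prescribed design.

import torch.nn as nn

class MultiControlMapper(nn.Module):
    # One small mapping model per control-signal dimension group (e.g. facial
    # expression, face orientation); their mapped signals are superposed before
    # being injected into the generator's AddIN layers.
    def __init__(self, ctrl_dims, out_dim):
        super().__init__()
        self.mappers = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim))
             for d in ctrl_dims]
        )

    def forward(self, ctrl_signals):
        # ctrl_signals: list of tensors, one per control dimension group
        return sum(mapper(signal) for mapper, signal in zip(self.mappers, ctrl_signals))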
No high-dimensional modeling or deformation modeling of the imaging target is required, which removes the manual modeling step, greatly improves the efficiency of three-dimensional or two-dimensional target imaging, and lowers the technical threshold of controlled imaging in engineering imaging and virtual imaging applications. In terms of imaging effect, realizing target imaging by constructing a standard data model and a generative model comes closer to the real imaging effect, blends more realistically with the scene and background, and generates the target imaging picture quickly and effectively.
In a specific example application of the present invention, to exemplify imaging of facial expressions, the following steps are included from model construction to imaging completion:
(1) and sampling the user to acquire the picture data of the user.
The step comprises the steps of collecting face data through a multi-angle camera and constructing an expression control model of a face.
This embodiment samples RGB data of a face from 3 different angles. The 3 angles are {front face, left profile, right profile} (left and right profile mean that the angle between the face and the camera is 30°, and the facial features are fully captured at all three angles). The representation follows the facial blendshape standard and has 51 dimensions. 51-dimensional data annotation is performed on the sampled data by manual labeling, and each group of 3 facial expression pictures is recorded as one annotation result s ∈ R^51.
Multiple rounds of manual labeling are performed, and the annotation record is obtained by the mean-value method.
The manually annotated dataset {(x_i, s_i)} is used to train the expression control signal model. The ResNet-18 model can be selected as the standard convolutional network backbone, with a 51-dimensional output control signal. The model is trained with the loss function for continuous labeled data, which yields the control signal model.
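For illustration only, training this 51-dimensional expression control signal model might be sketched in PyTorch as follows; the data loader, optimizer settings and learning rate are assumptions.

import torch
import torchvision

control_model = torchvision.models.resnet18(num_classes=51)   # 51-dim blendshape output
optimizer = torch.optim.Adam(control_model.parameters(), lr=1e-4)
criterion = torch.nn.MSELoss()   # continuous-label loss

def train_epoch(loader):
    control_model.train()
    for images, blendshape_labels in loader:    # (N, 3, H, W) images, (N, 51) labels
        optimizer.zero_grad()
        loss = criterion(control_model(images), blendshape_labels)
        loss.backward()
        optimizer.step()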
(2) And constructing a generation model of facial expression change by sampling data.
The face data are generated and learned by constructing an encode-decode model. In the construction of the generative model, the decoding (decode) model is realized by improving the data path of the style-gan2 model, the encoding (encode) model is the matching sampling model of that network, and a discriminant model D is used in training. Specifically: the standard target-imaging face data are encoded by the encoding model, the encoded data z are input at the top layer of the decoding model, and a random control mapping signal is input at the AddIN layers to generate the imaging picture Î, where z is the encoded signal of the encoding model.
Model training of the generative model is carried out on the sampled picture data to obtain a face generative model G with random input. In this training stage, signal parameters drawn from a standard random distribution are generated directly, without the encoding model, and random face imaging is produced through the decoding model. An adversarial learning loss function L_GAN(G, D) is constructed, and the training of the models G and D is completed.
The encoding model is then introduced and the random standard-distribution signal parameters are removed; the sampled face data are input to the encoding model as training data, and imaging data are generated through the encode-decode process. A perceptual loss function L_perc is constructed and the model G is trained so that the imaging data remain consistent with the training input.
The different loss functions need to be weighted to control the training effect. The adversarial losses for G and D have a training weight of 1. The perceptual training loss is built on VGG19: the output feature of each down-sampling stage is taken as a perceptual feature for model training, with a loss weight of 10 per layer and 5 perceptual layers in total. The parameters of the model are initialized with the standard MSRA initialization algorithm.
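As an illustrative sketch, a VGG19-based perceptual loss with five feature layers, each weighted by 10, could be written as follows; the exact layer cut points (here the outputs preceding each max-pooling stage) and the choice of the L1 distance are assumptions.

import torch.nn as nn
import torchvision

class PerceptualLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # VGG19 trained on ImageNet/ILSVRC2012 (newer torchvision prefers the weights= argument).
        vgg = torchvision.models.vgg19(pretrained=True).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        # Five perceptual feature slices, one per down-sampling stage.
        self.slices = nn.ModuleList([vgg[:4], vgg[4:9], vgg[9:18], vgg[18:27], vgg[27:36]])
        self.weights = [10.0] * 5

    def forward(self, generated, real):
        loss, g, r = 0.0, generated, real
        for feature_slice, w in zip(self.slices, self.weights):
            g, r = feature_slice(g), feature_slice(r)
            loss = loss + w * nn.functional.l1_loss(g, r)
        return loss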
(3) The signal control model is connected, and the generative model is driven by the control signal to produce facial expression imaging. The specific steps are as follows:
The face imaging effect is controlled by constructing control signal models for two dimensions, namely {facial expression, face orientation}.
In accordance with the generative model G obtained above, the sampled model data are input to the encoding model, and the mapping model M attached to the decoding model is trained using the control signals sampled in advance as training data.
A mapping model M is constructed by using a multi-layer perceptron as the mapping model architecture; the mapping models for the facial expression signal and the face orientation signal have 4 and 7 linear transformation layers respectively, and the ReLU function is used as the activation function of the mapping model.
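A minimal, non-limiting sketch of such multi-layer-perceptron mapping models follows; the hidden width and the input/output dimensionalities are assumptions for the example.

import torch.nn as nn

def make_mapping_model(in_dim, hidden_dim, out_dim, n_linear):
    # n_linear linear transformation layers with ReLU activations in between.
    layers, d = [], in_dim
    for _ in range(n_linear - 1):
        layers += [nn.Linear(d, hidden_dim), nn.ReLU()]
        d = hidden_dim
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

# 4 linear layers for the facial-expression signal, 7 for the face-orientation signal;
# the 51-dim blendshape input, 3-dim orientation input and 512-dim mapped output are
# illustrative assumptions.
m_expression = make_mapping_model(51, 512, 512, n_linear=4)
m_orientation = make_mapping_model(3, 512, 512, n_linear=7)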
During training, the sampled control signals s are used as training data and input to the mapping model M, and the output mapping signal is input to the AddIN layers of the decoding model to transform the intermediate-layer features.
The sampled RGB picture data are used as the input of the encoding model of the G model, and the sampled control signal data are used as the input of the mapping model M; the obtained imaging picture is fed to the control signal model C to obtain the estimated control signal ŝ, and a loss function ‖ŝ − s‖ is constructed to train the model M. In this embodiment the losses are weighted, and a weight setting of 10 works best.
The present invention also provides an imaging control apparatus corresponding to the above method embodiment, and the imaging control apparatus described below and the imaging control method described above may be referred to in correspondence with each other.
Referring to fig. 4, fig. 4 is a block diagram of an imaging control apparatus according to an embodiment of the present invention, where the apparatus may include:
the data acquisition module 41 is used for acquiring two-dimensional RGB data of a target to be imaged;
a feature determination module 42, configured to extract a target control signal feature from the two-dimensional RGB data by using a target control signal model; the target control signal characteristic is the characteristic of deformation imaging control or space change imaging control on a target to be imaged;
and the imaging control module 43 is configured to perform imaging control operation on the target to be imaged according to the two-dimensional RGB data and the target control signal characteristics by using the target imaging model, so as to obtain a target image with a corresponding dimension.
According to the technical scheme, the target control signal model is trained in advance, and when the target to be imaged needs to be imaged, the target control signal model is used for determining the target control signal characteristics corresponding to the two-dimensional RGB data of the target to be imaged. And pre-training a target imaging model for imaging control by using the target control signal characteristics, and after extracting the target control signal characteristics, performing imaging control operation on the target to be imaged by using the target imaging model to obtain a target image with corresponding dimensionality. Therefore, two-dimensional data can be effectively utilized, the imaging effect control of corresponding dimensionality can be realized without constructing a three-dimensional model of an imaging target, the imaging control convenience is improved, the research and development threshold is reduced, and the cost is saved.
In one embodiment of the present invention, the apparatus includes a control signal model training module, and the control signal model training module includes:
the characteristic marking submodule is used for marking the change characteristics of the three-dimensional target to be imaged to obtain marking control signal characteristics;
the form information determining submodule is used for determining form information of the characteristic of the labeling control signal;
the original model construction submodule is used for selecting a depth model architecture according to the morphological information and constructing an original control signal model according to the selected depth model architecture;
and the control signal model training submodule is used for carrying out iterative training on the original control signal model by using the characteristic of the labeled control signal to obtain a target control signal model.
In one embodiment of the present invention, the feature labeling submodule includes:
the characteristic marking unit is used for marking the change characteristics of the three-dimensional target to be imaged for a preset number of times according to the same change characteristic to obtain each characteristic marking result;
and the mean value calculating unit is used for carrying out mean value calculation on the characteristic labeling results to obtain the labeling control signal characteristics.
In one embodiment of the present invention, the apparatus includes an imaging model training module, the imaging model training module including:
the data acquisition submodule is used for acquiring two-dimensional RGB sample data of a target to be imaged;
the imaging submodule is used for performing down-sampling on the two-dimensional RGB sample data by utilizing a pre-constructed original imaging model, controlling the down-sampled two-dimensional RGB sample data to perform up-sampling through the characteristics of a target control signal, and outputting imaging information;
and the imaging model training submodule is used for carrying out countermeasure training on the original imaging model by utilizing the imaging information to obtain a target imaging model.
In an embodiment of the present invention, the imaging model training module may further include:
and the data filtering submodule is used for filtering the two-dimensional RGB sample data after the two-dimensional RGB sample data of the target to be imaged is acquired.
In one embodiment of the present invention, the apparatus may further include:
the mapping model building module is used for building a multiple mapping model according to the characteristics of each target control signal when the characteristics of the target control signals are multidimensional characteristics;
the imaging control module is a module for performing imaging control operation on the target to be imaged by combining the target imaging model with the two-dimensional RGB data, the target control signal characteristics and the multiple mapping model.
In one embodiment of the present invention, the apparatus may further include:
the time interval acquisition module is used for acquiring a control signal time interval corresponding to the target control signal characteristic before the two-dimensional RGB data and the target control signal characteristic are input into the target imaging model after the target control signal characteristic corresponding to the two-dimensional RGB data is determined by using the target control signal model;
the uniform sampling module is used for uniformly sampling the control signal characteristics in the control signal time interval to obtain the sampling control signal characteristics;
the frame interpolation module is used for performing frame interpolation operation on the target control signal characteristics by using the sampling control signal characteristics to obtain control signal characteristics after frame interpolation;
the imaging control module 43 is specifically a module for performing imaging control operation on the target to be imaged according to the two-dimensional RGB data and the control signal characteristics after frame interpolation by using the target imaging model.
In correspondence with the above method embodiment, referring to fig. 5, fig. 5 is a schematic diagram of an imaging control apparatus provided by the present invention, which may include:
a memory 332 for storing a computer program;
a processor 322 for implementing the steps of the imaging control method of the above-described method embodiments when executing the computer program.
Specifically, referring to fig. 6, fig. 6 is a schematic diagram of a specific structure of an imaging control apparatus provided in this embodiment, which may generate relatively large differences due to different configurations or performances, and may include a processor (CPU) 322 (e.g., one or more processors) and a memory 332, where the memory 332 stores one or more computer applications 342 or data 344. Memory 332 may be, among other things, transient or persistent storage. The program stored in memory 332 may include one or more modules (not shown), each of which may include a sequence of instructions operating on a data processing device. Still further, the processor 322 may be configured to communicate with the memory 332 to execute a series of instruction operations in the memory 332 on the imaging control device 301.
The imaging control apparatus 301 may also include one or more power sources 326, one or more wired or wireless network interfaces 350, one or more input-output interfaces 358, and/or one or more operating systems 341.
The steps in the imaging control method described above may be implemented by the structure of the imaging control apparatus.
Corresponding to the above method embodiment, the present invention further provides a computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the steps of:
collecting two-dimensional RGB data of a target to be imaged; determining target control signal characteristics corresponding to the two-dimensional RGB data by using a target control signal model; the target control signal characteristic is the characteristic of deformation imaging control or space change imaging control on a target to be imaged; and performing imaging control operation on the target to be imaged according to the two-dimensional RGB data and the target control signal characteristics by using the target imaging model to obtain a target image with corresponding dimensionality.
The computer-readable storage medium may include: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
For the introduction of the computer-readable storage medium provided by the present invention, please refer to the above method embodiments, which are not described herein again.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device, the apparatus and the computer-readable storage medium disclosed in the embodiments correspond to the method disclosed in the embodiments, so that the description is simple, and the relevant points can be referred to the description of the method.
The principle and the implementation of the present invention are explained in the present application by using specific examples, and the above description of the embodiments is only used to help understanding the technical solution and the core idea of the present invention. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.

Claims (8)

1. An imaging control method, characterized by comprising:
collecting two-dimensional RGB data of a target to be imaged;
determining target control signal characteristics corresponding to the two-dimensional RGB data by using a target control signal model; the target control signal characteristic is the characteristic of deformation imaging control or space change imaging control on the target to be imaged;
performing imaging control operation on the target to be imaged according to the two-dimensional RGB data and the target control signal characteristics by using a target imaging model to obtain a target image with corresponding dimensionality;
the training process of the target control signal model comprises: labeling a change characteristic of a three-dimensional target to be imaged to obtain a labeled control signal characteristic; determining form information of the labeled control signal characteristic; selecting a depth model architecture according to the form information, and constructing an original control signal model according to the selected depth model architecture; and performing iterative training on the original control signal model by using the labeled control signal characteristic to obtain the target control signal model;
the training process of the target imaging model comprises: collecting two-dimensional RGB sample data of the target to be imaged; down-sampling the two-dimensional RGB sample data by using a pre-constructed original imaging model, up-sampling the down-sampled two-dimensional RGB sample data under the control of the target control signal characteristic, and outputting imaging information; and performing adversarial training on the original imaging model by using the imaging information to obtain the target imaging model.
2. The imaging control method according to claim 1, wherein labeling the change characteristic of the three-dimensional target to be imaged to obtain the labeled control signal characteristic comprises:
for a same change characteristic, labeling the change characteristic of the three-dimensional target to be imaged a preset number of times to obtain respective characteristic labeling results;
and performing mean value calculation on the characteristic labeling results to obtain the labeled control signal characteristic.
3. The imaging control method according to claim 1, further comprising, after acquiring two-dimensional RGB sample data of the target to be imaged:
performing a filtering operation on the two-dimensional RGB sample data.
4. The imaging control method according to any one of claims 1 to 3, characterized by further comprising:
when the target control signal characteristics are multi-dimensional characteristics, constructing a multiple mapping model according to the target control signal characteristics;
wherein performing the imaging control operation on the target to be imaged according to the two-dimensional RGB data and the target control signal characteristics by using the target imaging model comprises:
performing the imaging control operation on the target to be imaged by using the target imaging model in combination with the two-dimensional RGB data, the target control signal characteristics and the multiple mapping model.
5. The imaging control method according to claim 1, wherein after determining the target control signal characteristics corresponding to the two-dimensional RGB data by using a target control signal model, before performing the imaging control operation on the target to be imaged according to the two-dimensional RGB data and the target control signal characteristics by using a target imaging model, the method further comprises:
acquiring a control signal time interval corresponding to the target control signal characteristic;
uniformly sampling control signal characteristics in the control signal time interval to obtain sampling control signal characteristics;
performing a frame interpolation operation on the target control signal characteristics by using the sampling control signal characteristics to obtain control signal characteristics after frame interpolation;
wherein performing the imaging control operation on the target to be imaged according to the two-dimensional RGB data and the target control signal characteristics by using the target imaging model comprises:
performing the imaging control operation on the target to be imaged by using the target imaging model according to the two-dimensional RGB data and the control signal characteristics after frame interpolation.
6. An imaging control apparatus, characterized by comprising:
the data acquisition module is used for acquiring two-dimensional RGB data of a target to be imaged;
the characteristic determining module is used for determining target control signal characteristics corresponding to the two-dimensional RGB data by using a target control signal model; the target control signal characteristic is the characteristic of deformation imaging control or space change imaging control on the target to be imaged;
the imaging control module is used for carrying out imaging control operation on the target to be imaged according to the two-dimensional RGB data and the target control signal characteristics by using a target imaging model to obtain a target image with corresponding dimensionality;
the training process of the target control signal model comprises: labeling a change characteristic of a three-dimensional target to be imaged to obtain a labeled control signal characteristic; determining form information of the labeled control signal characteristic; selecting a depth model architecture according to the form information, and constructing an original control signal model according to the selected depth model architecture; and performing iterative training on the original control signal model by using the labeled control signal characteristic to obtain the target control signal model;
the training process of the target imaging model comprises: collecting two-dimensional RGB sample data of the target to be imaged; down-sampling the two-dimensional RGB sample data by using a pre-constructed original imaging model, up-sampling the down-sampled two-dimensional RGB sample data under the control of the target control signal characteristic, and outputting imaging information; and performing adversarial training on the original imaging model by using the imaging information to obtain the target imaging model.
7. An imaging control apparatus, characterized by comprising:
a memory for storing a computer program;
a processor for implementing the steps of the imaging control method according to any one of claims 1 to 5 when executing the computer program.
8. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the imaging control method according to any one of claims 1 to 5.
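Purely as an illustration of the training recited in claims 1 and 6 (down-sampling the two-dimensional RGB sample data, up-sampling it under the control of the target control signal characteristic, and adversarial training), the following non-limiting sketch assumes a small convolutional encoder-decoder generator conditioned on the control signal characteristic and a simple patch discriminator; every layer size, loss choice and name here is an assumption rather than the patented architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Down-samples the RGB sample, injects the control signal characteristic
    at the bottleneck, then up-samples to the imaging output (assumed layout)."""
    def __init__(self, feat_dim=16):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(128 + feat_dim, 128, kernel_size=1)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, rgb, feat):
        h = self.down(rgb)
        # Broadcast the control signal characteristic over the spatial grid.
        f = feat[:, :, None, None].expand(-1, -1, h.shape[2], h.shape[3])
        return self.up(self.fuse(torch.cat([h, f], dim=1)))

class Discriminator(nn.Module):
    """Simple patch discriminator used for the adversarial training."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1))

    def forward(self, img):
        return self.net(img)

def adversarial_step(gen, disc, opt_g, opt_d, rgb, feat, real):
    """One adversarial training step on a batch of RGB samples and features."""
    # Discriminator update: distinguish real imaging samples from generated ones.
    fake = gen(rgb, feat).detach()
    real_logits, fake_logits = disc(real), disc(fake)
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: produce imaging information that fools the discriminator.
    fake_logits = disc(gen(rgb, feat))
    g_loss = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```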
CN202110894408.8A 2021-08-05 2021-08-05 Imaging control method, device and equipment and computer readable storage medium Active CN113344778B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110894408.8A CN113344778B (en) 2021-08-05 2021-08-05 Imaging control method, device and equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113344778A CN113344778A (en) 2021-09-03
CN113344778B true CN113344778B (en) 2021-11-05

Family

ID=77480776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110894408.8A Active CN113344778B (en) 2021-08-05 2021-08-05 Imaging control method, device and equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113344778B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109142382A (en) * 2018-10-12 2019-01-04 成都精工华耀科技有限公司 A kind of track visualization inspection two dimension and three-dimensional fusion imaging system
CN112585943A (en) * 2018-08-31 2021-03-30 索尼公司 Imaging apparatus, imaging system, imaging method, and imaging program

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8279221B2 (en) * 2005-08-05 2012-10-02 Samsung Display Co., Ltd. 3D graphics processor and autostereoscopic display device using the same
JP2013115668A (en) * 2011-11-29 2013-06-10 Sony Corp Image processing apparatus, image processing method, and program
CN109521030A (en) * 2018-10-12 2019-03-26 成都精工华耀科技有限公司 A kind of track visualization inspection RGBD imaging system
CN109919876B (en) * 2019-03-11 2020-09-01 四川川大智胜软件股份有限公司 Three-dimensional real face modeling method and three-dimensional real face photographing system
EP3767543A1 (en) * 2019-07-17 2021-01-20 Robert Bosch GmbH Device and method for operating a neural network
CN112991526A (en) * 2021-05-18 2021-06-18 创新奇智(北京)科技有限公司 Method and device for marking three-dimensional posture of image, electronic equipment and medium

Also Published As

Publication number Publication date
CN113344778A (en) 2021-09-03

Similar Documents

Publication Publication Date Title
Zeng et al. Aggregated contextual transformations for high-resolution image inpainting
CN111325851B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN111598998B (en) Three-dimensional virtual model reconstruction method, three-dimensional virtual model reconstruction device, computer equipment and storage medium
CN111047548B (en) Attitude transformation data processing method and device, computer equipment and storage medium
CN108961369B (en) Method and device for generating 3D animation
US20220028031A1 (en) Image processing method and apparatus, device, and storage medium
US20230072627A1 (en) Gaze correction method and apparatus for face image, device, computer-readable storage medium, and computer program product face image
CN110929622A (en) Video classification method, model training method, device, equipment and storage medium
KR20210074360A (en) Image processing method, device and apparatus, and storage medium
Liu et al. Optimization-based key frame extraction for motion capture animation
CN113159056A (en) Image segmentation method, device, equipment and storage medium
CN112258625B (en) Method and system for reconstructing single image to three-dimensional point cloud model based on attention mechanism
KR20230027274A (en) Distillation of Semantic Relationship Preservation Knowledge for Image-to-Image Transformation
JP2022172173A (en) Image editing model training method and device, image editing method and device, electronic apparatus, storage medium and computer program
CN114913303A (en) Virtual image generation method and related device, electronic equipment and storage medium
WO2023020358A1 (en) Facial image processing method and apparatus, method and apparatus for training facial image processing model, and device, storage medium and program product
Shen et al. Clipgen: A deep generative model for clipart vectorization and synthesis
Geng et al. Towards photo-realistic facial expression manipulation
CN113177526B (en) Image processing method, device, equipment and storage medium based on face recognition
CN114283152A (en) Image processing method, image processing model training method, image processing device, image processing equipment and image processing medium
Yin et al. Virtual reconstruction method of regional 3D image based on visual transmission effect
CN113344778B (en) Imaging control method, device and equipment and computer readable storage medium
CN117252984A (en) Three-dimensional model generation method, device, apparatus, storage medium, and program product
CN116993948A (en) Face three-dimensional reconstruction method, system and intelligent terminal
CN111598904B (en) Image segmentation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant