CN107066095B - Information processing method and electronic equipment - Google Patents


Info

Publication number
CN107066095B
CN107066095B (application CN201710208681.4A)
Authority
CN
China
Prior art keywords
target object
information
characteristic region
virtual model
deformation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710208681.4A
Other languages
Chinese (zh)
Other versions
CN107066095A (en)
Inventor
卢彦斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201710208681.4A priority Critical patent/CN107066095B/en
Publication of CN107066095A publication Critical patent/CN107066095A/en
Application granted granted Critical
Publication of CN107066095B publication Critical patent/CN107066095B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an information processing method and an electronic device. The method comprises: acquiring at least one piece of image information of a target object, where the image information consists of at least one pixel point and includes at least the depth information corresponding to each pixel point; determining a virtual model corresponding to the target object based on the at least one piece of image information, where the target object comprises at least one characteristic region, the virtual model comprises at least one virtual characteristic region, and each characteristic region of the target object corresponds to a virtual characteristic region of the virtual model; and acquiring deformation information of a characteristic region of the target object and adjusting and displaying the virtual model based on that deformation information, where the deformation information includes at least the change in the depth information corresponding to the characteristic region of the target object.

Description

Information processing method and electronic equipment
Technical Field
The present invention relates to image processing technologies in the field of communications, and in particular, to an information processing method and an electronic device.
Background
In existing usage scenarios of electronic devices, motion-capture systems for virtual avatars mainly acquire the motion information of each body part by tracking sensors attached to the body, and this motion information is then used to drive the motion of a virtual object. Such motion-capture systems are typically very expensive, require long preparation before use, and place high demands on the surrounding environment. As a result, a corresponding three-dimensional model cannot be established quickly for a user, and the application of such a model to a wider range of scenarios cannot be guaranteed.
Disclosure of Invention
The present invention is directed to an information processing method and an electronic device, which are used to solve the above problems in the prior art.
In order to achieve the above object, the present invention provides an information processing method applied to an electronic device, including:
acquiring at least one image information for a target object; the image information consists of at least one pixel point, and the image information at least comprises depth information corresponding to each pixel point;
determining a virtual model corresponding to a target object based on at least one image information for the target object; the target object comprises at least one characteristic region, the virtual model comprises at least one virtual characteristic region, and the characteristic region in the target object corresponds to the virtual characteristic region in the virtual model;
acquiring deformation information of a characteristic region in the target object, and adjusting and displaying the virtual model based on the deformation information of the characteristic region; wherein the deformation information at least includes change information of depth information corresponding to the characteristic region of the target object.
An embodiment of the present invention further provides an electronic device, including:
an acquisition unit configured to acquire at least one image information for a target object; the image information consists of at least one pixel point, and the image information at least comprises depth information corresponding to each pixel point;
the model establishing unit is used for determining a virtual model corresponding to a target object based on at least one image information aiming at the target object; the target object comprises at least one characteristic region, the virtual model comprises at least one virtual characteristic region, and the characteristic region in the target object corresponds to the virtual characteristic region in the virtual model;
the adjusting unit is used for acquiring deformation information of a characteristic region in the target object, and adjusting and displaying the virtual model based on the deformation information of the characteristic region; wherein the deformation information at least includes change information of depth information corresponding to the characteristic region of the target object.
The invention provides an information processing method and an electronic device in which a virtual model of a target object is established based on an image containing depth information, and the virtual model is then adjusted according to deformation information that itself carries depth information. Because obtaining the virtual model does not require capturing the target object's motion through body-mounted sensors, deformation adjustment of the virtual model is more convenient, and the model can feasibly be applied to more scenarios.
Drawings
FIG. 1 is a flow chart of an information processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of scenario 1 according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of scenario 2 according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of scenario 3 according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of scenario 4 according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of scenario 5 according to an embodiment of the present invention;
fig. 7 is a schematic view of a composition structure of an electronic device according to an embodiment of the invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples.
The first embodiment,
An embodiment of the present invention provides an information processing method, applied to an electronic device, as shown in fig. 1, including:
step 101: acquiring at least one image information for a target object; the image information consists of at least one pixel point, and the image information at least comprises depth information corresponding to each pixel point;
step 102: determining a virtual model corresponding to a target object based on at least one image information for the target object; the target object comprises at least one characteristic region, the virtual model comprises at least one virtual characteristic region, and the characteristic region in the target object corresponds to the virtual characteristic region in the virtual model;
step 103: acquiring deformation information of a characteristic region in the target object, and adjusting and displaying the virtual model based on the deformation information of the characteristic region; wherein the deformation information at least includes change information of depth information corresponding to the characteristic region of the target object.
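As a rough illustration only, steps 101 to 103 can be sketched as follows. Every function name and the representation of depth images as nested lists of per-pixel depth values are assumptions made for this sketch, not part of the disclosure:

```python
# Illustrative sketch of steps 101-103; all names are hypothetical
# and the "model" is reduced to a grid of depth values.

def build_virtual_model(depth_image):
    # Step 102 (simplified): seed the virtual model from the first frame.
    return [row[:] for row in depth_image]

def depth_change(reference, current):
    # Step 103a: per-pixel change in depth information between two frames.
    return [[c - r for r, c in zip(rrow, crow)]
            for rrow, crow in zip(reference, current)]

def adjust(model, deformation):
    # Step 103b: apply the depth change to the virtual model.
    return [[m + d for m, d in zip(mrow, drow)]
            for mrow, drow in zip(model, deformation)]

def process(depth_frames):
    # Step 101 is assumed done: depth_frames holds captured depth images.
    model = build_virtual_model(depth_frames[0])
    prev = depth_frames[0]
    for frame in depth_frames[1:]:
        model = adjust(model, depth_change(prev, frame))
        prev = frame
    return model
```

The real method operates per characteristic region rather than on whole frames; this sketch only shows the frame-to-model data flow.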
Here, the electronic device may be a device having an image processing function, and the specific form of the electronic device is not limited in this embodiment, for example, the electronic device may be a computer (PC or notebook computer) or the like.
The at least one piece of image information obtained may be image information carrying depth information. Specifically, at least one piece of pre-stored image information of the target object may be read directly from memory, or image information of the target object from one or more angles may be obtained through at least one image-acquisition unit capable of capturing depth information.
Determining the virtual model corresponding to the target object based on the at least one piece of image information may consist of building the virtual model of the target object from that image information.
It is noted that the virtual model may be a three-dimensional image of the same type as the target object. For example, as shown in fig. 2, if the target object is a person, the corresponding virtual model is the three-dimensional model 22 built from the person 21 shown in fig. 2.
Further, the characteristic regions of the target object may be determined according to the actual situation. Referring again to fig. 2, assuming the target object is a person 21, the characteristic regions may be a few basic ones: the eyes, the mouth, and the nose. If a virtual model with a better simulation effect is desired, more characteristic regions can be added, for example the ears, the hairline, and the eyebrows.
It should be understood that the foregoing takes the target object to be a human, but other types of target object are possible, for example a small animal, in which case the characteristic regions may include the legs, the tail, and so on in addition to the eyes, mouth, and nose. In other words, the characteristic regions of a target object may be related to the type of the target object, and this correspondence may be preset.
The virtual model corresponding to the target object may be determined from the at least one piece of image information by directly using the characteristic regions in the image information of the target object to build the corresponding virtual model. When the virtual model is built, the depth information in the image information needs to be incorporated, so that the resulting virtual model is one that reflects depth information.
Further, referring to fig. 2, the image information of the target object may be associated with the virtual model by determining the outline of the virtual model and the relative positions of its virtual characteristic regions from the outline of the target object and the relative positions of its characteristic regions in the image information, so that each characteristic region of the target object corresponds to a virtual characteristic region of the virtual model. As shown in fig. 2, characteristic regions 1, 2, and 3 of the target object correspond to virtual characteristic regions 11, 12, and 13 of the virtual model, and the relative positions of regions 1, 2, and 3 in the image information are the same as the relative positions of virtual regions 11, 12, and 13.
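A small sketch of relative-position matching in this spirit follows. The region centres, image sizes, and the nearest-normalized-centre pairing rule are all illustrative assumptions, not the patent's algorithm:

```python
# Pair each characteristic region of the target with the virtual
# characteristic region whose normalized centre is closest.

def normalize(points, width, height):
    # Express centres as fractions of the image/model extent so that
    # differently sized target and model can be compared.
    return [(x / width, y / height) for x, y in points]

def match_regions(target_centres, model_centres, tw, th, mw, mh):
    t = normalize(target_centres, tw, th)
    m = normalize(model_centres, mw, mh)
    mapping = {}
    for i, tc in enumerate(t):
        j = min(range(len(m)),
                key=lambda k: (m[k][0] - tc[0])**2 + (m[k][1] - tc[1])**2)
        mapping[i] = j   # target region i <-> virtual region j
    return mapping
```

With matching relative layouts the mapping is the identity, mirroring the regions 1, 2, 3 to 11, 12, 13 correspondence described above.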
Further, the acquiring deformation information of the feature region in the target object includes:
within a first preset time length, obtaining the change information of the depth information corresponding to each pixel point in at least one characteristic region of the target object, and forming the deformation information of the characteristic region in the target object at least based on the change information containing the depth information corresponding to each pixel point.
The deformation information of the characteristic region of the target object may be acquired over a preset time period. The deformation information can represent position changes of the pixel points in each characteristic region of the target object as well as changes in depth information. For example, as shown in fig. 3, when the target object closes its eyes, the eye characteristic region 31 shown in the figure can be considered to produce deformation information, which can be characterized by the change in the eye's contour coordinates; further, the change in the depth information of the eye characteristic region 31 may be represented by the change in its depth values. The first preset duration may be set according to the actual situation, for example to 10 seconds; if the virtual model is to be matched and adjusted more accurately and without perceptible delay, a smaller value such as 1 second or 30 ms may be used, and further examples are not enumerated here.
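The per-pixel depth-change collection described above can be sketched as follows; the dictionary representation of a region's pixels and the rule for pixels not re-observed are assumptions of the sketch:

```python
# Deformation information for one characteristic region over a sampling
# window: the change in depth for each pixel of the region.

def region_deformation(start_pixels, end_pixels):
    """Each argument maps (x, y) -> depth for the region's pixels."""
    deform = {}
    for xy, d0 in start_pixels.items():
        d1 = end_pixels.get(xy, d0)  # assume unchanged if not re-observed
        deform[xy] = d1 - d0
    return deform
```

In the patent's terms, the returned dictionary is the "change information of the depth information corresponding to each pixel point" that constitutes the region's deformation information.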
The adjusting and displaying the virtual model based on the deformation information of the characteristic region comprises:
calculating to obtain corresponding adjustment information of the depth information of each pixel point in the corresponding characteristic region in the virtual model at least based on the change information of the depth information corresponding to each pixel point in the characteristic region;
and adjusting and displaying the virtual model based on the calculated adjustment information aiming at the depth information of each pixel point in the characteristic region in the virtual model.
The deformation in the virtual model corresponding to a change in the target object (for example, a change of the mouth) is determined from the proportional relationship between the target object and the virtual model. For example, when the target object and the virtual model correspond at a ratio of 1:1, the deformation information of the target object may be applied to the virtual model unchanged; the virtual model is then adjusted directly on that basis and displayed. When the correspondence is not 1:1, that is, when the outline of the virtual model and of each virtual characteristic region is larger or smaller than the target object as it appears in the image information, the corresponding deformation in the virtual model may be determined from the proportional correspondence between the two.
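Under the simple proportional-correspondence assumption just described, the scaling step can be sketched in one function; the scalar size ratio is an illustrative simplification:

```python
# Scale measured deformation by the model-to-target size ratio when the
# correspondence is not 1:1 (assumes a single uniform scale factor).

def scale_deformation(deform, target_size, model_size):
    ratio = model_size / target_size
    return {xy: dz * ratio for xy, dz in deform.items()}
```

With a 1:1 ratio this returns the deformation unchanged, matching the first case above.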
Further, the deformation of the three-dimensional model may be tracked in real time; that is, the deformation of the three-dimensional model over time is tracked using a correlation technique. One possible technique is non-rigid deformation, which estimates local affine transformations from A to B and establishes the optimal deformation relationship from A to B through an optimization method. This deformation relationship can be regarded as a function transformation f: A -> B, and the goal is to bring the transformed A as close as possible to B; stated as an optimization problem, ||f(A) - B||^2 is minimized. Usually, an appropriate number of points is selected on A by uniform sampling, a neighborhood of a certain size is defined around each point, and the points within one neighborhood are transformed to B by the same affine transformation. Points at different positions map to different affine transformations, thereby approximating a general non-rigid transformation. The basic guiding idea is that points close together deform similarly, while points far apart may deform very differently. The more sampling points, the finer the deformation that can be represented, and the harder the optimization. The optimization is divided into four parts: the first part, the general correspondence between A and B; the second part, constraints on that correspondence; the third part, local smoothness constraints; and the fourth part, local rigidity constraints.
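As a loose, didactic reduction of this idea (not the patented optimization), the sketch below replaces each local affine transformation with a local translation and mimics the local-smoothness constraint by averaging neighbouring transforms after the fact. The point lists and the assumption that the A-to-B point correspondence is already known are simplifications of the sketch:

```python
# Toy non-rigid deformation: one translation per sample point,
# smoothed across neighbours. A real solver optimizes the four-term
# objective (correspondence, correspondence constraints, smoothness,
# rigidity) jointly.

def local_translations(a_points, b_points):
    # a_points[i] is assumed to correspond to b_points[i].
    return [(bx - ax, by - ay)
            for (ax, ay), (bx, by) in zip(a_points, b_points)]

def smooth(translations, passes=1):
    # Average each point's transform with its neighbours so that nearby
    # points deform similarly (the local-smoothness idea).
    t = list(translations)
    for _ in range(passes):
        out = []
        for i, (dx, dy) in enumerate(t):
            nbrs = [t[j] for j in (i - 1, i + 1) if 0 <= j < len(t)]
            sx = sum(n[0] for n in nbrs) + dx
            sy = sum(n[1] for n in nbrs) + dy
            out.append((sx / (len(nbrs) + 1), sy / (len(nbrs) + 1)))
        t = out
    return t
```

More sampling points give finer deformation but, as the text notes, a harder optimization; here that trade-off shows up only as more entries to smooth.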
Further, when extracting the deformation information of the target object, part of a characteristic region of the target object may be occluded. Either of the following two methods can be used in this case:
The first method:
When deformation information of the characteristic region in the target object is obtained, the method further includes:
judging whether the target object has occlusion aiming at the first characteristic region;
if the occlusion exists, calculating deformation information aiming at the first characteristic region based on deformation information corresponding to other characteristic regions except the first characteristic region of the target object.
This method is applicable when the occluded area is small. In that scenario, the deformation information of the surrounding characteristic regions can be averaged to obtain the deformation information of the occluded region. Alternatively, the deformation information of one of the surrounding characteristic regions may be used directly as the deformation information of the occluded region.
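The averaging strategy can be sketched directly; the region names and the neighbour table are invented for illustration:

```python
# First occlusion strategy: estimate the occluded region's deformation
# as the mean of its neighbouring regions' deformations.

def infer_occluded(deforms, occluded, neighbours):
    """deforms: {region: scalar deformation}; neighbours: {region: [regions]}."""
    vals = [deforms[r] for r in neighbours[occluded] if r in deforms]
    return sum(vals) / len(vals)
```

Using a single neighbour in the table reproduces the alternative mentioned above, where one surrounding region's deformation is used directly.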
The second method:
When deformation information of the characteristic region in the target object is obtained, the method further includes:
judging whether the target object has occlusion aiming at the first characteristic region;
if the occlusion exists, detecting whether an image acquisition mode exists for the first characteristic region;
when an image acquisition mode exists for the first characteristic region, acquiring deformation information of the first characteristic region within a first preset time length through the image acquisition mode.
This method suits the scenario shown in fig. 4, in which a certain region of the target object is occluded, for example by AR glasses, so that real-time deformation information may not be obtainable for that part.
In particular, the eyes change very frequently, so image-acquisition units capable of capturing depth information can be arranged inside the AR glasses, facing the user;
then, the images acquired by these units and the corresponding deformation information (including depth information) are combined with the deformation information of the other characteristic regions to obtain the deformation information of the whole target object, which is used to adjust the virtual model.
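The merge of in-glasses and external capture described above can be sketched as a dictionary merge; the region keys are illustrative:

```python
# Second occlusion strategy: deformation for the occluded (in-glasses)
# regions comes from the dedicated capture units; everything else comes
# from the external capture.

def merge_deformation(external, in_glasses, occluded_regions):
    merged = dict(external)
    for region in occluded_regions:
        if region in in_glasses:
            merged[region] = in_glasses[region]
    return merged
```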
In general, the method can be applied in two ways. In one, a three-dimensional model is generated for the target object locally, for example to produce entertaining imagery.
In the other, a three-dimensional model is established from the image information and sent to the peer device with which the electronic device is communicating, so that the peer device can obtain a continuously updated three-dimensional model of the target object from the model and the adjustment information corresponding to its deformation, improving the sense of real-time interaction between users.
Therefore, with this scheme, a virtual model of the target object can be established based on images containing depth information, and the virtual model can then be adjusted according to deformation information that includes depth information. Because obtaining the virtual model does not require capturing the target object's motion through body-mounted sensors, deformation adjustment of the virtual model is more convenient, and the model can feasibly be applied to more scenarios.
The second embodiment,
An embodiment of the present invention provides an information processing method, applied to an electronic device, as shown in fig. 1, including:
step 101: acquiring at least one image information for a target object; the image information consists of at least one pixel point, and the image information at least comprises depth information corresponding to each pixel point;
step 102: determining a virtual model corresponding to a target object based on at least one image information for the target object; the target object comprises at least one characteristic region, the virtual model comprises at least one virtual characteristic region, and the characteristic region in the target object corresponds to the virtual characteristic region in the virtual model;
step 103: acquiring deformation information of a characteristic region in the target object, and adjusting and displaying the virtual model based on the deformation information of the characteristic region; wherein the deformation information at least includes change information of depth information corresponding to the characteristic region of the target object.
Here, the electronic device may be a device having an image processing function, and the specific form of the electronic device is not limited in this embodiment, for example, the electronic device may be a computer (PC or notebook computer) or the like.
The at least one piece of image information obtained may be image information carrying depth information. Specifically, at least one piece of pre-stored image information of the target object may be read directly from memory, or image information of the target object from one or more angles may be obtained through at least one image-acquisition unit capable of capturing depth information.
The difference from the first embodiment is that, in this embodiment, the determining the virtual model corresponding to the target object includes:
establishing an intermediate model corresponding to the target object, wherein the intermediate model and the target object have the same key features, and the proportion between the key features is consistent with that of the target object;
and matching at least one characteristic region contained in the intermediate model with at least one characteristic region contained in the virtual model to obtain a mapping relation between them, so that the target object is matched with the virtual model.
It should be noted that the virtual model may be a three-dimensional image of the same type as the target object. For example, if the target object is a person, the corresponding intermediate model may be a relatively compact three-dimensional model of that person; the intermediate model is then mapped onto each characteristic region of the final virtual model, which in effect matches the target object to each characteristic region of the final virtual model.
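The two-stage correspondence just described (target to intermediate model, intermediate model to virtual model) composes into a single target-to-virtual mapping. The dictionaries below are illustrative assumptions:

```python
# Compose the target->intermediate and intermediate->virtual region
# mappings into a direct target->virtual correspondence.

def compose_mapping(target_to_intermediate, intermediate_to_virtual):
    return {t: intermediate_to_virtual[i]
            for t, i in target_to_intermediate.items()}
```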
Further, the characteristic regions of the target object may be determined according to the actual situation. Referring again to fig. 2, assuming the target object is a person 21, the characteristic regions may be a few basic ones: the eyes, the mouth, and the nose. If a virtual model 22 with a better simulation effect is desired, more characteristic regions can be added, for example the ears, the hairline, and the eyebrows.
It should be understood that the foregoing takes the target object to be a human, but other types of target object are possible, for example a small animal, in which case the characteristic regions may include the legs, the tail, and so on in addition to the eyes, mouth, and nose. In other words, the characteristic regions of a target object may be related to the type of the target object, and this correspondence may be preset.
The virtual model corresponding to the target object may be determined from the at least one piece of image information by directly using the characteristic regions in the image information of the target object to build the corresponding virtual model. When the virtual model is built, the depth information in the image information needs to be incorporated, so that the resulting virtual model is one that reflects depth information.
Further, referring to fig. 2, the image information of the target object may be associated with the virtual model by determining the outline of the virtual model and the relative positions of its virtual characteristic regions from the outline of the target object and the relative positions of its characteristic regions in the image information, so that the characteristic regions of the target object and the virtual characteristic regions of the virtual model correspond to each other.
Further, the acquiring deformation information of the feature region in the target object includes:
within a first preset time length, obtaining the change information of the depth information corresponding to each pixel point in at least one characteristic region of the target object, and forming the deformation information of the characteristic region in the target object at least based on the change information containing the depth information corresponding to each pixel point.
The deformation information of the characteristic region of the target object may be acquired over a preset time period. The deformation information can represent position changes of the pixel points in each characteristic region of the target object as well as changes in depth information. The first preset duration may be set according to the actual situation, for example to 10 seconds; if the virtual model is to be matched and adjusted more accurately and without perceptible delay, a smaller value such as 1 second or 30 ms may be used, and further examples are not enumerated here.
The adjusting and displaying the virtual model based on the deformation information of the characteristic region comprises:
calculating to obtain corresponding adjustment information of the depth information of each pixel point in the corresponding characteristic region in the virtual model at least based on the change information of the depth information corresponding to each pixel point in the characteristic region;
and adjusting and displaying the virtual model based on the calculated adjustment information aiming at the depth information of each pixel point in the characteristic region in the virtual model.
The deformation in the virtual model corresponding to a change in the target object (for example, a change of the mouth) is determined from the proportional relationship between the target object and the virtual model. For example, when the target object and the virtual model correspond at a ratio of 1:1, the deformation information of the target object may be applied to the virtual model unchanged; the virtual model is then adjusted directly on that basis and displayed. When the correspondence is not 1:1, that is, when the outline of the virtual model and of each virtual characteristic region is larger or smaller than the target object as it appears in the image information, the corresponding deformation in the virtual model may be determined from the proportional correspondence between the two.
Further, the deformation of the three-dimensional model may be tracked in real time; that is, the deformation of the three-dimensional model over time is tracked using a correlation technique. One possible technique is non-rigid deformation, which estimates local affine transformations from A to B and establishes the optimal deformation relationship from A to B through an optimization method. This deformation relationship can be regarded as a function transformation f: A -> B, and the goal is to bring the transformed A as close as possible to B; stated as an optimization problem, ||f(A) - B||^2 is minimized. Usually, an appropriate number of points is selected on A by uniform sampling, a neighborhood of a certain size is defined around each point, and the points within one neighborhood are transformed to B by the same affine transformation. Points at different positions map to different affine transformations, thereby approximating a general non-rigid transformation. The basic guiding idea is that points close together deform similarly, while points far apart may deform very differently. The more sampling points, the finer the deformation that can be represented, and the harder the optimization. The optimization is divided into four parts: the first part, the general correspondence between A and B; the second part, constraints on that correspondence; the third part, local smoothness constraints; and the fourth part, local rigidity constraints.
Further, when extracting the deformation information of the target object, part of the feature regions of the target object may be blocked. The following two processing methods can be used for this case:
a first kind,
When obtaining the deformation information of the characteristic region in the target object, the method further includes:
judging whether the target object is occluded in a first characteristic region;
and if occlusion exists, calculating the deformation information for the first characteristic region based on the deformation information corresponding to the characteristic regions of the target object other than the first characteristic region.
This method can be applied when the occluded area is small; in that scene, the deformation information of the characteristic regions in the peripheral area can be averaged to obtain the deformation information of the occluded region. Of course, the deformation information of one of the peripheral characteristic regions may also be used directly as the deformation information of the occluded region.
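The averaging variant of this first occlusion-handling method may be sketched as follows; the region names and the dictionary layout are hypothetical:

```python
# Hypothetical sketch: estimate an occluded region's deformation as the mean
# of the deformation vectors of its (unoccluded) neighboring regions.

def fill_occluded(deform_by_region, occluded, neighbors):
    vecs = [deform_by_region[r] for r in neighbors]
    n = len(vecs)
    avg = tuple(sum(v[i] for v in vecs) / n for i in range(3))
    # Return a copy with the occluded region's deformation filled in.
    return {**deform_by_region, occluded: avg}

regions = {"left_cheek": (1.0, 0.0, 2.0), "right_cheek": (3.0, 0.0, 4.0)}
filled = fill_occluded(regions, "nose", ["left_cheek", "right_cheek"])
assert filled["nose"] == (2.0, 0.0, 3.0)
```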
A second kind,
When obtaining the deformation information of the characteristic region in the target object, the method further includes:
judging whether the target object is occluded in a first characteristic region;
if occlusion exists, detecting whether an image acquisition mode exists for the first characteristic region;
and when an image acquisition mode exists for the first characteristic region, acquiring the deformation information of the first characteristic region within a first preset duration through the image acquisition mode.
This method is suitable for the scene shown in fig. 3, in which a certain region of the target object is blocked, for example by AR glasses, so that real-time deformation information may not be obtainable for that part.
In particular, eye changes are very frequent, so image acquisition units capable of acquiring depth information may be arranged inside the AR glasses, facing the user;
the images acquired by these image acquisition units and the corresponding deformation information (including depth information) are then combined with the deformation information of the other characteristic regions to obtain the deformation information of the whole target object, so as to adjust the virtual model.
Generally, the method can be applied in two kinds of scenes: in one, a three-dimensional model is generated only for the target object, so as to realize interesting imaging;
in the other, a three-dimensional model is established from the image information and then sent to the opposite-end device with which the electronic device communicates, so that the opposite-end device can obtain a real-time updated three-dimensional model of the target object based on the three-dimensional model and the adjustment information corresponding to its deformation, improving the sense of real-time interaction between users.
First, in this embodiment, a multi-angle depth camera array is adopted to collect depth maps of the target object in real time and completely capture the surface information of the avatar; the composition of the camera array of this embodiment may be as shown in fig. 5.
The depth maps of the avatar collected within a certain time are fused into a complete three-dimensional model. This is because a single depth map is noisy, low-quality depth information affects subsequent accuracy, and multiple depth maps, because of the overlapping regions between them, do not form a closed surface and cannot correspond to the virtual object as a whole. Therefore, a high-quality complete three-dimensional model is formed by a depth-map fusion technique.
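As a much-simplified stand-in for the depth-map fusion technique mentioned above (real systems typically use volumetric TSDF fusion), several noisy depth maps of the same view can be combined by confidence-weighted averaging; the array layout and the zero-means-missing convention are assumptions:

```python
import numpy as np

def fuse_depth_maps(depth_maps, weights):
    # Accumulate weighted depth only where a map has a valid (non-zero) sample.
    depth = np.zeros_like(depth_maps[0])
    total_w = np.zeros_like(depth_maps[0])
    for d, w in zip(depth_maps, weights):
        valid = d > 0            # zero marks a missing depth sample
        depth[valid] += d[valid] * w
        total_w[valid] += w
    # Weighted average where at least one map contributed; zero elsewhere.
    return np.where(total_w > 0, depth / np.maximum(total_w, 1e-9), 0.0)

a = np.array([[1.0, 0.0], [2.0, 2.0]])
b = np.array([[3.0, 4.0], [0.0, 2.0]])
fused = fuse_depth_maps([a, b], [1.0, 1.0])
assert fused[0, 0] == 2.0   # both maps valid: averaged
assert fused[0, 1] == 4.0   # only map b valid: taken as-is
```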
And establishing a corresponding relation between the avatar model A and the virtual object. For example, a three-dimensional model of a person is associated with the corresponding parts of a virtual object, particularly key areas such as the eyes, nose, chin, and elbows. In this step, the key parts can be found in the three-dimensional model through a machine learning method.
And tracking the deformation of the three-dimensional model in real time, that is, tracking the deformation of the three-dimensional model over time using a correlation technique; one possible technique is non-rigid deformation. First, a three-dimensional model B of the current frame is generated using the multi-angle depth maps; second, a deformation relation from A to B is established by estimating local affine transformations at the key parts. Since the target is inevitably occluded during movement, B is generally an incomplete model of lower quality than A, and directly matching B against the virtual object would produce errors. The avatar three-dimensional model A is therefore deformed to the position of B using the deformation relation, which is equivalent to obtaining a completed and enhanced version of B.
And generating a deformation model of the virtual object. And according to the correspondence between the substitute three-dimensional model A and the virtual object, converting the deformation of the A into the deformation of the virtual object and driving the virtual object to move. As shown in fig. 6, the virtual model may be a preset three-dimensional avatar of a puppy, and each feature region of the person is matched with each feature region in the three-dimensional avatar of the puppy one by one to obtain a mapping relationship.
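The region-by-region transfer described above, such as the person-to-puppy example, may be sketched as follows; the region names, mapping dictionary, and scale parameter are all hypothetical:

```python
# Hypothetical sketch: drive the virtual object from the avatar model's
# deformation through a per-region correspondence mapping.

region_map = {"eyes": "puppy_eyes", "mouth": "puppy_mouth"}  # A-region -> virtual region

def drive_virtual(avatar_deform, region_map, scale=1.0):
    # Transfer each mapped region's displacement onto the virtual region,
    # scaled by the proportional relation between the two models.
    return {region_map[r]: tuple(c * scale for c in d)
            for r, d in avatar_deform.items() if r in region_map}

out = drive_virtual({"mouth": (0.0, -1.0, 0.5)}, region_map, scale=2.0)
assert out == {"puppy_mouth": (0.0, -2.0, 1.0)}
```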
And displaying in real time. And displaying the change of the virtual object on the display terminal in real time.
Therefore, by adopting this scheme, a virtual model for the target object can be established based on images containing depth information, and the virtual model is then adjusted according to deformation information that carries depth information. When obtaining the virtual model of the target object, collecting the target object's motion through sensors is thus avoided, which makes the deformation adjustment of the virtual model more convenient to process and improves the feasibility of applying the virtual model to more scenes.
Example III,
An embodiment of the present invention provides an electronic device, as shown in fig. 7, including:
an acquisition unit 71 for acquiring at least one image information for a target object; the image information consists of at least one pixel point, and the image information at least comprises depth information corresponding to each pixel point;
a model establishing unit 72, configured to determine, based on at least one image information for a target object, a virtual model corresponding to the target object; the target object comprises at least one characteristic region, the virtual model comprises at least one virtual characteristic region, and the characteristic region in the target object corresponds to the virtual characteristic region in the virtual model;
an adjusting unit 73, configured to obtain deformation information of a feature region in the target object, and adjust and display the virtual model based on the deformation information of the feature region; wherein the deformation information at least includes change information of depth information corresponding to the characteristic region of the target object.
Here, the electronic device may be a device having an image processing function, and the specific form of the electronic device is not limited in this embodiment, for example, the electronic device may be a computer (PC or notebook computer) or the like.
The obtaining of the at least one image information may be obtaining the image information with depth information. The specific obtaining mode can be that at least one piece of pre-stored image information aiming at the target object is directly obtained from the memory; it is also possible to obtain image information for one or more angles of the target object by means of at least one image acquisition unit which is capable of acquiring depth information.
A model establishing unit 72, configured to establish a virtual model corresponding to the target object based on the at least one image information of the target object.
It is noted that the virtual model may be a three-dimensional image of the same type as the target object, for example, as shown in fig. 2, the target object is a character, and then the corresponding virtual model is the three-dimensional model 22 based on the character 21 shown in fig. 2.
Further, the feature region of the target object may be a feature region determined according to actual conditions, and referring to fig. 2 as well, assuming that the target object is a person 21, the feature region may be determined as several basic feature regions of eyes, a mouth, and a nose; if a virtual model with better simulation effect is to be obtained, the feature areas can be increased, for example, more feature areas such as ears, hairlines, eyebrows and the like are increased.
It should be understood that the foregoing description has been made on the target object as a human being, but actually there may be other types of target objects, for example, a small animal, and then the corresponding characteristic region may include characteristic regions of legs, tails and the like in addition to eyes, mouth and nose. In other words, the feature region of the target object may be related to the type of the target object, and the corresponding relationship may be a preset corresponding relationship.
The manner of determining the virtual model corresponding to the target object based on the at least one piece of image information of the target object may be to directly use the characteristic regions in the image information of the target object to establish the corresponding virtual model; when the virtual model is established, the depth information in the image information needs to be incorporated, so that the established virtual model can reflect the depth information.
Further, referring to fig. 3, the image information of the target object may be associated with the virtual model by determining the outline of the virtual model and the relative position relationship between the feature regions based on the outline of the target object in the image information of the target object and the relative position relationship between the feature regions, so that the feature regions of the target object and the virtual feature regions in the virtual model are associated with each other. As shown in fig. 2, the feature regions 1, 2, 3 of the target object correspond to the virtual feature regions 11, 12, 13 in the virtual model, and the relative positional relationship between the feature regions 1, 2, 3 of the target object in the image information is the same as the relative positional relationship between the virtual feature regions 11, 12, 13.
Further, the acquiring deformation information of the feature region in the target object includes:
within a first preset time length, obtaining the change information of the depth information corresponding to each pixel point in at least one characteristic region of the target object, and forming the deformation information of the characteristic region in the target object at least based on the change information containing the depth information corresponding to each pixel point.
The deformation information of the characteristic region in the target object is obtained, and may be the deformation information of the characteristic region in the target object obtained in a preset time period. The deformation information can represent the position change information of the pixel points of each characteristic region of the target object and can also represent the change information of the depth information. For example, as shown in fig. 3, when the target object currently closes the eye, the eye feature region 31 shown in the figure can be considered to generate deformation information, wherein the deformation information can be characterized by the change of the contour coordinate value of the eye; further, the change of the depth information of the eye feature region 31 may be represented by a change value of the depth information. The first preset duration may be set according to an actual situation, for example, may be 10 seconds, and if the purpose of more accurately matching and adjusting the virtual model without delay is to be achieved, the first preset duration may be set to be smaller, for example, may have a length of 1 second, 30ms, and the like, which is not exhaustive here.
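The per-pixel depth change within the first preset duration described above may be sketched as the difference between a feature region's depth maps at the start and end of the window; the array shapes and the eye-region example values are illustrative only:

```python
import numpy as np

def depth_change(region_depth_t0, region_depth_t1):
    # Per-pixel change of depth information over the preset window.
    return region_depth_t1 - region_depth_t0

t0 = np.array([[10.0, 10.0], [12.0, 12.0]])   # eye-region depth at window start
t1 = np.array([[10.0, 11.0], [12.0, 12.5]])   # depth after the eye closes
delta = depth_change(t0, t1)
assert delta[0, 1] == 1.0 and delta[1, 1] == 0.5
```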
The adjusting unit is used for calculating and obtaining corresponding adjusting information of the depth information of each pixel point in the corresponding characteristic region in the virtual model at least based on the change information of the depth information corresponding to each pixel point in the characteristic region;
and adjusting and displaying the virtual model based on the calculated adjustment information aiming at the depth information of each pixel point in the characteristic region in the virtual model.
And determining deformation information corresponding to the mouth change in the virtual model based on the proportional relation between the target object and the virtual model. For example, when the target object corresponds to the virtual model in a ratio of 1:1, the deformation information of the target object may be taken directly as the deformation information of the virtual model, and the virtual model is then adjusted based on that deformation information and finally displayed. When the target object and the virtual model do not correspond in a ratio of 1:1, the outline of the virtual model and of each virtual feature region may be larger or smaller than the corresponding size of the target object in the image information, and the corresponding deformation in the virtual model is then determined based on the proportional correspondence between the two.
Further, the deformation of the three-dimensional model may be tracked in real time; that is, the deformation of the three-dimensional model over time is tracked using a correlation technique. One possible technique is non-rigid deformation. The non-rigid deformation is obtained by estimating local affine transformations from A to B and establishing the optimal deformation relation from A to B through an optimization method. This deformation relation can be regarded as a function transformation f: A->B, and the goal is to make the transformed A as close as possible to B; that is, the optimization problem is to minimize ||f(A) - B||^2. Usually, an appropriate number of points is selected on A by uniform sampling, a neighborhood of a certain size is defined around each point, and the points within a neighborhood are transformed to B by the same affine transformation. Points at different positions are mapped by different affine transformations, thereby approximating a general non-rigid transformation. The basic guiding idea is that the deformations of points close together should be relatively close, while the deformations of points far apart may be very different. The more sampling points, the finer the deformation and the greater the difficulty of the optimization. The optimization objective is divided into four parts: a first part: the overall correspondence between A and B; a second part: the correspondence constraint; a third part: a local smoothness constraint; and a fourth part: a local rigidity constraint.
Further, when extracting the deformation information of the target object, part of the feature regions of the target object may be blocked. The following two processing methods can be used for this case:
a first kind,
The adjusting unit is used for judging whether the target object is occluded in a first characteristic region;
and if occlusion exists, calculating the deformation information for the first characteristic region based on the deformation information corresponding to the characteristic regions of the target object other than the first characteristic region.
This method can be applied when the occluded area is small; in that scene, the deformation information of the characteristic regions in the peripheral area can be averaged to obtain the deformation information of the occluded region. Of course, the deformation information of one of the peripheral characteristic regions may also be used directly as the deformation information of the occluded region.
A second kind,
The adjusting unit is used for judging whether the target object is occluded in a first characteristic region;
if occlusion exists, detecting whether an image acquisition mode exists for the first characteristic region;
and when an image acquisition mode exists for the first characteristic region, acquiring the deformation information of the first characteristic region within a first preset duration through the image acquisition mode.
This method is suitable for the scene shown in fig. 4, in which a certain region of the target object is blocked, for example by AR glasses, so that real-time deformation information may not be obtainable for that part.
In particular, eye changes are very frequent, so image acquisition units capable of acquiring depth information may be arranged inside the AR glasses, facing the user;
the images acquired by these image acquisition units and the corresponding deformation information (including depth information) are then combined with the deformation information of the other characteristic regions to obtain the deformation information of the whole target object, so as to adjust the virtual model.
Generally, the method can be applied in two kinds of scenes: in one, a three-dimensional model is generated only for the target object, so as to realize interesting imaging;
in the other, a three-dimensional model is established from the image information and then sent to the opposite-end device with which the electronic device communicates, so that the opposite-end device can obtain a real-time updated three-dimensional model of the target object based on the three-dimensional model and the adjustment information corresponding to its deformation, improving the sense of real-time interaction between users.
Therefore, by adopting this scheme, a virtual model for the target object can be established based on images containing depth information, and the virtual model is then adjusted according to deformation information that carries depth information. When obtaining the virtual model of the target object, collecting the target object's motion through sensors is thus avoided, which makes the deformation adjustment of the virtual model more convenient to process and improves the feasibility of applying the virtual model to more scenes.
Example four,
An embodiment of the present invention provides an electronic device, as shown in fig. 7, including:
an acquisition unit 71 for acquiring at least one image information for a target object; the image information consists of at least one pixel point, and the image information at least comprises depth information corresponding to each pixel point;
a model establishing unit 72, configured to determine, based on at least one image information for a target object, a virtual model corresponding to the target object; the target object comprises at least one characteristic region, the virtual model comprises at least one virtual characteristic region, and the characteristic region in the target object corresponds to the virtual characteristic region in the virtual model;
an adjusting unit 73, configured to obtain deformation information of a feature region in the target object, and adjust and display the virtual model based on the deformation information of the feature region; wherein the deformation information at least includes change information of depth information corresponding to the characteristic region of the target object.
Here, the electronic device may be a device having an image processing function, and the specific form of the electronic device is not limited in this embodiment, for example, the electronic device may be a computer (PC or notebook computer) or the like.
The obtaining of the at least one image information may be obtaining the image information with depth information. The specific obtaining mode can be that at least one piece of pre-stored image information aiming at the target object is directly obtained from the memory; it is also possible to obtain image information for one or more angles of the target object by means of at least one image acquisition unit which is capable of acquiring depth information.
The difference from the foregoing embodiment is that, in this embodiment, the determining the virtual model corresponding to the target object includes:
establishing an intermediate model corresponding to the target object, wherein the intermediate model and the target object have the same key features, and the proportion between the key features is consistent with that of the target object;
and matching at least one characteristic region contained in the intermediate model with at least one characteristic region contained in the virtual model to obtain a mapping relation between the characteristic region of the intermediate model and the characteristic region of the virtual model, so that the target object is matched with the virtual model. It should be noted that the virtual model may be a three-dimensional image of the same type as the target object, for example, if the target object is a character, the corresponding intermediate model may be a relatively compact three-dimensional model corresponding to the character; and then, corresponding the intermediate model to each characteristic region of the final virtual model, namely matching the target object with each characteristic region in the final virtual model.
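The two-stage matching in this embodiment, target-object feature regions matched to the intermediate model and the intermediate model's regions matched to the virtual model, amounts to composing two mappings; all region names below are hypothetical:

```python
# Hypothetical sketch: compose the target->intermediate and
# intermediate->virtual feature-region mappings into one mapping, so that
# the target object is matched with the virtual model.

def compose_mapping(target_to_mid, mid_to_virtual):
    return {t: mid_to_virtual[m] for t, m in target_to_mid.items()
            if m in mid_to_virtual}

t2m = {"eye": "mid_eye", "mouth": "mid_mouth"}
m2v = {"mid_eye": "virt_eye", "mid_mouth": "virt_mouth"}
assert compose_mapping(t2m, m2v) == {"eye": "virt_eye", "mouth": "virt_mouth"}
```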
Further, the feature region of the target object may be a feature region determined according to actual conditions, and referring to fig. 2 as well, assuming that the target object is a person 21, the feature region may be determined as several basic feature regions of eyes, a mouth, and a nose; if a virtual model 22 with better simulation effect is to be obtained, the feature areas can be increased, for example, more feature areas such as ears, hairline, eyebrows, etc. can be increased.
It should be understood that the foregoing description has been made on the target object as a human being, but actually there may be other types of target objects, for example, a small animal, and then the corresponding characteristic region may include characteristic regions of legs, tails and the like in addition to eyes, mouth and nose. In other words, the feature region of the target object may be related to the type of the target object, and the corresponding relationship may be a preset corresponding relationship.
The manner of determining the virtual model corresponding to the target object based on the at least one piece of image information of the target object may be to directly use the characteristic regions in the image information of the target object to establish the corresponding virtual model; when the virtual model is established, the depth information in the image information needs to be incorporated, so that the established virtual model can reflect the depth information.
Further, referring to fig. 2, the image information of the target object may be associated with the virtual model in such a manner that the contour of the virtual model and the relative positional relationship between the feature regions are determined based on the contour of the target object in the image information of the target object and the relative positional relationship between the feature regions, so that the feature regions of the target object and the virtual feature regions in the virtual model correspond to each other.
Further, the acquiring deformation information of the feature region in the target object includes:
within a first preset time length, obtaining the change information of the depth information corresponding to each pixel point in at least one characteristic region of the target object, and forming the deformation information of the characteristic region in the target object at least based on the change information containing the depth information corresponding to each pixel point.
The deformation information of the characteristic region in the target object is obtained, and may be the deformation information of the characteristic region in the target object obtained in a preset time period. The deformation information can represent the position change information of the pixel points of each characteristic region of the target object and can also represent the change information of the depth information. The first preset duration may be set according to an actual situation, for example, may be 10 seconds, and if the purpose of more accurately matching and adjusting the virtual model without delay is to be achieved, the first preset duration may be set to be smaller, for example, may have a length of 1 second, 30ms, and the like, which is not exhaustive here.
The adjusting unit is used for calculating and obtaining corresponding adjusting information of the depth information of each pixel point in the corresponding characteristic region in the virtual model at least based on the change information of the depth information corresponding to each pixel point in the characteristic region;
and adjusting and displaying the virtual model based on the calculated adjustment information aiming at the depth information of each pixel point in the characteristic region in the virtual model.
And determining deformation information corresponding to the mouth change in the virtual model based on the proportional relation between the target object and the virtual model. For example, when the target object corresponds to the virtual model in a ratio of 1:1, the deformation information of the target object may be taken directly as the deformation information of the virtual model, and the virtual model is then adjusted based on that deformation information and finally displayed. When the target object and the virtual model do not correspond in a ratio of 1:1, the outline of the virtual model and of each virtual feature region may be larger or smaller than the corresponding size of the target object in the image information, and the corresponding deformation in the virtual model is then determined based on the proportional correspondence between the two.
Further, the deformation of the three-dimensional model may be tracked in real time; that is, the deformation of the three-dimensional model over time is tracked using a correlation technique. One possible technique is non-rigid deformation. The non-rigid deformation is obtained by estimating local affine transformations from A to B and establishing the optimal deformation relation from A to B through an optimization method. This deformation relation can be regarded as a function transformation f: A->B, and the goal is to make the transformed A as close as possible to B; that is, the optimization problem is to minimize ||f(A) - B||^2. Usually, an appropriate number of points is selected on A by uniform sampling, a neighborhood of a certain size is defined around each point, and the points within a neighborhood are transformed to B by the same affine transformation. Points at different positions are mapped by different affine transformations, thereby approximating a general non-rigid transformation. The basic guiding idea is that the deformations of points close together should be relatively close, while the deformations of points far apart may be very different. The more sampling points, the finer the deformation and the greater the difficulty of the optimization. The optimization objective is divided into four parts: a first part: the overall correspondence between A and B; a second part: the correspondence constraint; a third part: a local smoothness constraint; and a fourth part: a local rigidity constraint.
Further, when extracting the deformation information of the target object, part of the feature regions of the target object may be blocked. The following two processing methods can be used for this case:
a first kind,
The adjusting unit is used for judging whether the target object is occluded in a first characteristic region;
and if occlusion exists, calculating the deformation information for the first characteristic region based on the deformation information corresponding to the characteristic regions of the target object other than the first characteristic region.
This method can be applied when the occluded area is small; in that scene, the deformation information of the characteristic regions in the peripheral area can be averaged to obtain the deformation information of the occluded region. Of course, the deformation information of one of the peripheral characteristic regions may also be used directly as the deformation information of the occluded region.
A second kind,
The adjusting unit is used for judging whether the target object is occluded in a first characteristic region;
if occlusion exists, detecting whether an image acquisition mode exists for the first characteristic region;
and when an image acquisition mode exists for the first characteristic region, acquiring the deformation information of the first characteristic region within a first preset duration through the image acquisition mode.
This method is suitable for the scene shown in fig. 3, in which a certain region of the target object is blocked, for example by AR glasses, so that real-time deformation information may not be obtainable for that part.
In particular, eye changes are very frequent, so image acquisition units capable of acquiring depth information may be arranged inside the AR glasses, facing the user;
the images acquired by these image acquisition units and the corresponding deformation information (including depth information) are then combined with the deformation information of the other characteristic regions to obtain the deformation information of the whole target object, so as to adjust the virtual model.
Generally, the method can be applied in two kinds of scenes: in one, a three-dimensional model is generated only for the target object, so as to realize interesting imaging;
in the other, a three-dimensional model is established from the image information and then sent to the opposite-end device with which the electronic device communicates, so that the opposite-end device can obtain a real-time updated three-dimensional model of the target object based on the three-dimensional model and the adjustment information corresponding to its deformation, improving the sense of real-time interaction between users.
First, in this embodiment a multi-angle depth camera array is adopted to collect depth maps of the target object in real time and completely capture the surface information of the avatar (the captured subject); a possible arrangement of the camera array of this embodiment is shown in fig. 5.
Depth maps of the avatar collected over a period of time are then fused into a complete three-dimensional model. This is necessary because a single depth map carries considerable noise, and low-quality depth information degrades subsequent accuracy; moreover, a set of separate depth maps does not form a closed surface, owing to the overlapping regions between them, and so cannot be matched to the virtual object as a whole. Depth-map fusion therefore yields a single high-quality, complete three-dimensional model.
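A minimal stand-in for the fusion step, assuming the depth maps are already registered into a common view (a production system would use TSDF-style fusion instead): averaging each pixel over the frames in which it was validly observed suppresses the per-frame noise the passage mentions.

```python
import numpy as np

def fuse_depth_maps(depth_maps, valid_masks):
    """Average each pixel's depth over the frames in which it was validly
    observed; never-observed pixels stay 0. Averaging suppresses per-frame
    sensor noise (a crude stand-in for true depth-map fusion)."""
    depth_maps = np.asarray(depth_maps, dtype=float)
    valid_masks = np.asarray(valid_masks, dtype=float)
    counts = valid_masks.sum(axis=0)                 # observations per pixel
    summed = (depth_maps * valid_masks).sum(axis=0)  # valid depths only
    return np.where(counts > 0, summed / np.maximum(counts, 1), 0.0)

# Three noisy 1x2-pixel frames; pixel (0, 1) is missing in the first frame.
maps = [[[2.0, 0.0]], [[2.2, 3.0]], [[1.8, 3.2]]]
masks = [[[1, 0]], [[1, 1]], [[1, 1]]]
fused = fuse_depth_maps(maps, masks)
```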
A correspondence is then established between the avatar model A and the virtual object. For example, the three-dimensional model of a person is associated part by part with the virtual object, particularly at key regions such as the eyes, nose, chin, and elbows. In this step, the key parts can be located in the three-dimensional model by machine learning.
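Assuming the machine-learning detector has already produced name-to-vertex-index dictionaries of key parts for both models, the correspondence itself reduces to pairing same-named parts; the sketch below is illustrative only, with hypothetical names and indices:

```python
def build_correspondence(avatar_landmarks, virtual_landmarks):
    """Pair each detected key part of avatar model A (eyes, nose, chin,
    elbows, ...) with the same-named key part of the virtual object.
    Detection itself (machine learning in the text) is assumed done already."""
    common = sorted(set(avatar_landmarks) & set(virtual_landmarks))
    return {name: (avatar_landmarks[name], virtual_landmarks[name])
            for name in common}

# Hypothetical vertex indices of key parts in each model.
corr = build_correspondence({"nose": 10, "chin": 42},
                            {"nose": 3, "chin": 7, "tail": 99})
```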
The deformation of the three-dimensional model is tracked in real time, i.e. the way the model deforms over time is tracked using related techniques; one feasible technique is non-rigid deformation. First, a three-dimensional model B of the current frame is generated from the multi-angle depth maps; second, a deformation relation from A to B is established by estimating local affine transformations at the key parts. Because the target is inevitably occluded during movement, B is generally an incomplete model of lower quality than A, and matching B directly against the virtual object would introduce errors. Instead, the complete avatar model A is deformed into the pose of B using the estimated deformation relation, which amounts to obtaining a complete, enhanced version of B.
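A toy version of the local-affine idea, purely illustrative: each key part (deformation node) carries its own affine transform, and every vertex of A is moved by a distance-weighted blend of nearby nodes' transforms. Real non-rigid registration estimates these transforms from data; here they are simply given:

```python
import numpy as np

def apply_local_affine(vertices, nodes, affines, translations, sigma=1.0):
    """Move every vertex by a Gaussian-distance-weighted blend of the local
    affine transforms attached to the deformation nodes (key parts)."""
    vertices = np.asarray(vertices, dtype=float)
    nodes = np.asarray(nodes, dtype=float)
    out = np.zeros_like(vertices)
    for i, v in enumerate(vertices):
        d2 = ((nodes - v) ** 2).sum(axis=1)   # squared distance to each node
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        w /= w.sum()                          # normalized blend weights
        for k, n in enumerate(nodes):
            out[i] += w[k] * (affines[k] @ (v - n) + n + translations[k])
    return out

# One node with an identity rotation and a unit translation along x:
moved = apply_local_affine([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]],
                           nodes=[[0.0, 0.0, 0.0]],
                           affines=[np.eye(3)],
                           translations=[np.array([1.0, 0.0, 0.0])])
```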
A deformation model of the virtual object is then generated. According to the correspondence between the avatar model A and the virtual object, the deformation of A is converted into a deformation of the virtual object, driving the virtual object to move. As shown in fig. 6, the virtual model may be a preset three-dimensional avatar of a puppy; each feature region of the person is matched one by one with the corresponding feature region of the puppy avatar to obtain the mapping relationship.
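Transferring the deformation across that mapping can be sketched as follows, assuming per-region correspondence indices and precomputed displacements of A (all names and shapes are hypothetical):

```python
def drive_virtual_object(virtual_vertices, corr, avatar_displacements):
    """For each matched feature region, apply avatar model A's displacement
    to the corresponding vertex of the virtual object (e.g. the puppy).
    corr maps region name -> (avatar_vertex_index, virtual_vertex_index)."""
    out = [list(v) for v in virtual_vertices]
    for name, (a_idx, v_idx) in corr.items():
        dx, dy, dz = avatar_displacements[a_idx]
        out[v_idx] = [out[v_idx][0] + dx, out[v_idx][1] + dy, out[v_idx][2] + dz]
    return out

puppy = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
corr = {"mouth": (0, 1)}                 # A's vertex 0 drives puppy vertex 1
displacements = {0: (0.0, 0.0, 0.2)}     # tracked motion of A's mouth vertex
driven = drive_virtual_object(puppy, corr, displacements)
```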
Finally, real-time display: the change of the virtual object is shown on the display terminal in real time.
Thus, with this solution, a virtual model of the target object can be established from images containing depth information, and then adjusted according to deformation information carrying depth information to obtain the adjusted virtual model. Because the virtual model is obtained without capturing the target object's motion through sensors, processing the deformation adjustment of the virtual model is more convenient, and the solution can feasibly be applied in more scenes.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a device, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. An information processing method applied to an electronic device, the method comprising:
acquiring at least one image information for a target object; the image information consists of at least one pixel point, and the image information at least comprises depth information corresponding to each pixel point;
determining a virtual model corresponding to a target object based on at least one image information for the target object; the target object comprises at least one characteristic region, the virtual model comprises at least one virtual characteristic region, and the characteristic region in the target object corresponds to the virtual characteristic region in the virtual model;
acquiring deformation information of a characteristic region in the target object, and adjusting and displaying the virtual model based on the deformation information of the characteristic region; wherein the deformation information at least comprises the change information of the depth information corresponding to the characteristic region of the target object;
when deformation information of the characteristic region in the target object is obtained, the method further includes:
judging whether the target object has occlusion aiming at the first characteristic region;
if the occlusion exists, calculating deformation information aiming at a first characteristic region of the target object based on deformation information corresponding to other characteristic regions except the first characteristic region; or detecting whether an image acquisition mode exists for the first characteristic region, and acquiring deformation information of the first characteristic region within a first preset time length through the image acquisition mode when the image acquisition mode exists for the first characteristic region.
2. The method according to claim 1, wherein the obtaining deformation information of the feature region in the target object comprises:
within a first preset time, obtaining change information of depth information corresponding to each pixel point in at least one characteristic region of the target object, and forming deformation information of the characteristic region in the target object at least based on the change information containing the depth information corresponding to each pixel point.
3. The method according to claim 2, wherein the adjusting and displaying the virtual model based on the deformation information of the feature region comprises:
calculating to obtain corresponding adjustment information of the depth information of each pixel point in the corresponding characteristic region in the virtual model at least based on the change information of the depth information corresponding to each pixel point in the characteristic region;
and adjusting and displaying the virtual model based on the calculated adjustment information aiming at the depth information of each pixel point in the characteristic region in the virtual model.
4. The method of claim 1, wherein the determining the virtual model corresponding to the target object comprises:
establishing an intermediate model corresponding to the target object, wherein the intermediate model and the target object have the same key features, and the proportion between the key features is consistent with that of the target object;
and matching at least one characteristic region contained in the intermediate model with at least one characteristic region contained in the virtual model to obtain a mapping relation between the characteristic region of the intermediate model and the characteristic region of the virtual model, so that the target object is matched with the virtual model.
5. An electronic device, characterized in that the electronic device comprises:
an acquisition unit configured to acquire at least one image information for a target object; the image information consists of at least one pixel point, and the image information at least comprises depth information corresponding to each pixel point;
the model establishing unit is used for determining a virtual model corresponding to a target object based on at least one image information aiming at the target object; the target object comprises at least one characteristic region, the virtual model comprises at least one virtual characteristic region, and the characteristic region in the target object corresponds to the virtual characteristic region in the virtual model;
the adjusting unit is used for acquiring deformation information of a characteristic region in the target object, and adjusting and displaying the virtual model based on the deformation information of the characteristic region; wherein the deformation information at least comprises the change information of the depth information corresponding to the characteristic region of the target object;
the adjusting unit is used for judging whether the target object has occlusion aiming at the first characteristic region;
if the occlusion exists, calculating deformation information aiming at a first characteristic region of the target object based on deformation information corresponding to other characteristic regions except the first characteristic region; or detecting whether an image acquisition mode exists for the first characteristic region, and acquiring deformation information of the first characteristic region within a first preset time length through the image acquisition mode when the image acquisition mode exists for the first characteristic region.
6. The electronic device according to claim 5, wherein the adjusting unit is configured to obtain, within a first preset duration, change information of depth information corresponding to each pixel point in at least one feature region of the target object, so as to compose deformation information of the feature region in the target object based on at least the change information including the depth information corresponding to each pixel point.
7. The electronic device according to claim 6, wherein the adjusting unit is configured to calculate, based on at least change information of depth information corresponding to each pixel point in the feature region, corresponding adjustment information of depth information of each pixel point in the corresponding feature region in the virtual model;
and adjusting and displaying the virtual model based on the calculated adjustment information aiming at the depth information of each pixel point in the characteristic region in the virtual model.
8. The electronic device according to claim 5, wherein the model building unit is configured to build an intermediate model corresponding to the target object, wherein the intermediate model and the target object have the same key features, and a ratio between key features is consistent with the target object; and matching at least one characteristic region contained in the intermediate model with at least one characteristic region contained in the virtual model to obtain a mapping relation between the characteristic region of the intermediate model and the characteristic region of the virtual model, so that the target object is matched with the virtual model.
CN201710208681.4A 2017-03-31 2017-03-31 Information processing method and electronic equipment Active CN107066095B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710208681.4A CN107066095B (en) 2017-03-31 2017-03-31 Information processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710208681.4A CN107066095B (en) 2017-03-31 2017-03-31 Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN107066095A CN107066095A (en) 2017-08-18
CN107066095B true CN107066095B (en) 2020-09-25

Family

ID=59603181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710208681.4A Active CN107066095B (en) 2017-03-31 2017-03-31 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN107066095B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764135B (en) * 2018-05-28 2022-02-08 北京微播视界科技有限公司 Image generation method and device and electronic equipment
CN109784299A (en) * 2019-01-28 2019-05-21 Oppo广东移动通信有限公司 Model treatment method, apparatus, terminal device and storage medium
CN113689325A (en) * 2021-07-12 2021-11-23 深圳数联天下智能科技有限公司 Method for digitizing beautiful eyebrows, electronic device and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102509343A (en) * 2011-09-30 2012-06-20 北京航空航天大学 Binocular image and object contour-based virtual and actual sheltering treatment method
CN102622774A (en) * 2011-01-31 2012-08-01 微软公司 Living room movie creation
CN104050712A (en) * 2013-03-15 2014-09-17 索尼公司 Method and apparatus for establishing three-dimensional model
CN105513114A (en) * 2015-12-01 2016-04-20 深圳奥比中光科技有限公司 Three-dimensional animation generation method and device

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN102508991B (en) * 2011-09-30 2014-07-16 北京航空航天大学 Method of constructing virtual experiment teaching scene based on image material
JP2014238731A (en) * 2013-06-07 2014-12-18 株式会社ソニー・コンピュータエンタテインメント Image processor, image processing system, and image processing method




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant