CN107343148B - Image completion method, apparatus and terminal - Google Patents
- Publication number
- CN107343148B (grant) · CN201710640059.0A / CN201710640059A (application)
- Authority
- CN
- China
- Prior art keywords
- human body
- model
- user
- image
- completion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Abstract
The invention discloses an image completion method, apparatus and terminal. The method comprises: obtaining a human body 3D model of a user using structured light; according to the human body 3D model of the user, determining a target submodel for completing the human body 3D model, wherein the target submodel is a model corresponding to a human organ; performing completion processing on the human body 3D model using the target submodel; and generating an image of the user according to the completed human body 3D model. The human body 3D model is thereby completed in real time during shooting, so that the user's body appears whole and healthy in the generated image. This improves the visual effect of the image and meets the user's needs without any manual operation, saving the user's effort and improving the user experience.
Description
Technical field
The present invention relates to the field of camera technology, and more particularly to an image completion method, apparatus and terminal.
Background art
With the rapid development of network and electronic technology and the rapid proliferation of terminals, terminal functions have grown increasingly powerful. For example, more and more terminals are equipped with cameras, which users can use to take photos, record video, video chat, and so on.

In daily life, some users have physical disabilities, for example missing hands, a nose, or feet; correspondingly, their bodies also appear incomplete in the images they shoot with a camera. Yet many disabled users wish that, in the images they shoot, their own bodies appeared whole and healthy.

In the prior art, after a user has shot an image, processing tools such as Photoshop can be used to edit the captured image so that the user's body appears whole and healthy in the result. However, this approach requires image processing in post-production; the process is complex, wastes the user's effort, and gives a poor user experience.
Summary of the invention
The present invention aims to solve at least one of the above technical problems, at least to some extent.

To this end, the present application proposes an image completion method that completes a human body 3D model in real time during shooting, so that the user's body appears whole and healthy in the generated image. This improves the visual effect of the image and meets the user's needs without requiring any manual operation, saving the user's effort and improving the user experience.

The present application also proposes an image completion apparatus.

The present application also proposes a terminal.

The present application also proposes a computer-readable storage medium.
A first aspect of the present application proposes an image completion method, the method comprising:

obtaining a human body 3D model of a user using structured light;

according to the human body 3D model of the user, determining a target submodel for completing the human body 3D model, wherein the target submodel is a model corresponding to a human organ;

performing completion processing on the human body 3D model using the target submodel; and

generating an image of the user according to the completed human body 3D model.
The image completion method provided by the embodiments of the present application first obtains a human body 3D model of the user using structured light, then determines, according to that model, a target submodel for completing it, uses the target submodel to perform completion processing on the human body 3D model, and finally generates an image of the user according to the completed model. The human body 3D model is thereby completed in real time during shooting, so that the user's body appears whole and healthy in the generated image; this improves the visual effect of the image and meets the user's needs without any manual operation, saving the user's effort and improving the user experience.
A second aspect of the present application proposes an image completion apparatus, the apparatus comprising:

a first obtaining module, configured to obtain a human body 3D model of a user using structured light;

a first determining module, configured to determine, according to the human body 3D model of the user, a target submodel for completing the human body 3D model, wherein the target submodel is a model corresponding to a human organ;

a processing module, configured to perform completion processing on the human body 3D model using the target submodel; and

a generating module, configured to generate an image of the user according to the completed human body 3D model.
The image completion apparatus provided by the embodiments of the present application first obtains a human body 3D model of the user using structured light, then determines, according to that model, a target submodel for completing it, uses the target submodel to perform completion processing on the human body 3D model, and finally generates an image of the user according to the completed model. The human body 3D model is thereby completed in real time during shooting, so that the user's body appears whole and healthy in the generated image; this improves the visual effect of the image and meets the user's needs without any manual operation, saving the user's effort and improving the user experience.
A third aspect of the present application proposes a terminal comprising a memory, a processor and an image processing circuit. The memory is configured to store executable program code; the processor implements the image completion method of the first aspect by reading the executable program code stored in the memory and the depth image output by the image processing circuit.

The terminal provided by the embodiments of the present application first obtains a human body 3D model of the user using structured light, then determines, according to that model, a target submodel for completing it, uses the target submodel to perform completion processing on the human body 3D model, and finally generates an image of the user according to the completed model. The human body 3D model is thereby completed in real time during shooting, so that the user's body appears whole and healthy in the generated image; this improves the visual effect of the image and meets the user's needs without any manual operation, saving the user's effort and improving the user experience.
A fourth aspect of the present application proposes a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the image completion method of the first aspect.

The computer-readable storage medium provided by the embodiments of the present application can be installed in any terminal with a camera function. When the user shoots an image, executing the image completion method stored on the medium completes the human body 3D model in real time during shooting, so that the user's body appears whole and healthy in the generated image; this improves the visual effect of the image and meets the user's needs without any manual operation, saving the user's effort and improving the user experience.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will partly become apparent from that description, or will be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of an image completion method according to an embodiment of the present application;

Fig. 1A is a speckle distribution diagram of non-uniform structured light according to an embodiment of the present application;

Fig. 1B is a speckle distribution diagram of uniform structured light according to an embodiment of the present application;

Fig. 2 is a flowchart of an image completion method according to another embodiment of the present application;

Fig. 3 is a structural diagram of an image completion apparatus according to an embodiment of the present application;

Fig. 4 is a structural diagram of an image completion apparatus according to another embodiment of the present application;

Fig. 5 is a structural diagram of a terminal according to an embodiment of the present application;

Fig. 6 is a structural diagram of an image processing circuit according to an embodiment of the present application.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements, or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary and intended to explain the present invention; they should not be construed as limiting the invention.

It will be appreciated that, although terms such as "first" and "second" may be used herein to describe various elements, these elements should not be limited by these terms, which are only used to distinguish one element from another. For example, without departing from the scope of the present invention, a first client could be termed a second client, and similarly a second client could be termed a first client. The first client and the second client are both clients, but they are not the same client.
The image completion method, apparatus and terminal of the embodiments of the present invention are described below with reference to the accompanying drawings.
Embodiments of the present invention address the problem in the prior art that, in order for a disabled user's body to appear whole and healthy in a captured image, image processing must be performed in post-production, which is a complex process that wastes the user's effort and yields a poor user experience; to this end, an image completion method is proposed.
The image completion method provided by the embodiments of the present invention first obtains a human body 3D model of the user using structured light, then determines, according to that model, a target submodel for completing it, uses the target submodel to perform completion processing on the human body 3D model, and finally generates an image of the user according to the completed model. The human body 3D model is thereby completed in real time during shooting, so that the user's body appears whole and healthy in the generated image; this improves the visual effect of the image and meets the user's needs without any manual operation, saving the user's effort and improving the user experience.
The image completion method of the embodiments of the present application is described below with reference to Fig. 1.

Fig. 1 is a flowchart of an image completion method according to an embodiment of the present application.

As shown in Fig. 1, the method comprises:
Step 101: obtain a human body 3D model of the user using structured light.
The image completion method provided by the embodiments of the present invention may be executed by the image completion apparatus provided by the embodiments of the present invention. Specifically, the image completion apparatus may be configured in any terminal with a camera function. There are many types of such terminals, which may be selected according to the application, for example mobile phones, computers, and the like.

Specifically, an imaging device may be arranged in the terminal to capture images of the user and to obtain the human body 3D model of the user.

The imaging device may include a structured light projector and an image sensor, used respectively to project structured light and to capture the structured light image; alternatively, the structured light projector and the image sensor may be provided separately in the terminal, which is not limited here.
In a specific implementation, when the user shoots an image, the structured light projector in the imaging device may project a structured light pattern onto the region where the user is located, where the structured light pattern may be laser stripes, Gray codes, sinusoidal fringes, a randomly arranged speckle pattern, or the like. A human body depth image of the user is then obtained by sensing the deformation of the structured light pattern and by triangulation and the like.
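The triangulation step mentioned above can be illustrated with the standard disparity-to-depth relation for a projector-camera rig: a pattern point observed shifted by a disparity d maps to depth z = f·b/d. This is a minimal sketch; the focal length and baseline values are illustrative assumptions, not taken from the patent.

```python
def depth_from_disparity(disparity_px, focal_px=580.0, baseline_mm=75.0):
    """Depth (mm) of one pattern point from its observed disparity (px).

    focal_px and baseline_mm are assumed rig parameters, not values
    from the patent: z = f * b / d.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# With these assumed parameters, a point displaced by 29 px lies 1500 mm away.
z_mm = depth_from_disparity(29.0)
```

Applying this relation per pattern point over the whole image yields the human body depth image referred to in the text.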
The structured light may be non-uniform structured light.

Specifically, non-uniform structured light can be formed in several ways.

For example, frosted glass may be irradiated by an infrared laser light source to generate interference, forming non-uniform structured light in the region where the user is located.

Alternatively, non-uniform structured light may be formed by projection through a diffractive optical element. Specifically, after a single laser light source is collimated, non-uniform structured light may be formed in the region where the user is located through one or more diffractive optical elements.

Alternatively, a randomly distributed laser array may be passed directly through a diffractive optical element to form, in the region where the user is located, speckle with an irregular distribution consistent with the laser array, i.e. non-uniform structured light. In this way the detailed distribution of the speckle can also be controlled, which is not limited here.
It will be appreciated that when a body surface is illuminated with non-uniform and with uniform structured light respectively, the speckle distribution of the non-uniform structured light is as shown in Fig. 1A, and that of the uniform structured light is as shown in Fig. 1B. As can be seen from Figs. 1A and 1B, within a region of the same size, Fig. 1A contains 11 spots while Fig. 1B contains 16; that is, non-uniform structured light contains fewer spots than uniform structured light. Therefore, using non-uniform structured light to obtain the human body depth image of the user consumes less energy, gives a better energy-saving effect, and improves the user experience.
Further, after the human body depth image of the user is obtained, the human body 3D model of the user can be obtained from it.

Specifically, the human body 3D model of the user can be obtained in several ways.

For example, multiple human body depth images may be acquired and preprocessed by denoising, smoothing, foreground-background segmentation and the like, so as to separate the background, environment, etc. that may be included in the depth images from the human body. Dense point cloud data are then obtained from the human body depth images, and a point-cloud mesh of the human body depth information is reconstructed. The reconstructed multi-frame depth images are then fused and registered to generate the human body 3D model.
Alternatively, a structured-light infrared image of the human body may be obtained by the structured-light technique, and a speckle infrared image of the human body extracted from it. The movement distance of each speckle point of the speckle infrared image relative to a reference speckle image is calculated, and the depth value of each speckle point is obtained from the movement distance and the position information of the reference speckle image, so that a human body depth image is obtained from the depth values. The structured-light infrared image is then filtered to obtain a human body infrared image, and the human body 3D model of the user is obtained from the human body depth image and the infrared image.
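Either route above ends with a human body depth image. As a hedged sketch of how such a depth image yields the dense point cloud data mentioned in the first route, the pixels can be back-projected under a pinhole camera model; the intrinsics below are illustrative assumptions.

```python
def depth_to_points(depth_mm, fx=580.0, fy=580.0, cx=1.0, cy=1.0):
    """Back-project a tiny depth image (list of rows, values in mm) into
    3D camera-space points using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy.

    fx, fy, cx, cy are assumed intrinsics. Zero-depth pixels stand for
    background removed by the segmentation step described above.
    """
    points = []
    for v, row in enumerate(depth_mm):
        for u, z in enumerate(row):
            if z > 0:
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# 2x2 toy depth image: two foreground pixels, two segmented-out zeros.
cloud = depth_to_points([[0.0, 1500.0],
                         [1500.0, 0.0]])
```

Running this over every frame, then fusing and registering the resulting clouds, corresponds to the multi-frame reconstruction the text describes.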
Step 102: according to the human body 3D model of the user, determine a target submodel for completing the human body 3D model.

The target submodel is a model corresponding to a human organ.

It will be appreciated that a normal human body 3D model includes all human organs, whereas the human body 3D model of a disabled user, compared with a normal one, lacks some organ. The target submodel is the model of the human organ used when completing the user's human body 3D model into a normal human body 3D model.
Specifically, step 102 can be implemented by the following steps 102a-102b.

Step 102a: according to the human body 3D model of the user, determine the type of the target submodel and the human body feature information of the user.

The type of the target submodel is the type corresponding to the organ missing from the user's human body 3D model, for example arm, hand, foot, leg, nose or ear.

The human body feature information may include at least one of the following: height, weight, gender, limb length, body proportions, facial features, etc.

Specifically, the facial features may include features of each organ such as the eyes and nose, for example eye size and lip thickness, and may also include the positions of the organs.
In a specific implementation, after the user's human body 3D model is obtained, the human organ missing from it can be determined by comparing it with a normal human body 3D model, and the type of the target submodel determined accordingly. The human body feature information of the user is determined by analyzing the human body 3D model.

Step 102b: from the submodel library corresponding to the type of the target submodel, select the target submodel that matches the human body feature information of the user.

In a specific implementation, a model library can be established in advance and divided into multiple submodel libraries by human organ type, with each submodel in each submodel library corresponding to different human body feature information.
Correspondingly, before step 102b the method may further include:

obtaining a human body 3D model library, wherein every 3D model in the library includes all human organs;

parsing all the 3D models in the human body 3D model library to determine the correspondence between human body feature information and submodels.

Specifically, the human body 3D model library includes a large number of normal human body 3D models; by parsing all of them, the correspondence between human body feature information and submodels can be determined.
Thus, after the type of the target submodel and the human body feature information of the user are determined, the submodel library can be determined according to the type of the target submodel, and the target submodel matching the user's human body feature information can be selected from that library according to the user's feature information and the determined correspondence between feature information and submodels.

In a specific implementation, a threshold can be preset, and the user's human body feature information matched against the feature information in the submodel library; when the matching degree reaches the preset threshold, the submodel corresponding to that feature information can be determined as the target submodel.
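The threshold-based selection described above might be sketched as follows. The matching-degree measure (here, the fraction of feature fields on which two records agree) and the feature records themselves are illustrative assumptions, since the patent does not specify them.

```python
def matching_degree(user_feats, candidate_feats):
    """Fraction of feature fields on which the two records agree — a
    simple stand-in for the patent's unspecified matching measure."""
    keys = set(user_feats) | set(candidate_feats)
    hits = sum(user_feats.get(k) == candidate_feats.get(k) for k in keys)
    return hits / len(keys)

def select_target_submodel(user_feats, submodel_library, threshold=0.8):
    """Return the first submodel whose feature record matches the user's
    feature information to at least the preset threshold."""
    for feats, submodel in submodel_library:
        if matching_degree(user_feats, feats) >= threshold:
            return submodel
    return None

# Illustrative arm-type library entries (feature record -> submodel id).
library = [({"gender": "male", "height": "170-175", "build": "medium"}, "B'"),
           ({"gender": "male", "height": "175-180", "build": "medium"}, "C'")]
user = {"gender": "male", "height": "175-180", "build": "medium"}
chosen = select_target_submodel(user, library)
```

The first entry agrees on only two of three fields (about 0.67, below the 0.8 threshold), so the exact match is selected instead.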
As an example, assume the preset threshold is 80% and the model library is divided into multiple submodel libraries such as an arm-type library and a leg-type library. The arm-type submodel library includes: submodel A for "height below 160 centimetres (cm), female", submodel B for "height 160-170 cm, female", submodel C for "height 170-175 cm, female", submodel D for "height 175 cm or above, female", submodel A' for "height below 170 cm, male", submodel B' for "height 170-175 cm, male", submodel C' for "height 175-180 cm, male", and submodel D' for "height 180 cm or above, male". The leg-type submodel library includes: submodel E for "height below 160 cm, female", submodel F for "height 160-170 cm, female", submodel G for "height 170-175 cm, female", submodel H for "height 175 cm or above, female", submodel E' for "height below 170 cm, male", submodel F' for "height 170-175 cm, male", submodel G' for "height 175-180 cm, male", and submodel H' for "height 180 cm or above, male". If it is determined from the user's human body 3D model that the user lacks an arm, i.e. the target submodel type is arm, and the user's human body feature information is "height 176 cm, male", then since this exactly matches "height 175-180 cm, male" in the arm-type submodel library, the target submodel can be determined to be submodel C' corresponding to "height 175-180 cm, male".
It should be noted that the preset model library may be stored in the terminal or in the cloud, which is not limited here. Moreover, the preset model library may be updated at preset time intervals to improve the accuracy of determining the target submodel.
Step 103: perform completion processing on the human body 3D model using the target submodel.

Step 104: generate an image of the user according to the completed human body 3D model.

Specifically, after the target submodel is determined, it can be attached, according to the position of the corresponding organ in a normal human body 3D model, to the corresponding position in the user's human body 3D model, yielding the completed human body 3D model, from which the image of the user is then generated.
It will be appreciated that, after completion processing of the human body 3D model, in order that in the generated image the supplemented organ matches the skin tone of the user's original organs and the junction appears natural, in embodiments of the present invention the saturation, brightness, pixel values and so on of the supplemented organ region can be adjusted according to the skin tone of the user's original organs, to improve the visual effect of the generated image of the user.
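A minimal sketch of the pixel-value adjustment just described: shift the supplemented region's mean colour toward a reference sampled from the user's own skin. The mean-shift approach and the colour values are illustrative assumptions; a real pipeline would also adjust saturation and brightness and feather the junction, as the text notes.

```python
def match_skin_tone(patch_rgb, reference_mean_rgb):
    """Shift each RGB channel of the supplemented organ's pixel patch so
    its mean colour equals the mean colour of the user's original skin.
    Deliberately minimal: a per-channel offset, clamped to [0, 255]."""
    n = len(patch_rgb)
    means = [sum(px[c] for px in patch_rgb) / n for c in range(3)]
    offsets = [reference_mean_rgb[c] - means[c] for c in range(3)]
    return [tuple(min(255.0, max(0.0, px[c] + offsets[c])) for c in range(3))
            for px in patch_rgb]

# Two-pixel toy patch shifted toward an assumed skin-tone mean.
adjusted = match_skin_tone([(100, 80, 60), (120, 100, 80)],
                           (130.0, 100.0, 80.0))
```

After the shift, the patch's mean equals the reference mean while the internal contrast of the patch is preserved.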
It should be noted that, in one possible implementation, when selecting the target submodel from the submodel library corresponding to the target submodel type according to the user's human body feature information, there may be multiple entries in the library whose matching degree with the user's feature information reaches the preset threshold. In embodiments of the present invention, if these matching degrees differ, the submodel whose feature information has the highest matching degree with the user's feature information can be determined as the target submodel. If multiple entries tie for the highest matching degree, the submodels corresponding to all of them can be determined as target submodels; each target submodel is then used in turn to complete the human body 3D model, an image of the user is generated from each completed model, and the user selects the most suitable of these images as the final image according to their effects.
In addition, the user may, as needed, select a suitable submodel from the submodel library corresponding to the target submodel type as the target submodel, and use it to perform completion processing on the human body 3D model; for the specific completion process, reference may be made to the description of step 103, which is not repeated here.
Further, in embodiments of the present invention, whether to perform completion processing on the user's human body 3D model, so that the user's body appears whole and healthy in the generated image, may also be set as needed. That is, before step 101 the method may further include:

obtaining an image completion instruction triggered by the user;

or,

determining that the currently captured image of the user meets an image completion condition.

Specifically, the user may trigger the image completion instruction by clicking, long-pressing or sliding a button with the image completion function, so that after obtaining the instruction the image completion apparatus obtains the human body 3D model of the user using structured light, further determines the target submodel for completing the model according to the human body 3D model, and uses the target submodel to perform completion processing, so that the user's body appears whole and healthy in the generated image.
Alternatively, the image completion condition may be preset as: the currently captured image of the user does not include all human organs. When the currently captured image meets this condition, the human body 3D model of the user is obtained using structured light, the target submodel for completing the model is determined according to the human body 3D model, and completion processing is performed using the target submodel, so that the user's body appears whole and healthy in the generated image.

If no image completion instruction triggered by the user is obtained, and the currently captured image of the user does not meet the image completion condition, no completion processing is performed on the currently captured image of the user.
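The two triggers above — an explicit user instruction, or a captured frame failing the all-organs condition — can be sketched as a single gate. The organ checklist below is an illustrative assumption, not taken from the patent.

```python
# Assumed checklist standing in for "all human organs" in the condition.
REQUIRED_ORGANS = {"head", "torso", "left_arm", "right_arm",
                   "left_leg", "right_leg"}

def should_run_completion(user_triggered, detected_organs):
    """Completion runs if the user issued the instruction (click,
    long-press or slide on the completion button) OR the current frame
    fails the preset condition of containing all required organs."""
    return user_triggered or not REQUIRED_ORGANS <= set(detected_organs)

# No instruction, but the frame is missing the right arm -> run completion.
run = should_run_completion(False, {"head", "torso", "left_arm",
                                    "left_leg", "right_leg"})
```

When neither trigger fires, the frame passes through untouched, matching the last paragraph above.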
It should be noted that, in embodiments of the present invention, when the user shoots an image, the target submodel may instead be determined only from the user's body image in the currently captured 2D image, rather than from an acquired human body 3D model; structured light is then used to obtain the human body 3D model of the user, the determined target submodel is used to perform completion processing on it, and the image of the user is generated.
The image completion method provided by the embodiments of the present invention first obtains a human body 3D model of the user using structured light, then determines, according to that model, a target submodel for completing it, uses the target submodel to perform completion processing on the human body 3D model, and finally generates an image of the user according to the completed model. The human body 3D model is thereby completed in real time during shooting, so that the user's body appears whole and healthy in the generated image; this improves the visual effect of the image and meets the user's needs without any manual operation, saving the user's effort and improving the user experience.
From the above analysis it can be seen that structured light can be used to obtain the human body 3D model of the user; a target submodel for completing the model is determined according to that model, completion processing is performed using the target submodel, and the image of the user is generated from the completed model. In practice, the user may adopt different postures when shooting an image; this situation is further described below with reference to Fig. 2.
Fig. 2 is a flowchart of an image completion method according to another embodiment of the present invention.

As shown in Fig. 2, the image completion method includes:

Step 201: obtain an image completion instruction triggered by the user.

Step 202: obtain a human body 3D model of the user using structured light.

Step 203: according to the human body 3D model of the user, determine the type of the target submodel and the human body feature information of the user.

Step 204: from the submodel library corresponding to the type of the target submodel, select the target submodel matching the human body feature information of the user.

Step 205: according to the human body 3D model, determine the target posture of the target submodel.

Step 206: according to the target posture, perform completion processing on the human body 3D model using the target submodel.

Step 207: generate an image of the user according to the completed human body 3D model.

For the specific implementation principles and processes of steps 201-204 and steps 206-207, reference may be made to the detailed description in the above embodiment, which is not repeated here.
Specifically, the target pose of the target sub-model may be determined in a variety of ways.
For example, the pose of the target sub-model may be predicted from the current overall pose of the acquired 3D human body model of the user and the poses of a large number of normal 3D human body models in the model library. Specifically, the pose that the organ corresponding to the target sub-model assumes in most of the normal 3D human body models may be determined as the target pose of the target sub-model.
Alternatively, according to an instruction from the user, the pose of the corresponding organ in any selected 3D human body model in the model library may be determined as the target pose of the target sub-model.
Specifically, after the target pose of the target sub-model has been determined, the target sub-model may be grafted into the 3D human body model with the determined target pose.
By completing the user's 3D human body model according to the user's instruction, or according to the poses of normal 3D human body models, the user's pose in the image generated after completion looks more natural, which better meets the user's needs.
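The first strategy above — taking the pose that the corresponding organ assumes in most normal body models — amounts to a simple majority vote. A minimal sketch, assuming a hypothetical data layout in which each library model records a pose label per organ:

```python
from collections import Counter

def predict_target_pose(organ, normal_models):
    # Majority vote: the pose the organ takes in most of the normal
    # 3D body models is used as the target pose for the sub-model.
    poses = [m["organ_poses"][organ]
             for m in normal_models if organ in m["organ_poses"]]
    return Counter(poses).most_common(1)[0][0]

models = [{"organ_poses": {"left_ear": "upright"}},
          {"organ_poses": {"left_ear": "upright"}},
          {"organ_poses": {"left_ear": "tilted"}}]
print(predict_target_pose("left_ear", models))  # upright
```

A real system would represent poses as joint rotations rather than labels and would condition the vote on the user's current overall pose, as the text describes.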
In a possible implementation of the present invention, in order to make the target sub-model blend better into the scene in which the 3D human body model is located, and thereby improve the visual effect of the generated image, completion processing may also be performed on the 3D human body model using the target sub-model according to the depth information of the target sub-model. That is, before step 206, the method may further include:
determining the depth information of the target sub-model according to the 3D human body model.
In specific implementations, the depth information of the target sub-model can be determined in a variety of ways.
For example, it may be determined according to the depth information of the organs already present in the user's 3D human body model and the positional relationships among the organs of a normal human body.
For example, if it is determined from the user's 3D human body model that the user lacks a left ear, the depth information of the user's head may be determined, and the depth information of the left ear may then be derived from the positional relationship between the head and the left ear of a normal human body.
Alternatively, the depth information of the target sub-model may be determined according to the depth information of an existing sub-model of the same type in the user's 3D human body model.
For example, if it is determined from the user's 3D human body model that the user lacks a left ear, the depth information of the existing right ear in the model may be determined, and that depth information may be taken as the depth information of the left ear.
It should be noted that if the user is not facing the camera head-on when the image is captured, determining the depth information of the target sub-model directly from the depth information of a same-type sub-model may be inaccurate. In an embodiment of the present invention, the depth information of the target sub-model may therefore also be determined according to the relationship between the depth information of at least two existing same-type sub-models in the user's 3D human body model, together with the depth information of the sub-model of the same type as the target sub-model.
For example, if it is determined from the user's 3D human body model that the user lacks a left ear, and the depth information of the user's two eyes is identical, the depth information of the left ear can be determined to be identical to that of the right ear. Alternatively, if the depth information of the left eye is smaller than that of the right eye by a difference A, then A is subtracted from the depth information of the right ear to obtain the depth information of the left ear.
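The left-ear example above transfers the right ear's depth, offset by the depth difference observed between a symmetric reference pair such as the two eyes. A minimal sketch, in which the depth values and the left-right symmetry assumption are illustrative:

```python
def infer_missing_depth(known_depth, left_ref_depth, right_ref_depth):
    # Depth of the missing left-side organ = depth of the existing
    # right-side organ, shifted by the offset observed between a
    # symmetric reference pair (e.g. the two eyes). If the user faces
    # the camera head-on the offset is zero and both sides share the
    # same depth.
    offset = right_ref_depth - left_ref_depth  # the difference A in the text
    return known_depth - offset

# Face turned slightly: the left eye is 2 cm nearer than the right eye,
# so the missing left ear is placed 2 cm nearer than the right ear.
left_ear_depth = infer_missing_depth(known_depth=60.0,     # right ear (cm)
                                     left_ref_depth=48.0,  # left eye (cm)
                                     right_ref_depth=50.0) # right eye (cm)
print(left_ear_depth)  # 58.0
```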
Specifically, after the depth information of the target sub-model has been determined, the target sub-model may be grafted into the 3D human body model according to the determined depth information, and the image of the user may be generated according to the completed model. By performing completion processing on the 3D human body model using the target sub-model according to the depth information, the target sub-model can be better blended into the scene where the user is located while the model is being completed, so that the image of the user generated according to the completed 3D human body model is more realistic.
In a preferred implementation, the target sub-model may be grafted into the 3D human body model according to both its depth information and its target pose, so that the target sub-model blends better into the scene where the user is located. The image of the user generated according to the completed 3D human body model is then more realistic and the user's pose more natural, which better meets the user's needs and improves the user experience.
With the image completion method provided in this embodiment of the present invention, after an image completion instruction triggered by the user is acquired, a 3D human body model of the user is obtained using structured light; the type of the target sub-model and the body feature information of the user are determined according to the model; a target sub-model matching the user's body feature information is selected from the sub-model library corresponding to that type; the target pose of the target sub-model is determined according to the 3D human body model; completion processing is then performed on the model using the target sub-model according to the target pose; and finally the image of the user is generated according to the completed model. Thus, during shooting, the 3D human body model is completed according to the target pose of the target sub-model and the depth information, in the 3D human body model, of the human organ to which the target region belongs, so that in the generated image the user's body is complete and the pose is natural. This improves the visual effect of the image and meets the user's needs without manual operation, saving the user effort and improving the user experience.
Fig. 3 is a structural diagram of an image completion device according to an embodiment of the present application.
As shown in Fig. 3, the image completion device comprises:
a first obtaining module 31, configured to obtain a 3D human body model of a user using structured light;
a first determining module 32, configured to determine, according to the 3D human body model of the user, a target sub-model for completing the 3D human body model, wherein the target sub-model is a model corresponding to any human organ;
a processing module 33, configured to perform completion processing on the 3D human body model using the target sub-model; and
a generation module 34, configured to generate an image of the user according to the completed 3D human body model.
The image completion device provided in this embodiment can execute the image completion method provided in the embodiments of the present invention. Specifically, the device can be configured in any terminal with a camera function. There are many types of such terminals, which can be selected according to the application, for example: a mobile phone, a computer, a camera, and so on.
In a possible implementation of this embodiment, the first determining module 32 is specifically configured to:
determine, according to the 3D human body model of the user, the type of the target sub-model and body feature information of the user; and
select, from a sub-model library corresponding to the type of the target sub-model, a target sub-model matching the body feature information of the user.
In another possible implementation of this embodiment, the body feature information includes at least one of the following:
height, weight, gender, limb length, body proportions, and facial features.
It should be noted that the foregoing explanation of the image completion method embodiments also applies to the image completion device of this embodiment and is not repeated here.
With the image completion device provided in the embodiments of the present application, a 3D human body model of the user is first obtained using structured light; a target sub-model for completing the model is then determined according to the 3D human body model; completion processing is performed on the model using the target sub-model; and finally an image of the user is generated according to the completed model. Thus, during shooting, the 3D human body model is completed in real time, so that in the generated image the user's body is complete, which improves the visual effect of the image and meets the user's needs without manual operation, saving the user effort and improving the user experience.
Fig. 4 is a structural diagram of an image completion device according to another embodiment of the present application.
As shown in Fig. 4, on the basis of the device shown in Fig. 3, the image completion device further includes:
a second obtaining module 41, configured to obtain a 3D human body model library, wherein any 3D model in the library contains all human organs;
a second determining module 42, configured to parse all 3D models in the 3D human body model library and determine the correspondence between body feature information and sub-models;
a third determining module 43, configured to determine the depth information of the target sub-model according to the 3D human body model; and
a fourth determining module 44, configured to determine the target pose of the target sub-model according to the 3D human body model.
In a possible implementation of this embodiment, the device may further include:
a third obtaining module, configured to obtain an image completion instruction triggered by the user;
or,
a fifth determining module, configured to determine that the currently captured image of the user meets an image completion condition.
It should be noted that the foregoing explanation of the image completion method embodiments also applies to the image completion device of this embodiment and is not repeated here.
With the image completion device provided in the embodiments of the present application, a 3D human body model of the user is first obtained using structured light; a target sub-model for completing the model is then determined according to the 3D human body model; completion processing is performed on the model using the target sub-model; and finally an image of the user is generated according to the completed model. Thus, during shooting, the 3D human body model is completed in real time, so that in the generated image the user's body is complete, which improves the visual effect of the image and meets the user's needs without manual operation, saving the user effort and improving the user experience.
An embodiment of a further aspect of the present invention also proposes a terminal.
Fig. 5 is a structural diagram of a terminal provided by an embodiment of the present application. There are many types of terminals, which can be selected according to the application, for example: a mobile phone, a computer, a camera, and so on. Fig. 5 takes a mobile phone as an example.
As shown in Fig. 5, the terminal includes: a processor 51, a memory 52, and an image processing circuit 53.
The memory 52 is configured to store executable program code. The processor 51 implements the image completion method of the foregoing embodiments by reading the executable program code stored in the memory 52 and the depth image output by the image processing circuit 53.
The terminal includes the image processing circuit 53, which may be implemented with hardware and/or software components and may include various processing units that define an ISP (Image Signal Processing) pipeline.
Fig. 6 is a schematic diagram of an image processing circuit in one embodiment. As shown in Fig. 6, for ease of illustration, only the aspects of the image processing technology relevant to the embodiments of the present invention are shown.
As shown in Fig. 6, the image processing circuit 53 includes an imaging device 610, an ISP processor 630, and control logic 640. The imaging device 610 may include a camera with one or more lenses 612 and an image sensor 614, and a structured light projector 616. The structured light projector 616 projects structured light onto the measured object. The structured light pattern may be a laser stripe pattern, a Gray code pattern, a sinusoidal fringe pattern, or a randomly arranged speckle pattern, among others. The image sensor 614 captures the structured light image formed by projection onto the measured object and sends it to the ISP processor 630, which demodulates the structured light image to obtain the depth information of the measured object. Meanwhile, the image sensor 614 can also capture the color information of the measured object. Of course, the structured light image and the color information of the measured object may also be captured by two separate image sensors 614.
Taking speckle structured light as an example, the ISP processor 630 demodulates the structured light image as follows: it acquires a speckle image of the measured object from the structured light image, performs image data calculation on the speckle image of the measured object and a reference speckle image according to a predetermined algorithm, and obtains the displacement of each speckle point of the speckle image on the measured object relative to the corresponding reference speckle point in the reference speckle image. The depth value of each speckle point of the speckle image is then calculated by triangulation, and the depth information of the measured object is obtained from these depth values.
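The triangulation step can be sketched as a disparity-to-depth conversion against the reference plane. The formula below is one common formulation for reference-plane speckle systems, and the focal length, baseline, and reference depth values are illustrative assumptions; the patent does not disclose the exact algorithm.

```python
def speckle_depth(shift_px, focal_px, baseline_mm, ref_depth_mm):
    # One common reference-plane triangulation model:
    #   1/Z = 1/Z0 + d / (f * b)
    # where d is the speckle shift (pixels) relative to the reference
    # image captured at depth Z0, f the focal length (pixels), and
    # b the projector-camera baseline (mm). Rearranged:
    #   Z = Z0 * f * b / (f * b + d * Z0)
    fb = focal_px * baseline_mm
    return ref_depth_mm * fb / (fb + shift_px * ref_depth_mm)

# A 5 px shift with these (assumed) intrinsics maps the point to
# roughly 897 mm, i.e. nearer than the 1000 mm reference plane.
z = speckle_depth(shift_px=5.0, focal_px=580.0,
                  baseline_mm=75.0, ref_depth_mm=1000.0)
print(round(z))  # 897
```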
Of course, the depth image information may also be obtained by binocular vision methods or by methods based on time-of-flight (TOF), without limitation here; any method that can obtain, or obtain by calculation, the depth information of the measured object falls within the scope of this embodiment.
After the ISP processor 630 receives the color information of the measured object captured by the image sensor 614, it can process the image data corresponding to that color information. The ISP processor 630 analyzes the image data to obtain image statistics that can be used to determine one or more control parameters of the ISP processor 630 and/or the imaging device 610. The image sensor 614 may include a color filter array (such as a Bayer filter); the image sensor 614 can obtain the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 630.
The ISP processor 630 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 630 can perform one or more image processing operations on the raw image data and collect image statistics about the image data. The image processing operations may be performed with the same or different bit-depth precision.
The ISP processor 630 can also receive pixel data from an image memory 620. The image memory 620 may be part of a memory device, a storage device, or an independent dedicated memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving the raw image data, the ISP processor 630 can perform one or more image processing operations.
After the ISP processor 630 obtains the color information and the depth information of the measured object, they can be fused to obtain a three-dimensional image. The features of the corresponding measured object may be extracted by at least one of an appearance contour extraction method or a contour feature extraction method, for example the active shape model (ASM) method, the active appearance model (AAM) method, principal component analysis (PCA), or the discrete cosine transform (DCT) method, without limitation here. The features of the measured object extracted from the depth information and the features extracted from the color information are then registered and fused. The fusion here may directly combine the features extracted from the depth information and the color information, or may combine the same feature found in the different images after setting weights, or may use other fusion methods; the three-dimensional image is finally generated according to the fused features.
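The weighted variant described above — the same feature extracted from both the depth and color channels, combined with per-channel weights — can be sketched as follows. The scalar feature values and the equal default weights are assumptions for illustration; real feature descriptors would be vectors.

```python
def fuse_features(depth_feats, color_feats, w_depth=0.5, w_color=0.5):
    # Features present in both registered channels are blended with the
    # configured weights; channel-unique features are kept as-is, which
    # corresponds to the "direct combination" fusion mode in the text.
    fused = {}
    for name in depth_feats.keys() | color_feats.keys():
        if name in depth_feats and name in color_feats:
            fused[name] = w_depth * depth_feats[name] + w_color * color_feats[name]
        else:
            fused[name] = depth_feats.get(name, color_feats.get(name))
    return fused

result = fuse_features({"contour": 0.8, "relief": 0.3},
                       {"contour": 0.6, "texture": 0.9})
```

Here "contour" exists in both channels and is blended to 0.7, while "relief" (depth-only) and "texture" (color-only) pass through unchanged.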
The image data of the three-dimensional image can be sent to the image memory 620 for additional processing before being displayed. The ISP processor 630 receives the processed data from the image memory 620 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data of the three-dimensional image may be output to a display 660 for viewing by the user and/or further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 630 can also be sent to the image memory 620, and the display 660 can read the image data from the image memory 620. In one embodiment, the image memory 620 can be configured to implement one or more frame buffers. Furthermore, the output of the ISP processor 630 can be sent to an encoder/decoder 650 to encode/decode the image data. The encoded image data can be saved and decompressed before being shown on the display 660. The encoder/decoder 650 can be implemented by a CPU, a GPU, or a coprocessor.
The image statistics determined by the ISP processor 630 can be sent to the control logic 640. The control logic 640 may include a processor and/or a microcontroller that executes one or more routines (such as firmware); the one or more routines can determine the control parameters of the imaging device 610 according to the received image statistics.
The steps of implementing the image completion method with the image processing technology of Fig. 6 are as follows:
acquiring a 3D human body model of the user using structured light;
determining, according to the 3D human body model of the user, a target sub-model for completing the 3D human body model, wherein the target sub-model is a model corresponding to any human organ;
performing completion processing on the 3D human body model using the target sub-model; and
generating an image of the user according to the completed 3D human body model.
With the terminal provided in the embodiments of the present application, a 3D human body model of the user is first obtained using structured light; a target sub-model for completing the model is then determined according to the 3D human body model; completion processing is performed on the model using the target sub-model; and finally an image of the user is generated according to the completed model. Thus, during shooting, the 3D human body model is completed in real time, so that in the generated image the user's body is complete, which improves the visual effect of the image and meets the user's needs without manual operation, saving the user effort and improving the user experience.
In order to achieve the above object, an embodiment of the present application proposes a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the image completion method of the foregoing embodiments is implemented.
The computer-readable storage medium provided in the embodiments of the present application can be installed in any terminal with a camera function. When the user shoots an image, by executing the image completion method stored on the medium, the 3D human body model can be completed in real time during shooting, so that in the generated image the user's body is complete, which improves the visual effect of the image and meets the user's needs without manual operation, saving the user effort and improving the user experience.
In order to achieve the above object, an embodiment of the present application proposes a computer program product; when instructions in the computer program product are executed by a processor, the image completion method of the foregoing embodiments is executed.
The computer program product provided in the embodiments of the present application can be installed in any terminal with a camera function. When the user shoots an image, by executing the program corresponding to the image completion method, the 3D human body model can be completed in real time during shooting, so that in the generated image the user's body is complete, which improves the visual effect of the image and meets the user's needs without manual operation, saving the user effort and improving the user experience.
It should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.
Logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered list of executable instructions that can be considered to implement logical functions, may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, device, or apparatus (such as a computer-based system, a system including a processor, or another system that can fetch instructions from the instruction execution system, device, or apparatus and execute them). For the purposes of this specification, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in combination with, an instruction execution system, device, or apparatus. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) with one or more wirings, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it if necessary, and then stored in a computer memory.
It should be understood that each part of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
It should be noted that, in the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in combination with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic statements of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of the different embodiments or examples, provided they do not contradict each other.
Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present invention; those of ordinary skill in the art can change, modify, replace, and vary the above embodiments within the scope of the present invention.
Claims (10)
1. An image completion method, characterized by comprising:
acquiring a 3D human body model of a user using structured light;
determining, according to the 3D human body model of the user, a target sub-model for completing the 3D human body model, wherein the target sub-model is a model corresponding to at least one of the following human organs: an arm, a hand, a foot, a leg, a nose, and an ear;
performing completion processing on the 3D human body model using the target sub-model; and
generating an image of the user according to the completed 3D human body model.
2. The method according to claim 1, characterized in that the determining, according to the 3D human body model of the user, a target sub-model for completing the 3D human body model comprises:
determining, according to the 3D human body model of the user, the type of the target sub-model and body feature information of the user; and
selecting, from a sub-model library corresponding to the type of the target sub-model, a target sub-model matching the body feature information of the user.
3. The method according to claim 2, characterized in that before the selecting, from a sub-model library corresponding to the type of the target sub-model, a target sub-model matching the body feature information of the user, the method further comprises:
obtaining a 3D human body model library, wherein any 3D model in the library contains all human organs; and
parsing all 3D models in the 3D human body model library to determine the correspondence between body feature information and sub-models.
4. The method according to claim 2 or 3, characterized in that the body feature information includes at least one of the following:
height, weight, gender, limb length, body proportions, and facial features.
5. The method according to any one of claims 1-3, characterized in that before the performing completion processing on the 3D human body model using the target sub-model, the method further comprises:
determining the depth information of the target sub-model according to the 3D human body model.
6. The method according to any one of claims 1-3, characterized in that before the performing completion processing on the 3D human body model using the target sub-model, the method further comprises:
determining the target pose of the target sub-model according to the 3D human body model.
7. The method according to claim 6, characterized in that before the acquiring a 3D human body model of a user using structured light, the method further comprises:
obtaining an image completion instruction triggered by the user;
or,
determining that the currently captured image of the user meets an image completion condition.
8. An image completion device, characterized by comprising:
a first obtaining module, configured to obtain a 3D human body model of a user using structured light;
a first determining module, configured to determine, according to the 3D human body model of the user, a target sub-model for completing the 3D human body model, wherein the target sub-model is a model corresponding to at least one of the following human organs: an arm, a hand, a foot, a leg, a nose, and an ear;
a processing module, configured to perform completion processing on the 3D human body model using the target sub-model; and
a generation module, configured to generate an image of the user according to the completed 3D human body model.
9. A terminal, applied in the field of camera technology, characterized by comprising a memory, a processor, and an image processing circuit, wherein the memory is configured to store executable program code, and the processor implements the image completion method according to any one of claims 1-7 by reading the executable program code stored in the memory and the depth image output by the image processing circuit.
10. A computer-readable storage medium on which a computer program is stored, characterized in that when the program is executed by a processor, the image completion method according to any one of claims 1-7 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710640059.0A CN107343148B (en) | 2017-07-31 | 2017-07-31 | Image completion method, apparatus and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107343148A CN107343148A (en) | 2017-11-10 |
CN107343148B true CN107343148B (en) | 2019-06-21 |
Family
ID=60217594
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710640059.0A Active CN107343148B (en) | 2017-07-31 | 2017-07-31 | Image completion method, apparatus and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107343148B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108346175B (en) * | 2018-02-06 | 2023-10-20 | 腾讯科技(深圳)有限公司 | Face image restoration method, device and storage medium |
CN108765315B (en) * | 2018-05-04 | 2021-09-07 | Oppo广东移动通信有限公司 | Image completion method and device, computer equipment and storage medium |
CN108765321B (en) * | 2018-05-16 | 2021-09-07 | Oppo广东移动通信有限公司 | Shooting repair method and device, storage medium and terminal equipment |
US10585194B1 (en) | 2019-01-15 | 2020-03-10 | Shenzhen Guangjian Technology Co., Ltd. | Switchable diffuser projection systems and methods |
CN113050112A (en) * | 2019-03-21 | 2021-06-29 | 深圳市光鉴科技有限公司 | System and method for enhancing time-of-flight resolution |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101271581A (en) * | 2008-04-25 | 2008-09-24 | 浙江大学 | Establishing personalized three-dimensional mannequin |
CN103236043A (en) * | 2013-04-28 | 2013-08-07 | 北京农业信息技术研究中心 | Plant organ point cloud restoration method |
CN103268629A (en) * | 2013-06-03 | 2013-08-28 | 程志全 | Mark-point-free real-time restoration method of three-dimensional human form and gesture |
CN105654061A (en) * | 2016-01-05 | 2016-06-08 | 安阳师范学院 | 3D face dynamic reconstruction method based on estimation compensation |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8638985B2 (en) * | 2009-05-01 | 2014-01-28 | Microsoft Corporation | Human body pose estimation |
Application Events
- 2017-07-31: CN application CN201710640059.0A filed; granted as CN107343148B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN107343148A (en) | 2017-11-10 |
Similar Documents
Publication | Title
---|---
CN107343148B (en) | Image completion method, apparatus and terminal
US20220050290A1 | Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking
AU2006282764B2 | Capturing and processing facial motion data
US11494915B2 | Image processing system, image processing method, and program
US10311624B2 | Single shot capture to animated vr avatar
CN108447017A | Face virtual face-lifting method and device
JP4932951B2 | Facial image processing method and system
CN108171789B | Virtual image generation method and system
CN107483845B | Photographic method and its device
CN109377557A | Real-time three-dimensional facial reconstruction method based on single frames facial image
CN107507269A | Personalized three-dimensional model generating method, device and terminal device
JP2011100497A | Method and system for animating facial feature, and method and system for expression transformation
CN107395974B | Image processing system and method
CN107707839A | Image processing method and device
CN107610171B | Image processing method and device
US20100315524A1 | Integrated motion capture
CN107592449A | Three-dimension modeling method, apparatus and mobile terminal
CN107481318A | Replacement method, device and the terminal device of user's head portrait
CN107493411B | Image processing system and method
CN107509043A | Image processing method and device
CN113628327A | Head three-dimensional reconstruction method and equipment
CN107469355A | Game image creation method and device, terminal device
CN107343151B | Image processing method, device and terminal
CN110533761B | Image display method, electronic device and non-transient computer readable recording medium
CN107454336A | Image processing method and device, electronic installation and computer-readable recording medium
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong. Applicant after: OPPO Guangdong Mobile Communications Co., Ltd. Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong. Applicant before: Guangdong OPPO Mobile Communications Co., Ltd. |
| GR01 | Patent grant | |