CN109636898A - 3D model generating method and terminal


Info

Publication number
CN109636898A
Authority
CN
China
Prior art keywords
user
terminal
target
information
prompt
Prior art date
Legal status
Granted
Application number
CN201811447286.2A
Other languages
Chinese (zh)
Other versions
CN109636898B (en)
Inventor
付浩翊
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201811447286.2A
Publication of CN109636898A
Application granted
Publication of CN109636898B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

Embodiments of the present invention provide a 3D model generating method and a terminal, relating to the field of communication technology, to solve the problem that the matching degree between a 3D cartoon model and a user is low. The method comprises: receiving a first input of a user; in response to the first input, obtaining target action information of the user; in the case where the target action information matches target preset action information, acquiring a target two-dimensional (2D) image of the user and depth information of the target 2D image; and after N 2D images of the user and the depth information of the N 2D images have been acquired, generating a 3D model according to the N 2D images and the depth information of the N 2D images, where N is a positive integer.

Description

3D model generating method and terminal
Technical field
Embodiments of the present invention relate to the field of communication technology, and in particular to a 3D model generating method and a terminal.
Background
With the development of communication technology, terminals are used in more and more application scenarios.
In general, a user can input body-shape parameters (such as gender, chest-waist-hip measurements, height, and weight) into the terminal, and the terminal then generates, from a default three-dimensional (3D) cartoon model and the body-shape parameters, a 3D cartoon model corresponding to those parameters; the user can use this 3D cartoon model to try on clothes online.
However, if the body-shape parameters of different users are identical, the 3D cartoon models generated for those users in the above manner are also identical, even though the actual figures of the users may differ considerably. The matching degree between a 3D cartoon model generated in this way and the user is therefore low.
Summary of the invention
Embodiments of the present invention provide a 3D model generating method and a terminal, to solve the problem that the matching degree between a 3D cartoon model and the user is low.
In order to solve the above-mentioned technical problem, the embodiments of the present invention are implemented as follows:
In a first aspect, an embodiment of the present invention provides a 3D model generating method applied to a terminal. The method comprises: receiving a first input of a user; in response to the first input, obtaining target action information of the user; in the case where the target action information matches target preset action information, acquiring a target two-dimensional (2D) image of the user and depth information of the target 2D image; and after N 2D images of the user and the depth information of the N 2D images have been acquired, generating a 3D model according to the N 2D images and the depth information of the N 2D images, where N is a positive integer.
In a second aspect, an embodiment of the present invention further provides a terminal. The terminal comprises a receiving module, an obtaining module, an acquisition module, and a generation module. The receiving module is configured to receive a first input of a user. The obtaining module is configured to obtain target action information of the user in response to the first input received by the receiving module. The acquisition module is configured to acquire a target two-dimensional (2D) image of the user and depth information of the target 2D image in the case where the target action information obtained by the obtaining module matches target preset action information. The generation module is configured to, after the acquisition module has acquired N 2D images of the user and the depth information of the N 2D images, generate a 3D model according to the N 2D images and the depth information of the N 2D images, where N is a positive integer.
In a third aspect, an embodiment of the present invention provides a terminal comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the 3D model generating method described in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the 3D model generating method described in the first aspect.
In embodiments of the present invention, the terminal first receives a first input of the user; then, in response to the first input, the terminal obtains target action information of the user; in the case where the target action information matches target preset action information, the terminal acquires a target two-dimensional (2D) image of the user and depth information of the target 2D image; and finally, after N 2D images of the user and the depth information of the N 2D images have been acquired, the terminal generates a 3D model according to the N 2D images and the depth information of the N 2D images. Because the target action information matches the target preset action information, the action performed by the user can be taken to match the action the terminal requires. In that case, the N 2D images acquired by the terminal can more accurately reflect the user's true figure, proportions, and build, and the depth information of these 2D images can more accurately reflect the three-dimensional form of the human body in the acquired images. Therefore, the 3D model generated by the above method is closer to the user's actual figure, and the matching degree is higher.
Brief description of the drawings
Fig. 1 is a schematic architecture diagram of a possible Android operating system provided in an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a 3D model generating method provided in an embodiment of the present invention;
Fig. 3 is a first schematic diagram of an interface provided in an embodiment of the present invention;
Fig. 4 is a second schematic diagram of an interface provided in an embodiment of the present invention;
Fig. 5 is a third schematic diagram of an interface provided in an embodiment of the present invention;
Fig. 6 is a fourth schematic diagram of an interface provided in an embodiment of the present invention;
Fig. 7 is a fifth schematic diagram of an interface provided in an embodiment of the present invention;
Fig. 8 is a sixth schematic diagram of an interface provided in an embodiment of the present invention;
Fig. 9 is a seventh schematic diagram of an interface provided in an embodiment of the present invention;
Fig. 10 is an eighth schematic diagram of an interface provided in an embodiment of the present invention;
Fig. 11 is a first schematic diagram of a possible structure of a terminal provided in an embodiment of the present invention;
Fig. 12 is a second schematic diagram of a possible structure of a terminal provided in an embodiment of the present invention;
Fig. 13 is a third schematic diagram of a possible structure of a terminal provided in an embodiment of the present invention;
Fig. 14 is a fourth schematic diagram of a possible structure of a terminal provided in an embodiment of the present invention;
Fig. 15 is a schematic diagram of the hardware structure of a terminal implementing the embodiments of the present invention.
Detailed description of embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that "/" herein means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A exists, both A and B exist, or only B exists. "Multiple" means two or more.
The terms "first" and "second" in the description and claims of this application are used to distinguish different objects, not to describe a particular order of the objects. For example, the first interface and the second interface are used to distinguish different interfaces, not to describe a particular order of the interfaces.
It should be noted that, in the embodiments of the present invention, words such as "illustrative" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design scheme described as "illustrative" or "for example" in the embodiments of the present invention should not be construed as preferable to, or more advantageous than, other embodiments or design schemes. Rather, the use of "illustrative" or "for example" is intended to present the related concept in a specific manner.
The terminal in the embodiments of the present invention may be a terminal with an operating system. The operating system may be the Android operating system, the iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present invention.
The software environment to which the 3D model generating method provided in the embodiments of the present invention is applied is introduced below, taking the Android operating system as an example.
Fig. 1 shows a schematic architecture diagram of a possible Android operating system provided in an embodiment of the present invention. In Fig. 1, the architecture of the Android operating system includes four layers: an application layer, an application framework layer, a system runtime library layer, and a kernel layer (which may specifically be a Linux kernel layer).
The application layer includes the applications in the Android operating system (including system applications and third-party applications).
The application framework layer is the framework of the applications; a developer can develop applications based on the application framework layer while complying with the development principles of that framework.
The system runtime library layer includes libraries (also called system libraries) and the Android runtime environment. The libraries mainly provide the various resources required by the Android operating system, and the Android runtime environment provides the software environment for the Android operating system.
The kernel layer is the operating system layer of the Android operating system and belongs to the lowest level of the Android software hierarchy. Based on the Linux kernel, the kernel layer provides core system services and hardware-related drivers for the Android operating system.
Taking the Android operating system as an example, in the embodiments of the present invention a developer may, based on the system architecture of the Android operating system shown in Fig. 1, develop a software program implementing the 3D model generating method provided in the embodiments of the present invention, so that the 3D model generating method can run on the Android operating system shown in Fig. 1. That is, the processor or the terminal can implement the 3D model generating method provided in the embodiments of the present invention by running the software program on the Android operating system.
The 3D model generating method of the embodiments of the present invention is described below with reference to Fig. 2. Fig. 2 is a schematic flowchart of a 3D model generating method provided in an embodiment of the present invention. As shown in Fig. 2, the 3D model generating method includes S201 to S204:
S201: the terminal receives a first input of a user.
It should be noted that the first input is an input by which the user triggers the terminal to generate (or establish) a 3D model. The first input may be a single input or may include multiple sub-inputs; it may be a voice input, or an input by the user on a display interface of the terminal, which is not specifically limited in the present invention.
It should be noted that the terminal provided in the embodiments of the present invention may have a touch screen, which may be used to receive the user's input and, in response to the input, display content corresponding to the input to the user. The first input may be a touch-screen input, a fingerprint input, a gravity input, a key input, or the like. A touch-screen input is an input such as a press, long-press, slide, tap, or hover input (an input by the user near the touch screen) on the touch screen of the terminal. A fingerprint input is an input such as a swipe, long-press, tap, or double-tap of the user's fingerprint on the fingerprint recognizer of the terminal. A gravity input is an input such as shaking the terminal in a specific direction or a specific number of times. A key input corresponds to an input such as a tap, double-tap, long-press, or combined press on a key of the terminal such as the power key, a volume key, or the Home key. The manner of the first input is not specifically limited in the embodiments of the present invention and may be any implementable manner.
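As a purely illustrative, non-limiting sketch of how a terminal might receive such a first input, the following Kotlin fragment treats a tap on a start-modeling control as the first input that triggers S202. The layout and view identifiers (activity_modeling, btn_start_modeling) and the startActionCapture() helper are hypothetical placeholders, not part of the disclosed method.

```kotlin
import android.os.Bundle
import android.widget.Button
import androidx.appcompat.app.AppCompatActivity

class ModelingActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_modeling)  // hypothetical layout for interface 301

        // S201: treat a tap on the start-modeling control as the "first input".
        findViewById<Button>(R.id.btn_start_modeling).setOnClickListener {
            // S202: in response to the first input, start obtaining the
            // user's target action information (pose capture, prompts, ...).
            startActionCapture()
        }
    }

    private fun startActionCapture() {
        // Placeholder for the capture pipeline described in S202 to S204.
    }
}
```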
Illustratively, the first input may be an input by the user on the interface 301 shown in Fig. 3, where the interface 301 displays a gender selection control and dressing prompt information. The gender selection control includes a control 30 and a control 31, the dressing prompt information is "Please wear tight-fitting clothes to ensure the accuracy of the modeling", and the first input may be the user's input on the control 30 or the control 31.
Optionally, if the interface 301 is a camera interface of the terminal, the background image of the interface 301 may be an image obtained by blurring the real scene captured by the terminal in real time.
Optionally, the terminal provided in the embodiments of the present invention may be a terminal with one screen or a terminal with two screens.
S202: in response to the first input, the terminal obtains target action information of the user.
Optionally, the target action information may include the pattern of the action performed by the user, for example, standing with arms and legs spread, standing at attention, standing with hands on hips, standing sideways, and so on.
It should be noted that, when the terminal obtains the target action information of the user, the position of the terminal may be kept unchanged while the user changes poses; alternatively, when a 3D model is generated for user A, user B may hold the terminal and move it around user A, who holds a pose, so that the terminal obtains the target action information. The user may freely choose either of these two manners, which is not specifically limited in the present invention.
S203: in the case where the target action information matches target preset action information, the terminal acquires a target two-dimensional (2D) image of the user and depth information of the target 2D image.
Specifically, the target preset action information may be information of one or more target preset actions, which is not specifically limited in the embodiments of the present invention.
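The manner of matching is not limited by the embodiments. As one hedged illustration, the comparison below represents both the detected action information and a preset action as named 2D keypoints normalized to the image size and treats them as matching when every keypoint lies within a tolerance; the Pose representation and the tolerance value are assumptions made for this sketch only.

```kotlin
import kotlin.math.hypot

// A detected or preset pose, represented as named 2D keypoints normalized to [0, 1].
data class Pose(val keypoints: Map<String, Pair<Float, Float>>)

/**
 * Returns true when every keypoint of the preset pose has a detected counterpart
 * within [tolerance] (in normalized image coordinates).
 */
fun matchesPreset(detected: Pose, preset: Pose, tolerance: Float = 0.08f): Boolean =
    preset.keypoints.all { (name, target) ->
        val observed = detected.keypoints[name] ?: return@all false
        hypot(observed.first - target.first, observed.second - target.second) <= tolerance
    }
```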
S204: after N 2D images of the user and the depth information of the N 2D images have been acquired, the terminal generates a 3D model according to the N 2D images and the depth information of the N 2D images.
Here, N is a positive integer.
Optionally, the 3D model includes at least one of a figure model of the user and a face model of the user.
In general, the depth information of the N 2D images may be acquired by a depth camera such as a TOF (time-of-flight) camera.
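As an illustrative sketch of how the depth information of a single frame relates to 3D geometry, the fragment below back-projects a depth map into a 3D point cloud using the standard pinhole camera model. The camera intrinsics (fx, fy, cx, cy) and the metre-valued depth array are assumptions of the sketch; an actual implementation would depend on the depth camera used and on the downstream model-generation algorithm, which the embodiments do not limit.

```kotlin
// One 3D point in the depth camera's coordinate frame (metres).
data class Point3(val x: Float, val y: Float, val z: Float)

fun depthToPointCloud(
    depth: FloatArray,      // depth in metres, row-major, width * height values
    width: Int,
    height: Int,
    fx: Float, fy: Float,   // focal lengths in pixels
    cx: Float, cy: Float    // principal point in pixels
): List<Point3> {
    val points = ArrayList<Point3>()
    for (v in 0 until height) {
        for (u in 0 until width) {
            val z = depth[v * width + u]
            if (z <= 0f) continue           // zero or negative means "no measurement"
            val x = (u - cx) * z / fx       // pinhole back-projection
            val y = (v - cy) * z / fy
            points.add(Point3(x, y, z))
        }
    }
    return points
}
```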
According to the 3D model generating method provided in the embodiments of the present invention, the terminal first receives a first input of the user; then, in response to the first input, the terminal obtains target action information of the user; in the case where the target action information matches target preset action information, the terminal acquires a target 2D image of the user and depth information of the target 2D image; and finally, after N 2D images of the user and the depth information of the N 2D images have been acquired, the terminal generates a 3D model according to the N 2D images and the depth information of the N 2D images. Because the target action information matches the target preset action information, the action performed by the user can be taken to match the action the terminal requires. In that case, the N 2D images acquired by the terminal can more accurately reflect the user's true figure, proportions, and build, and the depth information of these 2D images can more accurately reflect the three-dimensional form of the human body in the acquired images. Therefore, the 3D model generated by the above method is closer to the user's actual figure, and the matching degree is higher.
In a possible implementation, the 3D model generating method provided in the embodiments of the present invention further includes S205 before the terminal obtains the target action information of the user:
S205: the terminal sequentially outputs M pieces of prompt information.
Each piece of prompt information is used to prompt the user to perform one action, and each piece of prompt information includes at least one of a prompt text, a prompt image, and a prompt voice. M is a positive integer less than or equal to N.
Optionally, the action indicated by a piece of prompt information may be an action preset in the terminal, or an action randomly generated by the terminal, which is not specifically limited in the present invention.
Specifically, when M = 1, the terminal may output one piece of prompt information, for example, "Please rotate in place in the illustrated pose". When M > 1, the terminal may output the pieces of prompt information in sequence, for example, "Please perform action 1", "Please perform action 2", "Please perform action 3", and so on.
It should be noted that N may be the same value as M, or N may be a preset value. For example, N may be preset to 200 and M may be 1; the specific values of M and N may be set as required in practical applications, which is not specifically limited in the embodiments of the present invention.
Illustratively, if the prompt information is a prompt text, the prompt information may be "Please stand in pose 1", where pose 1 in the prompt text is a specific pose, for example, "left hand on the hip, right hand stretched obliquely downward".
Illustratively, as shown in Fig. 4, to display a prompt image the terminal may show a human silhouette on the interface 302, the silhouette being one in which the left hand is on the hip and the right hand is stretched obliquely downward.
Based on this scheme, before obtaining the target action information of the user, the terminal may sequentially output M pieces of prompt information to prompt the user to perform M actions in sequence, so that the user can perform the prompted actions according to the prompt information output by the terminal. On the one hand, standardized action information allows the model to be established more quickly and accurately; on the other hand, because the user operates step by step according to each piece of prompt information, the user quickly grasps how to generate the model, which makes operation convenient and improves the user experience.
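A minimal sketch of how the M pieces of prompt information might be represented and sequenced is given below; the Prompt class and the example prompt texts are hypothetical and only mirror the requirement that each prompt carries at least one of text, image, and voice. Advancing from one prompt to the next is shown in the state-machine sketch further below.

```kotlin
// Illustrative only: resource identifiers are placeholders.
data class Prompt(
    val text: String? = null,
    val imageRes: Int? = null,   // drawable resource of a silhouette, hypothetical
    val voiceRes: Int? = null    // audio resource, hypothetical
) {
    init {
        require(text != null || imageRes != null || voiceRes != null) {
            "A prompt must include at least one of text, image, and voice"
        }
    }
}

// The M prompts (M <= N), output one after another by the terminal.
val prompts: List<Prompt> = listOf(
    Prompt(text = "Please stand in pose 1"),
    Prompt(text = "Please hold the pose and rotate in place")
)
```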
In a possible implementation, the 3D model generating method provided in the embodiments of the present invention may further include S206 after S201:
S206: in response to the first input, the terminal detects whether a first interface of the terminal includes a human body image.
It can be understood that the first interface may be a camera interface opened by the user, or a camera interface opened when another application calls the camera, which is not specifically limited in the embodiments of the present invention.
Specifically, when the terminal has only one screen, the first interface is an interface on that screen and the first input is an input on that interface. The user may open the front camera of the terminal and perform actions according to the information output on the interface to generate a 3D model of himself or herself; alternatively, the user may open the rear camera of the terminal, another user operates the terminal, and the first user performs actions according to the information output by the terminal to generate the 3D model.
When the terminal has a first screen and a second screen, if the user uses the display interface of one of the screens to generate the 3D model, reference may be made to the above description of the terminal with only one screen. If the user uses the display interfaces of both screens to generate the 3D model, user 1 may work together with user 2 to generate a 3D model of user 1's figure: either user inputs the first input on the first screen, and the terminal may then enable the second screen, or the camera located on the same plane as the second screen, to acquire the 2D images and the depth information of the 2D images.
In this case, S205 may be performed as S205a:
S205a: in the case where the first interface includes a human body image, the terminal sequentially outputs the M pieces of prompt information.
Optionally, the first interface including a human body image may mean that the first interface includes a complete human body image, or a human body image of only the upper body of a user, or a human body image of only the lower body of a user, which is not specifically limited in the present invention.
It can be understood that the user may choose to establish a 3D model of the whole body, of the upper half of the body, or of the lower half of the body, which is not specifically limited in the present invention.
Optionally, before the terminal displays the interface 302, after determining that the first interface includes a human body image, the terminal may first vibrate to prompt the user that recognition has succeeded, and then sequentially output the M pieces of prompt information.
Optionally, the first interface may further include a progress bar, which indicates the progress of acquiring the 2D images for the 3D model.
Illustratively, assume there are two pieces of prompt information: one is the prompt image shown in the interface 302 in Fig. 4, and the other may be "Please hold the pose and rotate in place" shown in the interface 303 in Fig. 5; that is, the two pieces of prompt information indicate the same action. The interface 303 may further include a progress bar 32.
Based on this scheme, after the terminal receives the first input of the user, the terminal may first detect whether the first interface of the terminal includes a human body image, and then, in the case where the first interface includes a human body image, sequentially output the M pieces of prompt information, which makes the start of displaying the prompt information more flexible.
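The detection mechanism itself is not specified by the embodiments. The sketch below assumes a hypothetical HumanDetector interface (which could wrap any on-device person detector) and shows the prompt sequence being started only once a human body image is found in a preview frame, reusing the Prompt class from the earlier sketch.

```kotlin
import android.graphics.Bitmap

// Illustrative gate for S206/S205a; HumanDetector is a hypothetical interface.
interface HumanDetector {
    fun containsHumanBody(frame: Bitmap): Boolean
}

class PromptController(
    private val detector: HumanDetector,
    private val prompts: List<Prompt>
) {
    private var started = false

    /** Start the prompt sequence only once a human body appears in the preview frame. */
    fun onPreviewFrame(frame: Bitmap, vibrate: () -> Unit, showPrompt: (Prompt) -> Unit) {
        if (!started && detector.containsHumanBody(frame)) {
            started = true
            vibrate()                           // optional haptic cue: recognition succeeded
            prompts.firstOrNull()?.let(showPrompt)
        }
    }
}
```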
Optionally, the M pieces of prompt information include first prompt information and second prompt information; the first prompt information is used to prompt the user to perform a first action, and the second prompt information is used to prompt the user to perform a second action.
In a possible implementation, the 3D model generating method provided in the embodiments of the present invention may further include S205b1 or S205b2 after the first prompt information is displayed:
S205b1: if the obtained first action information of the user matches first preset action information, the terminal updates the displayed first prompt information to the second prompt information.
The first preset action information is the information of the first action.
Illustratively, assume that action 1 performed by the user matches the first action, that is, the first action information of the user matches the first preset action information; the terminal then updates the first prompt information to the second prompt information, to prompt the user to perform action 2 indicated by the second prompt information.
Based on this scheme, in the case where the obtained first action information of the user matches the first preset action information, the terminal updates the first prompt information to the second prompt information, so that the user knows that the action performed according to the prompt information was correct, which improves the user experience.
S205b2: if the obtained first action information of the user does not match the first preset action information, the terminal outputs adjustment information until the first action information matches the first preset action information, and then updates the displayed first prompt information to the second prompt information, where the adjustment information is used to prompt the user to adjust his or her action to the first action.
Optionally, the adjustment information may be in the form of text, image, or voice.
Illustratively, assume that action 1 performed by the user does not match the first action, that is, the first action information of the user does not match the first preset action information; the terminal may then output adjustment information to prompt the user that the currently performed action differs from the first action and that the action needs to be adjusted to the first action.
Based on this scheme, when the obtained first action information of the user does not match the first preset action information, the terminal outputs adjustment information, so that the user knows that the currently performed action is not standard and is reminded of which action needs to be adjusted. Once the first action information of the action performed by the user matches the first preset action information, the first prompt information is updated to the second prompt information; after the terminal outputs the second prompt information, the user knows that the previously performed action was correct, and then continues to perform the second action indicated by the second prompt information. This enhances the human-machine interaction between the user and the terminal and makes the user experience better.
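As an illustrative sketch of S205b1/S205b2, the state machine below advances to the next prompt when the detected action matches the current preset action, and otherwise shows adjustment information. It reuses the Pose and matchesPreset helpers and the Prompt class from the earlier sketches; the adjustment text is a placeholder.

```kotlin
class PromptStateMachine(
    private val prompts: List<Prompt>,
    private val presetPoses: List<Pose>,        // one preset pose per prompt
    private val showPrompt: (Prompt) -> Unit,
    private val showAdjustment: (String) -> Unit
) {
    private var index = 0

    fun onPoseDetected(detected: Pose) {
        if (index >= prompts.size) return       // all M prompts completed
        if (matchesPreset(detected, presetPoses[index])) {
            index++                             // S205b1: advance to the next prompt
            prompts.getOrNull(index)?.let(showPrompt)
        } else {
            // S205b2: keep prompting the user to adjust towards the current action
            showAdjustment("Please adjust your pose to match the outline")
        }
    }
}
```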
Optionally, before displaying the prompt information, the terminal may display a first area, and the terminal may display the prompt information or user operation indication information in the first area.
Illustratively, as shown in Fig. 6, the first area is the area enclosed by the prompt box 33. The user operation indication information "Please keep the human figure inside the box" is displayed in this area; the user moves, or the terminal is moved, so that the user's image in the terminal's interface is located inside the prompt box 33.
Optionally, in the case where target prompt information includes a prompt image, the prompt image is displayed in the first area of a target interface, and the target interface includes the first interface.
It should be noted that, in the case where the terminal has a first screen and a second screen, the first interface is an interface on the first screen; the target interface may also include a third interface, which may be an interface on the second screen. The prompt image may be displayed in the first area of the third interface, so that the user facing the second screen performs the action according to the prompt image.
It can be understood that the target prompt information is any one of the M pieces of prompt information.
It should be noted that the shape of the first area may be a rectangle, or another shape such as an ellipse or a circle, which is not specifically limited in the present invention.
As shown in Fig. 7, in the interface 305 the prompt image is a human silhouette, which is displayed within the area enclosed by the prompt box 33. The indication "Please stand according to the pose of the human contour" may be displayed outside the area enclosed by the prompt box 33.
It should be noted that "Please stand according to the pose of the human contour" displayed in the interface 305 is not one of the "M pieces of prompt information" described in the embodiments of the present invention, but a supplementary explanation of the human contour shown in the prompt box, instructing the user to stand according to the pose of the human contour displayed on the interface (i.e., the prompt information).
Optionally, after the terminal determines that the first interface includes a human body image, the terminal may first vibrate, then change the color of the prompt box or make it thicker, and then sequentially output the M pieces of prompt information.
Optionally, in the case where the target prompt information includes a prompt image and a prompt text, the prompt image is displayed in the first area of the target interface and the prompt text is displayed in a second area of the target interface.
Optionally, in the case where the target prompt information includes a prompt text but does not include a prompt image, the prompt text may be displayed in the first area of the target interface.
Based on this scheme, in the case where the target prompt information includes a prompt image, the terminal displays the prompt image in the first area of the first interface; in the case where the target prompt information includes a prompt image and a prompt text, the terminal displays the prompt image in the first area of the target interface and the prompt text in the second area of the target interface. That is, the prompt information is displayed in different areas according to its type, which allows the user to follow the prompt information in the area he or she prefers, making the display of the prompt information more user-friendly.
In a possible implementation, the 3D model generating method provided in the embodiments of the present invention further includes S207:
S207: after the N 2D images and the depth information of the N 2D images have been acquired, the terminal displays a second interface; the second interface includes a target animation, and the target animation indicates the generating process of the 3D model.
Optionally, the second interface may be displayed in response to a user trigger, or displayed automatically by the terminal, which is not specifically limited in the embodiments of the present invention. Specifically, after the N 2D images and the depth information of the N 2D images have been acquired, the user may choose to view the generating process of the 3D model, in which case the terminal displays the target animation in the second interface. Of course, after the N 2D images and the depth information of the N 2D images have been acquired, the terminal may also directly display the second interface to show the user the generating process of the 3D model.
Illustratively, as shown in Fig. 8, the interface 306 may be the second interface and includes the target animation 34. The interface 306 may also display "Action completed, modeling" to the user.
Of course, the interface 306 may also include a progress bar, which shows the user the generation progress of the 3D model.
Based on this scheme, after the N 2D images and the depth information of the N 2D images have been acquired, the terminal can display the target animation in the second interface to show the user the generating process of the 3D model, so that the user can watch a body model matching himself or herself being established from his or her own images, which makes generating the 3D model on the terminal more interesting.
Optionally, after the 3D model is generated, the terminal may also display the 3D model on an interface so that the user can view it.
In a possible implementation, after the terminal generates the 3D model, the 3D model generating method provided in the embodiments of the present invention further includes S208:
S208: the terminal updates the target animation displayed in the second interface to an image of the 3D model.
Based on this scheme, after the terminal generates the 3D model, the terminal updates the target animation displayed in the second interface to an image of the 3D model, so that the 3D model image generated from the user's images can be shown to the user for viewing, editing, or online fitting using the 3D model.
It can be understood that, in the above 3D model generating process, because images need to be acquired, the user may be relatively far from the terminal and the face model may not be very accurate. If the user wants a more true and accurate face model, the user may re-establish a model for the face; if the user does not need a more accurate face model, the user may choose not to establish one.
In a possible implementation, the 3D model generating method provided in the embodiments of the present invention may further include S209 after S208:
S209: the terminal displays third prompt information, the third prompt information being used to prompt the user to trigger generation of a face model.
It should be noted that, if the user chooses to generate a face model, the execution process may refer to the process in the above embodiments; prompt information is likewise displayed to the user during acquisition of the face images, and the prompt information in this scheme may prompt the user to perform an action or a facial expression.
Illustratively, assume the user first chooses to generate a figure model. As shown in Fig. 9, after the figure model is generated, the terminal may display a control 35 on the interface to prompt the user to establish a face model. The control 35 may display "Model the face?", "Perform face modeling to obtain a more lifelike human model", "Later", and "Start modeling"; the user may choose "Later" or "Start modeling".
It should be noted that, while the terminal is executing any of the above schemes provided in the embodiments of the present invention, if the user taps the return key or the exit key, the terminal may display a prompt box on the current interface. The content of the prompt box may ask the user whether to exit modeling and remind the user that the current progress may not be saved after exiting. If the user chooses to exit modeling, the terminal may return to the interface where modeling was started, such as the interface 301; the terminal may also return to a model preview interface, in which the user can view other models that have been established and use an established 3D model for online fitting.
Optionally, after the terminal generates the 3D model, the terminal may directly save the 3D model, or may display a fourth interface in which the 3D model can be edited, for example scaled or rotated. The fourth interface may also include a "Save" control and a "Delete" control; if the user taps the "Save" control, the terminal may save the 3D model. Of course, the fourth interface may also include a "3D fitting" control; if the user directly taps the "3D fitting" control, the terminal may first save the 3D model and then display a sub-interface within the fourth interface, in which the user can name and group the 3D model, and so on.
Of course, the terminal may also display a sub-interface within the interface 301 shown in Fig. 3 for the user to name the newly created 3D model; the timing of naming the newly created 3D model is not specifically limited in the embodiments of the present invention.
Optionally, when the terminal generates the face model, the first area may be a circular area, the second area may also display suggestion information such as "It is recommended to remove glasses and arrange your hair so that the face is not blocked", "Remove glasses", or "Move closer", and the prompt information may be "Turn the face to the back left", "Turn the face to the back right", "Keep smiling", and the like.
Based on this scheme, after the figure model is established, the terminal displays the third prompt information, which prompts the user to trigger generation of a face model; the user may choose to generate his or her face model, so that the 3D model is more lifelike.
Optionally, in the case where the terminal includes a first screen and a second screen and the first input is an input on the first screen, S205 may specifically be performed as S205c.
S205c: the terminal sequentially displays the M pieces of prompt information on the first screen, and sequentially displays the M pieces of prompt information on the second screen.
At the same moment, the content displayed on the first screen is the same as the content displayed on the second screen, and the content includes a preview image and prompt information, or a preview image and adjustment information.
It should be noted that the user inputs the first input on the first screen of the terminal, and before the first input the second screen of the terminal may be off; after the terminal receives the first input on the first screen, the terminal lights up the second screen and sequentially displays on it the M pieces of prompt information displayed on the first screen. Of course, the second screen of the terminal may also already be on; after the terminal receives the first input on the first screen, the content displayed on the second screen is updated to the M pieces of prompt information displayed on the first screen.
For ease of describing this scheme, assume that a first user uses the terminal to establish a 3D model for a second user, the first user facing the first screen and the second user facing the second screen.
Of course, in this scheme a prompt may also be displayed on the first screen: "Please ask the other person to keep pose 1 and rotate in place".
Optionally, when the prompt information displayed on the second screen is a prompt text, the font size of the prompt text on the second screen may be relatively large, so that the second user can perform the indicated action according to the prompt information on the second screen.
Optionally, when a user uses the two screens to implement the 3D model generating method provided in the embodiments of the present invention, the user may set touch operations on the second screen to be invalid while the 2D images and the depth information of the 2D images are being acquired.
Optionally, after the terminal determines that the second user has performed the actions indicated by the prompt information, the terminal may turn off the second screen. Before the second screen is turned off, the terminal may first vibrate to prompt the first user that the acquisition is finished.
Illustratively, as shown in Fig. 10, assume the first input is entered on the interface 301 shown in Fig. 3; afterwards, the interface on the first screen may be the interface 308 and the interface displayed on the second screen may be the interface 309. A prompt box may be displayed in the interface 308, a human contour is displayed inside the prompt box, and "Please ask the other person to stand according to the pose of the human contour" is displayed outside the prompt box. In the interface 309, the terminal likewise displays a human contour, which may be the same size as the human contour in the interface 308 or an equally proportioned enlargement of it; the interface 309 may also display "Please stand according to the pose of the human contour", which may be in a relatively large font so that the user can see it easily.
It should be noted that, while the terminal is executing the 3D model generating method provided in the embodiments of the present invention, the terminal may disable the dual-screen switching function.
Based on this scheme, when the terminal includes a first screen and a second screen and the first input is an input on the first screen, the terminal sequentially displays the M pieces of prompt information on the first screen and sequentially displays them on the second screen. Because, at the same moment, the content displayed on the first screen is the same as the content displayed on the second screen (a preview image with prompt information, or a preview image with adjustment information), if the first user uses the terminal to establish a 3D model for the second user, the first user can, based on the content displayed on the first screen, move the terminal, prompt the second user to change position, or prompt the second user to change pose, and the second user can perform the corresponding actions based on the content displayed on the second screen. Even if the second user has his or her back to the terminal, the first user can continue to guide the second user through the content displayed on the first screen, which makes it more convenient to guide the second user to perform the actions.
Fig. 11 is a schematic diagram of a possible structure of a terminal provided in an embodiment of the present invention. As shown in Fig. 11, the terminal 400 includes a receiving module 401, an obtaining module 402, an acquisition module 403, and a generation module 404. The receiving module 401 is configured to receive a first input of a user. The obtaining module 402 is configured to obtain target action information of the user in response to the first input received by the receiving module 401. The acquisition module 403 is configured to acquire a target 2D image of the user and depth information of the target 2D image in the case where the target action information obtained by the obtaining module 402 matches target preset action information. The generation module 404 is configured to, after the acquisition module 403 has acquired N 2D images of the user and the depth information of the N 2D images, generate a 3D model according to the N 2D images and the depth information of the N 2D images, where N is a positive integer.
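Purely as an illustration of this module decomposition, and not of any concrete API of the terminal 400, the interfaces below restate modules 401 to 404 in Kotlin; the Frame2D, DepthFrame, ActionInfo, and Model3D types are placeholders, and Pose is reused from the earlier sketch.

```kotlin
data class Frame2D(val pixels: ByteArray)     // one acquired 2D image (placeholder)
data class DepthFrame(val depth: FloatArray)  // depth information of that image (placeholder)
data class ActionInfo(val pose: Pose)         // target action information (placeholder)
class Model3D                                  // the generated 3D model (placeholder)

interface ReceivingModule {                    // module 401
    fun onFirstInput(onReceived: () -> Unit)
}

interface ObtainingModule {                    // module 402
    fun obtainTargetAction(): ActionInfo
}

interface AcquisitionModule {                  // module 403
    fun acquire(): Pair<Frame2D, DepthFrame>   // one 2D image together with its depth
}

interface GenerationModule {                   // module 404
    fun generate(images: List<Frame2D>, depths: List<DepthFrame>): Model3D
}
```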
Optionally, with reference to Fig. 11, as shown in Fig. 12 the terminal 400 further includes an output module 405. The output module 405 is configured to, before the obtaining module 402 obtains the target action information of the user, sequentially output M pieces of prompt information, where each piece of prompt information is used to prompt the user to perform one action and includes at least one of a prompt text, a prompt image, and a prompt voice, and M is a positive integer less than or equal to N.
Optionally, with reference to Fig. 12, as shown in Fig. 13 the terminal 400 further includes a detection module 406. The detection module 406 is configured to detect, in response to the first input received by the receiving module 401, whether a first interface of the terminal 400 includes a human body image. The output module 405 is specifically configured to sequentially output the M pieces of prompt information in the case where the detection module 406 detects that the first interface includes a human body image.
Optionally, in the case where target prompt information includes a prompt image, the prompt image is displayed in a first area of a target interface, and the target interface includes the first interface; in the case where the target prompt information includes a prompt image and a prompt text, the prompt image is displayed in the first area of the target interface and the prompt text is displayed in a second area of the target interface.
Optionally, with reference to Fig. 11, as shown in Fig. 14 the terminal further includes a display module 407. The display module 407 is configured to display a second interface after the acquisition module 403 has acquired the N 2D images and the depth information of the N 2D images, where the second interface includes a target animation indicating the generating process of the 3D model.
Optionally, the display module 407 is further configured to, after the generation module 404 generates the 3D model, update the target animation displayed in the second interface to an image of the 3D model.
Optionally, the 3D model includes at least one of a figure model of the user and a face model of the user.
Optionally, the terminal 400 includes a first screen and a second screen, and the first input is an input on the first screen. The output module 405 is specifically configured to sequentially display the M pieces of prompt information on the first screen and sequentially display the M pieces of prompt information on the second screen, where, at the same moment, the content displayed on the first screen is the same as the content displayed on the second screen, and the content includes a preview image and prompt information, or a preview image and adjustment information.
The terminal 400 provided in the embodiments of the present invention can implement each process implemented by the terminal in the above method embodiments; to avoid repetition, details are not described here again.
According to the terminal provided in the embodiments of the present invention, the terminal first receives a first input of the user; then, in response to the first input, the terminal obtains target action information of the user; in the case where the target action information matches target preset action information, the terminal acquires a target two-dimensional (2D) image of the user and depth information of the target 2D image; and finally, after N 2D images of the user and the depth information of the N 2D images have been acquired, the terminal generates a 3D model according to the N 2D images and the depth information of the N 2D images. Because the target action information matches the target preset action information, the action performed by the user can be taken to match the action the terminal requires. In that case, the N 2D images acquired by the terminal can more accurately reflect the user's true figure, proportions, and build, and the depth information of these 2D images can more accurately reflect the three-dimensional form of the human body in the acquired images. Therefore, the 3D model generated by the above method is closer to the user's actual figure, and the matching degree is higher.
Fig. 15 is a schematic diagram of the hardware structure of a terminal implementing the embodiments of the present invention. The terminal 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components. A person skilled in the art can understand that the terminal structure shown in Fig. 15 does not constitute a limitation on the terminal; the terminal may include more or fewer components than shown, combine certain components, or have a different arrangement of components. In the embodiments of the present invention, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a laptop computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The user input unit 107 is configured to receive a first input of a user. The processor 110 is configured to obtain target action information of the user in response to the first input. The processor 110 is further configured to: in the case where the target action information matches target preset action information, acquire a target two-dimensional (2D) image of the user and depth information of the target 2D image; and after N 2D images of the user and the depth information of the N 2D images have been acquired, generate a 3D model according to the N 2D images and the depth information of the N 2D images, where N is a positive integer.
The embodiments of the present invention provide a terminal. The terminal first receives a first input of the user; then, in response to the first input, the terminal obtains target action information of the user; in the case where the target action information matches target preset action information, the terminal acquires a target two-dimensional (2D) image of the user and depth information of the target 2D image; and finally, after N 2D images of the user and the depth information of the N 2D images have been acquired, the terminal generates a 3D model according to the N 2D images and the depth information of the N 2D images. Because the target action information matches the target preset action information, the action performed by the user can be taken to match the action the terminal requires. In that case, the N 2D images acquired by the terminal can more accurately reflect the user's true figure, proportions, and build, and the depth information of these 2D images can more accurately reflect the three-dimensional form of the human body in the acquired images. Therefore, the 3D model generated by the above method is closer to the user's actual figure, and the matching degree is higher.
It should be understood that, in the embodiments of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during information transmission and reception or during a call. Specifically, after receiving downlink data from a base station, the radio frequency unit 101 passes the data to the processor 110 for processing, and it sends uplink data to the base station. In general, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 may also communicate with a network and other devices through a wireless communication system.
Terminal provides wireless broadband internet by network module 102 for user and accesses, and such as user is helped to receive and dispatch electricity Sub- mail, browsing webpage and access streaming video etc..
Audio output unit 103 can be received by radio frequency unit 101 or network module 102 or in memory 109 The audio data of storage is converted into audio signal and exports to be sound.Moreover, audio output unit 103 can also provide and end The relevant audio output of specific function (for example, call signal receives sound, message sink sound etc.) that end 100 executes.Sound Frequency output unit 103 includes loudspeaker, buzzer and receiver etc..
The input unit 104 is configured to receive audio or video signals. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes the image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processing unit 1041 may be stored in the memory 109 (or another storage medium) or sent via the radio frequency unit 101 or the network module 102. The microphone 1042 can receive sound and can process such sound into audio data. In a telephone call mode, the processed audio data may be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 and then output.
Terminal 100 further includes at least one sensor 105, such as optical sensor, motion sensor and other sensors. Specifically, optical sensor includes ambient light sensor and proximity sensor, wherein ambient light sensor can be according to ambient light Light and shade adjusts the brightness of display panel 1061, and proximity sensor can close display panel when terminal 100 is moved in one's ear 1061 and/or backlight.As a kind of motion sensor, accelerometer sensor can detect in all directions (generally three axis) and add The size of speed can detect that size and the direction of gravity when static, can be used to identify terminal posture (such as horizontal/vertical screen switching, Dependent game, magnetometer pose calibrating), Vibration identification correlation function (such as pedometer, tap) etc.;Sensor 105 can be with Including fingerprint sensor, pressure sensor, iris sensor, molecule sensor, gyroscope, barometer, hygrometer, thermometer, Infrared sensor etc., details are not described herein.
Display unit 106 is for showing information input by user or being supplied to the information of user.Display unit 106 can wrap Display panel 1061 is included, liquid crystal display (Liquid Crystal Display, LCD), Organic Light Emitting Diode can be used Forms such as (Organic Light-Emitting Diode, OLED) configure display panel 1061.
The user input unit 107 may be configured to receive input numbers or characters and to generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 1071 may include a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may further include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key and a switch key), a trackball, a mouse, and a joystick, which are not described here again.
Further, touch panel 1071 can be covered on display panel 1061, when touch panel 1071 is detected at it On or near touch operation after, send processor 110 to determine the type of touch event, be followed by subsequent processing device 110 according to touching The type for touching event provides corresponding visual output on display panel 1061.Although in Figure 15, touch panel 1071 and aobvious Show that panel 1061 is the function that outputs and inputs of realizing terminal as two independent components, but in certain embodiments, The function that outputs and inputs that touch panel 1071 and display panel 1061 can be integrated and be realized terminal, does not limit specifically herein It is fixed.
Interface unit 108 is the interface through which an external device is connected to terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. Interface unit 108 can be used to receive input (for example, data information, power, etc.) from an external device and transmit the received input to one or more elements within terminal 100, or can be used to transmit data between terminal 100 and an external device.
Memory 109 can be used to store software programs and various data. Memory 109 may mainly include a program storage area and a data storage area. The program storage area may store an operating system and application programs required by at least one function (such as a sound playing function or an image playing function); the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage component.
Processor 110 is the control center of the terminal. It connects the various parts of the entire terminal through various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in memory 109 and calling the data stored in memory 109, thereby monitoring the terminal as a whole. Processor 110 may include one or more processing units; preferably, processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into processor 110.
Terminal 100 may also include a power supply 111 (such as a battery) for supplying power to the various components. Preferably, power supply 111 may be logically connected to processor 110 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
In addition, terminal 100 includes some functional modules that are not shown, which are not described in detail here.
Optionally, an embodiment of the present invention further provides a terminal. With reference to Figure 15, the terminal includes processor 110, memory 109, and a computer program stored in memory 109 and executable on processor 110. When the computer program is executed by processor 110, each process of the above 3D model generating method embodiment is implemented, and the same technical effect can be achieved; to avoid repetition, details are not described here again.
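Purely as an illustration of how such a computer program might be organized, the following Kotlin sketch outlines the capture-and-generate flow of the method embodiment: for each preset action, the terminal checks whether the user's action matches, acquires a 2D image together with its depth information, and finally builds the 3D model from the N acquired frames. The helper functions currentActionMatches and captureFrame, and the Frame and Model3D types, are hypothetical stand-ins for the terminal's action matching and camera/depth acquisition; they are not defined by this disclosure.

// One 2D image and its depth information (hypothetical container types).
data class Frame(val image2D: ByteArray, val depth: FloatArray)
class Model3D(val frames: List<Frame>)

// Assumed stand-ins for action matching and camera/depth capture on the terminal.
fun currentActionMatches(presetAction: String): Boolean = true
fun captureFrame(): Frame = Frame(ByteArray(0), FloatArray(0))

fun generate3DModel(presetActions: List<String>): Model3D {
    val frames = mutableListOf<Frame>()
    for (action in presetActions) {            // one preset action per required view
        if (currentActionMatches(action)) {    // target action information matches the preset
            frames.add(captureFrame())         // acquire the target 2D image and its depth information
        }
    }
    return Model3D(frames)                     // after N frames are collected, generate the 3D model
}

fun main() {
    val model = generate3DModel(listOf("face_front", "face_left", "face_right"))
    println("frames captured: " + model.frames.size)
}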
An embodiment of the present invention further provides a computer readable storage medium on which a computer program is stored. When the computer program is executed by a processor, each process of the above 3D model generating method embodiment is implemented, and the same technical effect can be achieved; to avoid repetition, details are not described here again. The computer readable storage medium is, for example, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or apparatus. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes the element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments. The above specific embodiments are merely illustrative rather than restrictive. Inspired by the present invention, those of ordinary skill in the art can also make many other forms without departing from the purpose of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.

Claims (17)

1. A 3D model generating method, applied to a terminal, characterized in that the method comprises:
receiving a first input of a user;
in response to the first input, obtaining target action information of the user;
in a case where the target action information matches target preset action information, acquiring a target two-dimensional (2D) image of the user and depth information of the target 2D image;
after N 2D images of the user and depth information of the N 2D images are acquired, generating a 3D model according to the N 2D images and the depth information of the N 2D images, wherein N is a positive integer.
2. The method according to claim 1, characterized in that, before the obtaining target action information of the user, the method further comprises:
sequentially outputting M pieces of prompt information, wherein one piece of prompt information is used to prompt the user to perform one action, each piece of prompt information comprises at least one of a prompt text, a prompt image, and a prompt voice, and M is a positive integer less than or equal to N.
3. The method according to claim 2, characterized in that, after the receiving a first input of a user, the method further comprises:
in response to the first input, detecting whether a first interface of the terminal comprises a human body image;
wherein the sequentially outputting M pieces of prompt information comprises:
in a case where the first interface comprises the human body image, sequentially outputting the M pieces of prompt information.
4. The method according to claim 3, characterized in that:
in a case where target prompt information comprises a prompt image, the prompt image is displayed in a first area of a target interface, wherein the target interface comprises the first interface;
in a case where the target prompt information comprises a prompt image and a prompt text, the prompt image is displayed in the first area of the target interface, and the prompt text is displayed in a second area of the target interface.
5. The method according to claim 1, characterized in that the method further comprises:
after the N 2D images and the depth information of the N 2D images are acquired, displaying a second interface, wherein the second interface comprises a target animation, and the target animation is used to indicate a generating process of the 3D model.
6. The method according to claim 5, characterized in that, after the 3D model is generated, the method further comprises:
updating the target animation displayed in the second interface to a 3D model image.
7. The method according to claim 1, characterized in that the 3D model comprises at least one of a figure model of the user and a face model of the user.
8. The method according to claim 2, characterized in that the terminal comprises a first screen and a second screen, and the first input is an input on the first screen;
the sequentially outputting M pieces of prompt information comprises:
sequentially displaying the M pieces of prompt information on the first screen, and sequentially displaying the M pieces of prompt information on the second screen;
wherein, at a same moment, content displayed on the first screen is the same as content displayed on the second screen, and the content comprises a preview image and prompt information, or comprises a preview image and adjustment information.
9. A terminal, characterized in that the terminal comprises: a receiving module, an obtaining module, an acquisition module, and a generating module;
the receiving module is configured to receive a first input of a user;
the obtaining module is configured to obtain target action information of the user in response to the first input received by the receiving module;
the acquisition module is configured to acquire a target two-dimensional (2D) image of the user and depth information of the target 2D image in a case where the target action information obtained by the obtaining module matches target preset action information;
the generating module is configured to, after the acquisition module acquires N 2D images of the user and depth information of the N 2D images, generate a 3D model according to the N 2D images and the depth information of the N 2D images, wherein N is a positive integer.
10. The terminal according to claim 9, characterized in that the terminal further comprises an output module;
the output module is configured to sequentially output M pieces of prompt information before the obtaining module obtains the target action information of the user, wherein one piece of prompt information is used to prompt the user to perform one action, each piece of prompt information comprises at least one of a prompt text, a prompt image, and a prompt voice, and M is a positive integer less than or equal to N.
11. The terminal according to claim 10, characterized in that the terminal further comprises a detection module;
the detection module is configured to detect, in response to the first input received by the receiving module, whether a first interface of the terminal comprises a human body image;
the output module is specifically configured to sequentially output the M pieces of prompt information in a case where the detection module detects that the first interface comprises the human body image.
12. The terminal according to claim 11, characterized in that:
in a case where target prompt information comprises a prompt image, the prompt image is displayed in a first area of a target interface, wherein the target interface comprises the first interface;
in a case where the target prompt information comprises a prompt image and a prompt text, the prompt image is displayed in the first area of the target interface, and the prompt text is displayed in a second area of the target interface.
13. The terminal according to claim 9, characterized in that the terminal further comprises a display module;
the display module is configured to display a second interface after the acquisition module acquires the N 2D images and the depth information of the N 2D images, wherein the second interface comprises a target animation, and the target animation is used to indicate a generating process of the 3D model.
14. The terminal according to claim 13, characterized in that:
the display module is further configured to, after the generating module generates the 3D model, update the target animation displayed in the second interface to a 3D model image.
15. The terminal according to claim 9, characterized in that the 3D model comprises at least one of a figure model of the user and a face model of the user.
16. The terminal according to claim 10, characterized in that the terminal comprises a first screen and a second screen, and the first input is an input on the first screen;
the display module is specifically configured to sequentially display the M pieces of prompt information on the first screen, and to sequentially display the M pieces of prompt information on the second screen;
wherein, at a same moment, content displayed on the first screen is the same as content displayed on the second screen, and the content comprises a preview image and prompt information, or comprises a preview image and adjustment information.
17. A terminal, characterized in that the terminal comprises a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the 3D model generating method according to any one of claims 1 to 8.
CN201811447286.2A 2018-11-29 2018-11-29 3D model generation method and terminal Active CN109636898B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811447286.2A CN109636898B (en) 2018-11-29 2018-11-29 3D model generation method and terminal


Publications (2)

Publication Number Publication Date
CN109636898A true CN109636898A (en) 2019-04-16
CN109636898B CN109636898B (en) 2023-08-22

Family

ID=66070261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811447286.2A Active CN109636898B (en) 2018-11-29 2018-11-29 3D model generation method and terminal

Country Status (1)

Country Link
CN (1) CN109636898B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2413607A2 (en) * 2010-07-27 2012-02-01 LG Electronics Mobile terminal and method of controlling a three-dimensional image therein
CN103366782A (en) * 2012-04-06 2013-10-23 腾讯科技(深圳)有限公司 Method and device automatically playing expression on virtual image
CN103258078A (en) * 2013-04-02 2013-08-21 上海交通大学 Human-computer interaction virtual assembly system fusing Kinect equipment and Delmia environment
CN107551551A (en) * 2017-08-09 2018-01-09 广东欧珀移动通信有限公司 Game effect construction method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FANG WEI: "Research on the Application of 3D Imaging Technology in Smartphone Interaction Design", Journal of Jiamusi University (Natural Science Edition) *

Also Published As

Publication number Publication date
CN109636898B (en) 2023-08-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant