CN109597480A - Man-machine interaction method, device, electronic equipment and computer readable storage medium - Google Patents

Man-machine interaction method, device, electronic equipment and computer readable storage medium

Info

Publication number
CN109597480A
Authority
CN
China
Prior art keywords
model
target object
interaction
target
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811314208.5A
Other languages
Chinese (zh)
Inventor
纪纲
余雪亭
徐毅刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201811314208.5A priority Critical patent/CN109597480A/en
Publication of CN109597480A publication Critical patent/CN109597480A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention discloses a man-machine interaction method, a device, electronic equipment and a computer readable storage medium. The method includes: when an interaction request of a target user for a target object is received, determining a head model of the target user according to collected face characteristic information; determining a target object model of the target object in a preset object model library; based on a physical engine corresponding to the target object model, controlling the target object model to move relative to the head model, and obtaining a motion trajectory of the target object model; and rendering the motion trajectory on a display screen of the electronic equipment to show the interaction content of the interaction request. In the above scheme, by constructing the head model of the user and the target object model, and by realizing the relative motion between the two models through the physical engine, human-computer interaction can be achieved without requiring the user to wear a wearable interactive device, so that the operation of human-computer interaction is more convenient.

Description

Man-machine interaction method, device, electronic equipment and computer readable storage medium
Technical field
The present invention relates to the field of computer technology, and more particularly to a man-machine interaction method, a device, electronic equipment and a computer readable storage medium.
Background technique
With the continuous development of science and technology, human-computer interaction technology has also developed rapidly. Human-computer interaction technology refers to the exchange of information between a person and a computer: through the computer's input/output devices, a dialogue between the person and the computer is realized in an efficient way.
At present, human-computer interaction is widely applied in various fields. In the field of games, for example, a user can play a game through a wearable interactive device, such as a hand-held device or a helmet, and complete a game role's actions by making specific movements or gestures through the interactive device.
Human-computer interaction in the prior art depends on wearable interactive devices, and therefore has the technical problem of being inconvenient for the user to operate.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a man-machine interaction method, a device, electronic equipment and a computer readable storage medium that overcome the above problems or at least partially solve them.
In a first aspect, an embodiment of this specification provides a man-machine interaction method applied to electronic equipment, comprising:
when an interaction request of a target user for a target object is received, determining a head model of the target user according to collected face characteristic information;
determining a target object model of the target object in a preset object model library;
based on a physical engine corresponding to the target object model, controlling the target object model to move relative to the head model, and obtaining a motion trajectory of the target object model;
rendering the motion trajectory on a display screen of the electronic equipment, to show the interaction content of the interaction request.
Optionally, determining the head model of the target user according to the collected face characteristic information comprises:
obtaining a face recognition result according to the face characteristic information, wherein the face recognition result includes a display size and a display position of the face of the target user on the display screen;
determining a model size and a model position of the head model according to the display size and the display position.
Optionally, before controlling the target object model to move relative to the head model based on the physical engine corresponding to the target object model and obtaining the motion trajectory of the target object model, the method further comprises:
determining a model parameter of the target object model based on a physical parameter of the target object in the real world;
creating the physical engine according to an interaction rule of the interaction content and the model parameter of the target object model.
Optionally, the method further comprises:
determining a target interactive scene according to the interaction request;
displaying the target interactive scene on the display screen, so that the interaction between the target object model and the head model is executed in the target interactive scene.
Optionally, controlling the target object model to move relative to the head model based on the physical engine corresponding to the target object model, and obtaining the motion trajectory of the target object model, comprises:
when the target object model is a ball model and the interaction request is a heading request, controlling the ball model to move relative to the head model in a preset collision space, and obtaining a motion trajectory of the ball model.
Optionally, after controlling the ball model to move relative to the head model and obtaining the motion trajectory of the ball model when the target object model is a ball model and the interaction request is a heading request, the method further comprises:
after detecting that the ball model has collided with the head model, determining a post-collision trajectory of the ball model;
ending the interaction when the post-collision trajectory goes beyond the preset collision space.
In a second aspect, an embodiment of this specification provides a human-computer interaction device applied to electronic equipment, comprising:
a head model determining module, configured to determine a head model of a target user according to collected face characteristic information when an interaction request of the target user for a target object is received;
a target object model determining module, configured to determine a target object model of the target object in a preset object model library;
a control module, configured to control the target object model to move relative to the head model based on a physical engine corresponding to the target object model, and to obtain a motion trajectory of the target object model;
a rendering module, configured to render the motion trajectory on a display screen of the electronic equipment, to show the interaction content of the interaction request.
Optionally, the head model determining module is configured to:
obtain a face recognition result according to the face characteristic information, wherein the face recognition result includes a display size and a display position of the face of the target user on the display screen;
determine a model size and a model position of the head model according to the display size and the display position.
Optionally, the device further comprises:
a parameter obtaining module, configured to determine a model parameter of the target object model based on a physical parameter of the target object in the real world;
a creation module, configured to create the physical engine according to an interaction rule of the interaction content and the model parameter of the target object model.
Optionally, the device further comprises:
a scene determining module, configured to determine a target interactive scene according to the interaction request;
a first processing module, configured to display the target interactive scene on the display screen, so that the interaction between the target object model and the head model is executed in the target interactive scene.
Optionally, the control module is configured to:
when the target object model is a ball model and the interaction request is a heading request, control the ball model to move relative to the head model in a preset collision space, and obtain a motion trajectory of the ball model.
Optionally, the device further comprises:
a trajectory determining module, configured to determine a post-collision trajectory of the ball model after detecting that the ball model has collided with the head model;
a second processing module, configured to end the interaction when the post-collision trajectory goes beyond the preset collision space.
In a third aspect, an embodiment of this specification provides electronic equipment, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor executes the steps of any of the above methods when running the program.
In a fourth aspect, an embodiment of this specification provides a computer readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of any of the above methods are realized.
The embodiments of this specification have the following beneficial effects:
When an interaction request of a target user for a target object is received and the human-computer interaction scene is entered, a head model of the target user is determined according to the collected face characteristic information, and a target object model of the target object is determined in a preset object model library; based on a physical engine corresponding to the target object model, the target object model is controlled to move relative to the head model, and a motion trajectory of the target object model is obtained; the motion trajectory is rendered on the display screen of the electronic equipment to show the interaction content of the interaction request. In the above scheme, by constructing the head model of the user and the target object model, and by realizing the relative motion between the two models through the physical engine, human-computer interaction can be achieved without requiring the user to wear a wearable interactive device, so that the operation of human-computer interaction is more convenient.
Detailed description of the invention
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, the same reference numbers are used to refer to the same parts. In the drawings:
Fig. 1 is a flow chart of a man-machine interaction method provided by the first aspect of an embodiment of this specification;
Fig. 2 is a flow chart of a method for determining a head model provided by an embodiment of this specification;
Fig. 3 is a schematic diagram of a human-computer interaction device provided by the second aspect of an embodiment of this specification;
Fig. 4 is a schematic diagram of electronic equipment provided by the third aspect of an embodiment of this specification.
Specific embodiment
The embodiments of this specification disclose a man-machine interaction method, a device, electronic equipment and a computer readable storage medium. By constructing a head model of the user and a target object model, and by realizing the relative motion between the two models through a physical engine, human-computer interaction is achieved without requiring the user to wear a wearable interactive device, so that the operation of human-computer interaction is more convenient. The man-machine interaction method is applied to electronic equipment and comprises: when an interaction request of a target user for a target object is received, determining a head model of the target user according to collected face characteristic information; determining a target object model of the target object in a preset object model library; based on a physical engine corresponding to the target object model, controlling the target object model to move relative to the head model, and obtaining a motion trajectory of the target object model; rendering the motion trajectory on a display screen of the electronic equipment, to show the interaction content of the interaction request.
The technical solution of the present invention is described in detail below through the drawings and specific embodiments. It should be understood that the specific features in the embodiments of the present application are a detailed explanation of the technical solution of the present application, not a restriction on it; in the absence of conflict, the technical features in the embodiments of the present application can be combined with each other.
The term "and/or" herein only describes an association relationship between associated objects, indicating that three relationships may exist. For example, A and/or B can indicate: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the objects before and after it.
Embodiment
In a first aspect, an embodiment of this specification provides a man-machine interaction method applied to electronic equipment. As shown in Fig. 1, which is a flow chart of the man-machine interaction method provided by an embodiment of this specification, the method includes the following steps:
Step S11: when an interaction request of a target user for a target object is received, determining a head model of the target user according to collected face characteristic information;
In this embodiment of the specification, the electronic equipment can be a device such as a mobile phone or a tablet computer, and an image collecting device can be provided on the electronic equipment for collecting the face characteristic information of the user.
It should be understood that the target user can be any user who carries out human-computer interaction with the electronic equipment. The interaction request can be generated by the user clicking an interaction key on the display screen of the electronic equipment, or by pressing an external key on the electronic equipment for triggering the interactive operation. In this embodiment, the electronic equipment can provide a variety of target objects for interaction, such as a football, a gift box or other interactive objects.
When the interaction request is received, the electronic equipment can collect the face characteristic information of the target user through the image collecting device and determine the head model of the target user according to the face characteristic information; the head model can be a three-dimensional model. In this embodiment, the head model can be preset and directly called when the interaction request is received, or it can be generated in real time. In one embodiment, a head model library can be provided in advance, containing head models of various shapes, from which the user can select according to his or her preference. In another embodiment, the head model can be constructed in real time according to the face characteristic information of the target user; for example, a transparent head model is constructed and loaded onto the face recognition result of the target user.
Step S12: determining a target object model of the target object in a preset object model library;
It should be understood that the interaction request may include the target object of the interaction, and different target objects correspond to different target object models. In this embodiment of the specification, a target object model is constructed for each different target object, and the target object model can be a three-dimensional model. For example, when the target object is a football, a rigid body model corresponding to the football is constructed; when the target object is a gift box, a model corresponding to the gift box is constructed. The preset object model library is the set of these different target object models. When the interaction request for the target object is received, the target object model can be called directly from the preset object model library.
Step S13: based on a physical engine corresponding to the target object model, controlling the target object model to move relative to the head model, and obtaining a motion trajectory of the target object model;
In this embodiment of the specification, in order to realize the interaction between the target object model and the head model, the physical engine assigns the target object model the physical attributes it requires, for example physical attributes such as the mass and coefficient of resilience of the target object model. Once the target object model has these physical attributes, the corresponding physical movement can be simulated according to them, and the model can move relative to the head model. In one embodiment, the relative motion between the target object model and the head model can be a collision motion, i.e. the target object model springs back after colliding with the head model, falls again, and collides with the head model again. When the target object model moves, the motion trajectory of the target object model is obtained; the motion trajectory may include parameters such as the direction and distance of movement.
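As a minimal illustrative sketch of the rebound behaviour described above (not part of the patent's disclosure — the gravity value, coefficient of resilience, time step, and drop heights are all assumptions for demonstration), the fall-bounce-fall cycle of the ball against the head model can be simulated with a simple 1-D physics step:

```python
# Minimal 1-D physics step: the target object model (a ball) falls under
# gravity and, on colliding with the head model, springs back with a
# coefficient of resilience. All numeric values are assumed.
GRAVITY = -9.8      # m/s^2
RESTITUTION = 0.8   # coefficient of resilience
DT = 1.0 / 60.0     # simulation time step (60 Hz)

def step(height, velocity, head_top):
    """Advance the ball one time step; bounce when it reaches the head model.

    Returns (new_height, new_velocity, collided).
    """
    velocity += GRAVITY * DT
    height += velocity * DT
    collided = False
    if height <= head_top and velocity < 0:  # falling onto the head model
        height = head_top
        velocity = -velocity * RESTITUTION   # spring back, losing some energy
        collided = True
    return height, velocity, collided

# Drop the ball from 3 m onto a head model whose top is at 1.7 m and
# record the motion trajectory of the ball model.
h, v = 3.0, 0.0
trajectory, bounces = [], 0
for _ in range(600):
    h, v, hit = step(h, v, head_top=1.7)
    trajectory.append(h)
    bounces += hit
```

The recorded `trajectory` list plays the role of the "motion trajectory" that is later handed to the rendering step.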
Step S14: rendering the motion trajectory on the display screen of the electronic equipment, to show the interaction content of the interaction request.
In this embodiment of the specification, the motion trajectory can be rendered by a rendering engine. For example, when the target object model is a football model, the rendering engine can render the process of the football model moving along the motion trajectory. In addition, when the head model is a head model of a preset shape, the head model can also be rendered on the display screen; if the head model is a transparent model, the head model is not rendered.
In one embodiment, the physical engine can be developed on the basis of the rendering engine. For example, the physical engine can be the Bullet physics engine and the rendering engine can be the bgfx rendering engine; by mapping the bgfx rendering coordinate system to the Bullet physical space, the rendering engine and the physical engine are combined, so that the movement of the target object can be rendered on the display screen.
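As a hypothetical sketch of the coordinate mapping just described (it does not use the actual Bullet or bgfx APIs, and the space and screen dimensions are assumptions), a position in the physics-world space, with the y axis pointing up in metres, can be converted to screen coordinates with the y axis pointing down in pixels:

```python
def physics_to_screen(x_m, y_m, screen_w_px, screen_h_px,
                      space_w_m, space_h_m):
    """Map a point in the physics-world space (metres, y up) to
    screen coordinates (pixels, y down) for rendering."""
    sx = x_m / space_w_m * screen_w_px
    sy = (1.0 - y_m / space_h_m) * screen_h_px  # flip the vertical axis
    return sx, sy
```

With this mapping, an object at the top of a 3 m physics space renders at the top edge of the screen, and an object on the ground renders at the bottom edge.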
As shown in Fig. 2, the method for determining the head model provided by an embodiment of this specification includes the following steps.
Step S21: obtaining a face recognition result according to the face characteristic information, wherein the face recognition result includes a display size and a display position of the face of the target user on the display screen;
Step S22: determining a model size and a model position of the head model according to the display size and the display position.
It should be understood that the collection of the face characteristic information is carried out in real time. That is to say, when the position of the target user changes relative to the image collecting device of the electronic equipment, the obtained face recognition result also changes; in order to guarantee the real-time quality of the human-computer interaction, the head model needs to be adjusted in real time according to the face recognition result.
In this embodiment of the specification, face recognition can be carried out based on AI (Artificial Intelligence) technology. The face recognition result may include the display size and display position of the face of the target user on the display screen, and may of course also include other recognition results, which are not limited here. The model size and model position of the head model are determined according to the display size and display position of the user's face on the display screen. In one embodiment, the display size and the model size can be set to be identical, and the display position and the model position can also be set to be identical. In another embodiment, the difference between the display size and the model size is less than a first threshold, and the difference between the display position and the model position is less than a second threshold. In addition, when the face recognition result shows that the display size and display position of the target user's face have changed, the model size and model position of the head model are adjusted correspondingly.
For example, when the target user interacts using a mobile phone, the image of the target user can be collected through the camera of the mobile phone and shown on the display screen; based on the image of the target user, the face recognition features are obtained, the current display size and display position of the face are determined, and the head model is loaded onto the head position of the target user according to the display size and display position.
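A toy sketch of the step just described, under stated assumptions: the face recognition result is taken to be a bounding box in pixels, the head model is scaled and placed to match it, and the first/second thresholds are read as tolerances below which the model is not updated (one plausible reading of the thresholded variant; the data shapes and tolerance values are not the patent's actual structures):

```python
from dataclasses import dataclass

@dataclass
class FaceResult:
    cx: float      # centre of the detected face on the display screen (px)
    cy: float
    width: float   # display size of the face (px)
    height: float

def fit_head_model(face, size_tol=2.0, pos_tol=2.0, current=None):
    """Return (model_size, model_position) matching the face on screen;
    keep the current model when the change is below both thresholds."""
    target_size = (face.width, face.height)
    target_pos = (face.cx, face.cy)
    if current is not None:
        cur_size, cur_pos = current
        if (abs(cur_size[0] - target_size[0]) < size_tol
                and abs(cur_size[1] - target_size[1]) < size_tol
                and abs(cur_pos[0] - target_pos[0]) < pos_tol
                and abs(cur_pos[1] - target_pos[1]) < pos_tol):
            return current  # change too small: keep the model as-is
    return target_size, target_pos
```

Calling this on each frame's recognition result gives the real-time adjustment of the head model that the paragraph describes.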
Optionally, before controlling the target object model to move relative to the head model based on the physical engine corresponding to the target object model and obtaining the motion trajectory of the target object model, the method further comprises: determining a model parameter of the target object model based on a physical parameter of the target object in the real world; and creating the physical engine according to an interaction rule of the interaction content and the model parameter of the target object model.
In this embodiment of the specification, the physical engine can construct a virtual physical world space corresponding to the real world, and the model parameter of the target object model is obtained by mapping the physical parameter of the target object in the real world into the physical world space. For example, when the target object is a football, parameters such as the mass, size and friction of the football in the real world are mapped into the virtual physical world space in equal proportion, so that parameters such as the mass, size and friction of the football model in the virtual physical world space are identical to those of the real football in the real world.
In this embodiment of the specification, different interaction content can correspond to different interaction rules. For example, when the interaction content is heading a ball, the interaction rule can be that the ball springs back again after the head model collides with the football model; when the interaction content is heading a gift box, the interaction rule is that the gift box model breaks open after the head model collides with it. The physical engine is created according to the interaction rule and the model parameter of the target object model, so that the physical engine assigns the target object model its physical parameters and makes it move according to the interaction rule.
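The two engine-creation inputs above — real-world physical parameters mapped in equal proportion, plus an interaction rule — can be sketched as follows (an illustrative, non-authoritative sketch; the football parameter values and the dictionary shape are assumptions):

```python
# Equal-proportion mapping of a real-world object's physical parameters
# into the virtual physical world space, then creation of a physics
# configuration that also carries the interaction rule.
REAL_FOOTBALL = {"mass_kg": 0.43, "diameter_m": 0.22, "friction": 0.4}

def to_physics_space(params, scale=1.0):
    """Map real-world parameters in equal proportion; with scale=1.0 the
    model parameters are identical to the real object's parameters."""
    return {
        "mass_kg": params["mass_kg"],
        "diameter_m": params["diameter_m"] * scale,
        "friction": params["friction"],
    }

def create_physics_config(interaction_rule, model_params):
    """Bundle the interaction rule (e.g. spring back on head collision,
    or break open on head collision) with the model parameters."""
    return {"rule": interaction_rule, **model_params}
```

A heading game would then build its engine from `create_physics_config("rebound_on_head_collision", to_physics_space(REAL_FOOTBALL))`, while a gift-box game would swap in a "break on collision" rule and the gift box's parameters.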
Optionally, the method further comprises: determining a target interactive scene according to the interaction request; and displaying the target interactive scene on the display screen, so that the interaction between the target object model and the head model is executed in the target interactive scene.
In this embodiment of the specification, in order to make the human-computer interaction more lifelike, different target interactive scenes can be selected according to the interaction request. For example, when the interaction request is a heading request, the target interactive scene can be a football pitch; when the interaction request is a gift request, the target interactive scene can be a present-distribution scene. Each kind of interaction request can correspond to one or more interactive scenes; these interactive scenes can be preset and directly called when the interaction request is received.
In one embodiment, the switching of scenes can be realized through human body image segmentation: the electronic equipment collects the image of the user, segments the image of the user out of the current scene, switches the displayed scene of the electronic equipment to the target interactive scene, and then loads the image of the user into the target interactive scene; in this way, the user can interact with the target object in the target interactive scene. Of course, in addition to the above method, the process of the user interacting with the target object in the target interactive scene can also be realized by other means, which are not limited here.
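The compositing that follows the segmentation step can be sketched as a toy mask overlay (real systems would operate on image tensors and a model-produced segmentation mask; the list-of-rows pixel representation here is purely illustrative):

```python
def composite(user_img, mask, scene_img):
    """Overlay the segmented user image onto the target interactive
    scene: keep the user pixel where the mask is 1, and the scene
    pixel where it is 0. Images are given as rows of pixel values."""
    return [
        [u if m else s for u, m, s in zip(urow, mrow, srow)]
        for urow, mrow, srow in zip(user_img, mask, scene_img)
    ]
```

Running this per frame places the live user inside the football pitch or present-distribution scene while the rest of the camera background is replaced.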
In order to better understand the man-machine interaction method provided by the embodiments of this specification, the following description takes the case where the target object model is a ball model and the interaction request is a heading request as an example.
In this embodiment, controlling the target object model to move relative to the head model based on the physical engine corresponding to the target object model, and obtaining the motion trajectory of the target object model, comprises: when the target object model is a ball model and the interaction request is a heading request, controlling the ball model to move relative to the head model in a preset collision space, and obtaining the motion trajectory of the ball model.
It should be understood that the ball model can be a football model, a basketball model, etc. Heading is usually performed with the head of the user; therefore, the head model can be a model of the contour of the target user's head, to ensure that what interacts with the ball model is the head of the target user rather than other parts such as the face or the chin.
In this embodiment of the specification, the preset collision space can be a three-dimensional space, and the preset collision space is adapted to the display size of the display screen of the electronic equipment, i.e. the preset collision space is completely displayed on the display screen. The preset collision space can be the interaction space of the human-computer interaction. In one embodiment, taking a football model as an example, if the football model is initially set to fall from a position 3 m high, the physical engine can set the height of the preset collision space to 3 m. If the football initially falls at the top edge of the display screen, and the bottom edge of the display screen corresponds to the ground of the preset collision space, then the vertical size of the display screen can be mapped to 3 m, and the move distance of the football model in the preset collision space can then be reflected correspondingly in the move distance shown on the display screen.
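The 3 m-to-full-screen mapping in the example above is a simple proportional conversion; as a sketch (the display height in pixels is an assumption):

```python
SPACE_HEIGHT_M = 3.0     # height of the preset collision space, as above
SCREEN_HEIGHT_PX = 1920  # assumed vertical size of the display screen

def metres_to_pixels(distance_m):
    """Convert a vertical move distance in the collision space to the
    corresponding on-screen distance."""
    return distance_m / SPACE_HEIGHT_M * SCREEN_HEIGHT_PX
```

So a ball that falls the full 3 m traverses the whole screen height, and a 1.5 m movement traverses half of it.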
Optionally, after controlling the ball model to move relative to the head model and obtaining the motion trajectory of the ball model, the method further comprises: after detecting that the ball model has collided with the head model, determining a post-collision trajectory of the ball model; and ending the interaction when the post-collision trajectory goes beyond the preset collision space.
Still taking the football model as an example: when the football model falls, the target user controls the movement of the head model by moving his or her head. When the head model moves below the football, the head model collides with the football model; the football model is bounced away and then falls back under the action of its own gravity, and the target user again moves his or her head to move the head model so that the head model collides with the football model again. If the head model does not touch the football model while it is falling, i.e. no collision occurs, the interaction ends. In addition, if, after the head model collides with the football model, the football model flies out of the preset collision space, i.e. out of the range of the display screen, the interaction also ends.
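The two ending conditions in this paragraph can be sketched as a single predicate (the coordinate conventions, ground level, and space width are assumptions for illustration):

```python
def interaction_over(ball_x, ball_y, collided, space_width, ground_y=0.0):
    """The heading interaction ends when the ball reaches the ground
    without being headed (a miss), or when it flies sideways out of
    the preset collision space after a collision."""
    missed = ball_y <= ground_y
    flew_out = collided and not (0.0 <= ball_x <= space_width)
    return missed or flew_out
```

A game loop would evaluate this after each physics step and stop rendering new trajectory segments once it returns true.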
In a second aspect, based on the same inventive concept, an embodiment of this specification provides a human-computer interaction apparatus applied to an electronic device. Referring to FIG. 3, the apparatus includes:
a head model determining module 31, configured to determine, when an interaction request of a target user for a target object is received, the head model of the target user according to collected facial feature information;
a target object model determining module 32, configured to determine the target object model of the target object in a preset object model library;
a control module 33, configured to control, based on a physics engine corresponding to the target object model, the target object model to move relative to the head model, and to obtain the motion trajectory of the target object model; and
a rendering module 34, configured to render the motion trajectory on the display screen of the electronic device so as to present the interaction content of the interaction request.
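The four modules above (head model determination, target object model lookup, physics-engine-driven control, and rendering) form a simple pipeline. A structural sketch in Python, with every class and method name invented here purely for illustration:

```python
class HumanComputerInteractionDevice:
    """Sketch of the four-module pipeline; each module is a small method."""

    def __init__(self, model_library, physics_engine, display):
        self.model_library = model_library    # preset object model library
        self.physics_engine = physics_engine  # engine matched to the object model
        self.display = display

    def determine_head_model(self, face_features):
        # Module 31: derive the head model from collected facial features.
        return {"size": face_features["size"], "position": face_features["position"]}

    def determine_target_object_model(self, target_object):
        # Module 32: look up the target object in the preset model library.
        return self.model_library[target_object]

    def control(self, object_model, head_model):
        # Module 33: let the physics engine move the object relative to the head.
        return self.physics_engine.simulate(object_model, head_model)

    def render(self, trajectory):
        # Module 34: draw the motion trajectory on the display screen.
        self.display.draw(trajectory)

    def handle_interaction_request(self, target_object, face_features):
        head = self.determine_head_model(face_features)
        obj = self.determine_target_object_model(target_object)
        trajectory = self.control(obj, head)
        self.render(trajectory)
        return trajectory


# Minimal stubs so the pipeline can be exercised end to end.
class _StubEngine:
    def simulate(self, obj, head):
        return [(0.0, 3.0), (0.02, 2.99)]  # toy trajectory: (time, height)

class _StubDisplay:
    def __init__(self):
        self.drawn = None
    def draw(self, trajectory):
        self.drawn = trajectory

device = HumanComputerInteractionDevice(
    {"football": {"radius": 0.11}}, _StubEngine(), _StubDisplay())
traj = device.handle_interaction_request(
    "football", {"size": 120, "position": (400, 300)})
print(len(traj))  # 2
```

This mirrors only the data flow between the modules; the real apparatus would run the control/render steps continuously per frame.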
In an optional implementation, the head model determining module 31 is configured to:
obtain a face recognition result according to the facial feature information, the face recognition result including the display size and display position of the target user's face on the display screen; and
determine the model size and model position of the head model according to the display size and the display position.
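Deriving the head model's size and position from the face recognition result is essentially a proportional scaling of the detected face box. A hedged sketch, where the scale factor and the bounding-box format are assumptions, not details from the patent:

```python
def head_model_from_face(face_box, scale=1.4):
    """Derive a head model's size and position from a face bounding box.

    face_box: (x, y, w, h) of the detected face on the display screen, in pixels.
    scale:    how much larger the head model is than the face box (assumed 1.4,
              since a head extends beyond the detected face region).
    Returns a dict with the model's size and centre position.
    """
    x, y, w, h = face_box
    model_w, model_h = w * scale, h * scale
    centre = (x + w / 2, y + h / 2)  # keep the model centred on the face
    return {"size": (model_w, model_h), "position": centre}

head = head_model_from_face((100, 200, 80, 100))
print(head["position"])  # (140.0, 250.0)
```

As the user moves, re-running this on each new face recognition result keeps the head model tracking the user's actual head.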
In an optional implementation, the apparatus further includes:
a parameter obtaining module, configured to determine the model parameters of the target object model based on the physical parameters of the target object in the real world; and
a creation module, configured to create the physics engine according to the interaction rules of the interaction content and the model parameters of the target object model.
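Creating the physics engine from the object's real-world physical parameters and the interaction rules can be sketched as configuring a tiny fixed-step integrator. All parameter names and values here are illustrative assumptions, not the patent's actual engine:

```python
class SimplePhysicsEngine:
    """Toy physics engine: gravity plus a restitution (bounciness) coefficient."""

    def __init__(self, gravity=-9.8, restitution=0.8, dt=0.02):
        self.gravity = gravity          # m/s^2, from the interaction rules
        self.restitution = restitution  # fraction of speed kept after a bounce
        self.dt = dt                    # fixed simulation step, seconds

    def step(self, y, vy):
        """Advance one step; bounce off the ground (y = 0)."""
        vy += self.gravity * self.dt
        y += vy * self.dt
        if y < 0:
            y, vy = 0.0, -vy * self.restitution
        return y, vy


def create_physics_engine(model_params, interaction_rules):
    # Map the target object's model parameters and the interaction content's
    # rules onto engine settings (the key names are assumptions).
    return SimplePhysicsEngine(
        gravity=interaction_rules.get("gravity", -9.8),
        restitution=model_params.get("elasticity", 0.8),
    )

engine = create_physics_engine({"elasticity": 0.6}, {"gravity": -9.8})
y, vy = 3.0, 0.0
for _ in range(100):  # simulate 2 seconds of the 3 m drop
    y, vy = engine.step(y, vy)
print(0.0 <= y <= 3.0)  # True: the ball stays inside the 3 m collision space
```

A production engine would of course also handle horizontal motion and collisions with the head model, but the same idea applies: real-world parameters (mass, elasticity) become engine configuration.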
In an optional implementation, the apparatus further includes:
a scene determining module, configured to determine a target interaction scene according to the interaction request; and
a first processing module, configured to display the target interaction scene on the display screen, so that the interaction between the target object model and the head model is performed in the target interaction scene.
In an optional implementation, the control module 33 is configured to:
when the target object model is a football model and the interaction request is a heading request, control the football model to move in the preset collision space relative to the head model, and obtain the motion trajectory of the football model.
In an optional implementation, the apparatus further includes:
a trajectory determining module, configured to determine the post-collision trajectory of the football model after detecting that the football model collides with the head model; and
a second processing module, configured to end the interaction when the post-collision trajectory goes beyond the preset collision space.
With regard to the above apparatus, the specific functions of the respective modules have been described in detail in the embodiments of the human-computer interaction method provided by the embodiments of the present invention, and are not repeated here.
In a third aspect, based on the same inventive concept as the human-computer interaction method in the foregoing embodiments, the present invention further provides an electronic device. As shown in FIG. 4, the electronic device includes a memory 504, a processor 502, and a computer program stored in the memory 504 and executable on the processor 502, wherein the processor 502, when executing the program, implements the steps of any one of the human-computer interaction methods described above.
In FIG. 4, the bus architecture (represented by bus 500) may include any number of interconnected buses and bridges. Bus 500 links together various circuits, including the one or more processors represented by processor 502 and the memory represented by memory 504. Bus 500 may also link together various other circuits such as peripheral devices, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface 506 provides an interface between bus 500 and a receiver 501 and a transmitter 503. The receiver 501 and the transmitter 503 may be the same element, i.e., a transceiver, providing a unit for communicating with various other apparatuses over a transmission medium. The processor 502 is responsible for managing bus 500 and general processing, while the memory 504 may be used to store data used by the processor 502 when performing operations.
In a fourth aspect, based on the same inventive concept as the human-computer interaction method in the foregoing embodiments, the present invention further provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of any one of the human-computer interaction methods described above.
This specification is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of this specification. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once they learn of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed to include the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include these modifications and variations.
The invention further discloses A1, a human-computer interaction method applied to an electronic device, the method including:
when receiving an interaction request of a target user for a target object, determining the head model of the target user according to collected facial feature information;
determining the target object model of the target object in a preset object model library;
controlling, based on a physics engine corresponding to the target object model, the target object model to move relative to the head model, and obtaining the motion trajectory of the target object model; and
rendering the motion trajectory on the display screen of the electronic device to present the interaction content of the interaction request.
A2, the human-computer interaction method according to A1, wherein determining the head model of the target user according to the collected facial feature information includes:
obtaining a face recognition result according to the facial feature information, the face recognition result including the display size and display position of the target user's face on the display screen; and
determining the model size and model position of the head model according to the display size and the display position.
A3, the human-computer interaction method according to A1, wherein before the controlling, based on the physics engine corresponding to the target object model, the target object model to move relative to the head model and obtaining the motion trajectory of the target object model, the method further includes:
determining the model parameters of the target object model based on the physical parameters of the target object in the real world; and
creating the physics engine according to the interaction rules of the interaction content and the model parameters of the target object model.
A4, the human-computer interaction method according to A1, the method further including:
determining a target interaction scene according to the interaction request; and
displaying the target interaction scene on the display screen, so that the interaction between the target object model and the head model is performed in the target interaction scene.
A5, the human-computer interaction method according to any one of A1-A4, wherein the controlling, based on the physics engine corresponding to the target object model, the target object model to move relative to the head model and obtaining the motion trajectory of the target object model includes:
when the target object model is a football model and the interaction request is a heading request, controlling the football model to move in the preset collision space relative to the head model, and obtaining the motion trajectory of the football model.
A6, the human-computer interaction method according to A5, wherein, when the target object model is a football model and the interaction request is a heading request, after the football model is controlled to move relative to the head model and the motion trajectory of the football model is obtained, the method further includes:
after detecting that the football model collides with the head model, determining the post-collision trajectory of the football model; and
ending the interaction when the post-collision trajectory goes beyond the preset collision space.
B7, a human-computer interaction apparatus applied to an electronic device, the apparatus including:
a head model determining module, configured to determine, when an interaction request of a target user for a target object is received, the head model of the target user according to collected facial feature information;
a target object model determining module, configured to determine the target object model of the target object in a preset object model library;
a control module, configured to control, based on a physics engine corresponding to the target object model, the target object model to move relative to the head model, and to obtain the motion trajectory of the target object model; and
a rendering module, configured to render the motion trajectory on the display screen of the electronic device to present the interaction content of the interaction request.
B8, the human-computer interaction apparatus according to B7, wherein the head model determining module is configured to:
obtain a face recognition result according to the facial feature information, the face recognition result including the display size and display position of the target user's face on the display screen; and
determine the model size and model position of the head model according to the display size and the display position.
B9, the human-computer interaction apparatus according to B7, the apparatus further including:
a parameter obtaining module, configured to determine the model parameters of the target object model based on the physical parameters of the target object in the real world; and
a creation module, configured to create the physics engine according to the interaction rules of the interaction content and the model parameters of the target object model.
B10, the human-computer interaction apparatus according to B7, the apparatus further including:
a scene determining module, configured to determine a target interaction scene according to the interaction request; and
a first processing module, configured to display the target interaction scene on the display screen, so that the interaction between the target object model and the head model is performed in the target interaction scene.
B11, the human-computer interaction apparatus according to any one of B7-B10, wherein the control module is configured to:
when the target object model is a football model and the interaction request is a heading request, control the football model to move in the preset collision space relative to the head model, and obtain the motion trajectory of the football model.
B12, the human-computer interaction apparatus according to B11, the apparatus further including:
a trajectory determining module, configured to determine the post-collision trajectory of the football model after detecting that the football model collides with the head model; and
a second processing module, configured to end the interaction when the post-collision trajectory goes beyond the preset collision space.
C13, an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method according to any one of A1-A6.
D14, a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the method according to any one of A1-A6.

Claims (10)

1. A human-computer interaction method applied to an electronic device, wherein the method includes:
when receiving an interaction request of a target user for a target object, determining the head model of the target user according to collected facial feature information;
determining the target object model of the target object in a preset object model library;
controlling, based on a physics engine corresponding to the target object model, the target object model to move relative to the head model, and obtaining the motion trajectory of the target object model; and
rendering the motion trajectory on the display screen of the electronic device to present the interaction content of the interaction request.
2. The human-computer interaction method according to claim 1, wherein determining the head model of the target user according to the collected facial feature information includes:
obtaining a face recognition result according to the facial feature information, the face recognition result including the display size and display position of the target user's face on the display screen; and
determining the model size and model position of the head model according to the display size and the display position.
3. The human-computer interaction method according to claim 1, wherein before the controlling, based on the physics engine corresponding to the target object model, the target object model to move relative to the head model and obtaining the motion trajectory of the target object model, the method further includes:
determining the model parameters of the target object model based on the physical parameters of the target object in the real world; and
creating the physics engine according to the interaction rules of the interaction content and the model parameters of the target object model.
4. The human-computer interaction method according to claim 1, wherein the method further includes:
determining a target interaction scene according to the interaction request; and
displaying the target interaction scene on the display screen, so that the interaction between the target object model and the head model is performed in the target interaction scene.
5. The human-computer interaction method according to any one of claims 1-4, wherein the controlling, based on the physics engine corresponding to the target object model, the target object model to move relative to the head model and obtaining the motion trajectory of the target object model includes:
when the target object model is a football model and the interaction request is a heading request, controlling the football model to move in the preset collision space relative to the head model, and obtaining the motion trajectory of the football model.
6. The human-computer interaction method according to claim 5, wherein, when the target object model is a football model and the interaction request is a heading request, after the football model is controlled to move relative to the head model and the motion trajectory of the football model is obtained, the method further includes:
after detecting that the football model collides with the head model, determining the post-collision trajectory of the football model; and
ending the interaction when the post-collision trajectory goes beyond the preset collision space.
7. A human-computer interaction apparatus applied to an electronic device, wherein the apparatus includes:
a head model determining module, configured to determine, when an interaction request of a target user for a target object is received, the head model of the target user according to collected facial feature information;
a target object model determining module, configured to determine the target object model of the target object in a preset object model library;
a control module, configured to control, based on a physics engine corresponding to the target object model, the target object model to move relative to the head model, and to obtain the motion trajectory of the target object model; and
a rendering module, configured to render the motion trajectory on the display screen of the electronic device to present the interaction content of the interaction request.
8. The human-computer interaction apparatus according to claim 7, wherein the head model determining module is configured to:
obtain a face recognition result according to the facial feature information, the face recognition result including the display size and display position of the target user's face on the display screen; and
determine the model size and model position of the head model according to the display size and the display position.
9. An electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method according to any one of claims 1-6.
10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1-6.
CN201811314208.5A 2018-11-06 2018-11-06 Man-machine interaction method, device, electronic equipment and computer readable storage medium Pending CN109597480A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811314208.5A CN109597480A (en) 2018-11-06 2018-11-06 Man-machine interaction method, device, electronic equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN109597480A true CN109597480A (en) 2019-04-09

Family

ID=65957795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811314208.5A Pending CN109597480A (en) 2018-11-06 2018-11-06 Man-machine interaction method, device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109597480A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110047124A (en) * 2019-04-23 2019-07-23 北京字节跳动网络技术有限公司 Method, apparatus, electronic equipment and the computer readable storage medium of render video

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102163077A (en) * 2010-02-16 2011-08-24 微软公司 Capturing screen objects using a collision volume
US20150123967A1 (en) * 2013-11-01 2015-05-07 Microsoft Corporation Generating an avatar from real time image data
CN107613310A (en) * 2017-09-08 2018-01-19 广州华多网络科技有限公司 A kind of live broadcasting method, device and electronic equipment
CN107820591A (en) * 2017-06-12 2018-03-20 美的集团股份有限公司 Control method, controller, Intelligent mirror and computer-readable recording medium
US20180158230A1 (en) * 2016-12-06 2018-06-07 Activision Publishing, Inc. Methods and Systems to Modify a Two Dimensional Facial Image to Increase Dimensional Depth and Generate a Facial Image That Appears Three Dimensional
CN108255304A (en) * 2018-01-26 2018-07-06 腾讯科技(深圳)有限公司 Video data handling procedure, device and storage medium based on augmented reality
CN108629339A (en) * 2018-06-15 2018-10-09 Oppo广东移动通信有限公司 Image processing method and related product




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination