CN112150617A - Control device and method of three-dimensional character model - Google Patents

Control device and method of three-dimensional character model

Info

Publication number
CN112150617A
CN112150617A (application CN202011065117.XA)
Authority
CN
China
Prior art keywords
module
face
virtual
dimensional character
character model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011065117.XA
Other languages
Chinese (zh)
Inventor
不公告发明人 (inventor not disclosed)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinzhixin (Henan) Medical Technology Co.,Ltd.
Original Assignee
Shanxi Zhiyou Limin Health Management Consulting Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi Zhiyou Limin Health Management Consulting Co ltd filed Critical Shanxi Zhiyou Limin Health Management Consulting Co ltd
Priority to CN202011065117.XA
Publication of CN112150617A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G06T13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/70 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/08 - Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 - Indexing scheme for image generation or computer graphics
    • G06T2210/41 - Medical

Abstract

The invention relates to a control device and a control method for a three-dimensional character model, belongs to the technical field of psychological diagnosis and treatment devices, and addresses the prior art's lack of a control device and method for a virtual character model. The control device includes: a character portrait generating module, which selects corresponding five sense organs and assembles them into a complete character portrait according to the virtual character image described by the patient; a three-dimensional character model building module, which produces a Blendshape expression controller and a transform action controller based on the virtual portrait and generates a three-dimensional character model; a facial expression driving module, which controls the facial expression of the three-dimensional character model in real time through the Blendshape expression controller according to face capture values obtained from a face video; a plurality of sensors, which sense the psychotherapist's movements in real time and convert them into bone motion quaternion data; and an action driving module, which controls the whole-body movement of the three-dimensional character model in real time through the transform action controller according to the bone motion quaternion data. The expressions and movements of the three-dimensional character model are thus controlled in real time.

Description

Control device and method of three-dimensional character model
Technical Field
The invention relates to the technical field of psychological diagnosis and treatment devices, in particular to a control device and a control method of a three-dimensional character model.
Background
Auditory hallucination is a major perceptual disorder in psychiatry. It manifests as a false perception that feels real to the patient and arises without any external stimulus or the involvement of another person. About 75% of schizophrenic patients are diagnosed with auditory hallucinations; more importantly, the symptom often accompanies other conditions such as borderline personality disorder, post-traumatic stress disorder, epilepsy, Parkinson's disease, and dissociative, psychotic, and affective disorders, and it is also observed in people with no clinical diagnosis. Clinical practice has shown that medication alone does not effectively help these patients: about half of auditory-hallucination cases develop into chronic illness, and the hallucinations can persist for months or even years despite drug therapy. These symptoms are a frequent cause of psychiatric hospitalization, which further disconnects the patient from society.
Psychological disease refers to abnormality in a person's psychological processes, personality characteristics, and behavioral patterns arising from physiological, psychological, or social causes. It manifests as an impaired ability to speak and act in socially appropriate ways, producing various symptoms of distress. When the abnormality of the patient's psychological activity reaches the medical diagnostic standard, it is called a psychological disorder.
Given this situation, a new treatment scheme is urgently needed. One widely recognized approach is to reinforce the patient's hallucination and then guide it, which is generally regarded as the most effective treatment. On this basis, studies have demonstrated that treating psychological disorders with audiovisual techniques helps patients control their hallucinations in real life; this approach is still in the clinical trial phase.
To date, the existing technologies for diagnosing, treating, and alleviating psychological diseases and psychological disorders are far from mature, and there is no control device and method for a virtual character model that fits the user's imagined character image and hallucinated voice and can be used to psychologically counsel the patient and improve the patient's mental state.
Disclosure of Invention
In view of the foregoing analysis, embodiments of the present invention are directed to a control apparatus and method for a three-dimensional character model, so as to solve the prior art's lack of a control apparatus and method for a virtual character model that fits the user's imagined character image and hallucinated voice.
In one aspect, an embodiment of the present invention provides a control apparatus for a three-dimensional character model, including: a character portrait generating module, configured to select corresponding five sense organs from a plurality of five sense organ libraries and assemble them into a complete character portrait according to the virtual character image described by the patient; a three-dimensional character model building module, configured to produce a Blendshape expression controller and a transform action controller based on the virtual portrait and to generate a three-dimensional character model simulating the virtual character; a facial expression driving module, configured to control the facial expression of the three-dimensional character model in real time through the Blendshape expression controller according to face capture values obtained from a face video; a plurality of sensors, configured to sense in real time the movements of the psychotherapist wearing them and convert the movements into bone motion quaternion data; and an action driving module, configured to control the whole-body movement of the three-dimensional character model in real time through the transform action controller according to the bone motion quaternion data.
The beneficial effects of the above technical scheme are as follows: a three-dimensional character model fitting the user's imagined virtual character can be obtained, and the facial expression driving module and the action driving module control the facial expressions and movements of the three-dimensional character model in real time according to the psychotherapist's expressions and movements.
Based on a further improvement of the above device, the control device of the three-dimensional character model further comprises a five sense organ library construction module for constructing a plurality of five sense organ libraries, which comprises a face shape library construction sub-module, a hair library construction sub-module, an eyebrow library construction sub-module, an eye library construction sub-module, a nose library construction sub-module, and a mouth library construction sub-module. The face shape library construction sub-module retains the face shape by removing the other five sense organs by matting, based on a plurality of character photos with different face shape characteristics, and constructs a face shape library of different face shape characteristics from the retained face shapes. The hair, eyebrow, eye, nose, and mouth library construction sub-modules likewise construct, by matting and based on a plurality of character photos with different five sense organ characteristics, a hair library, an eyebrow library, an eye library, a nose library, and a mouth library with the corresponding different characteristics.
The beneficial effects of the above technical scheme are as follows: a virtual portrait conforming to the virtual character image can be obtained quickly and accurately using the plurality of five sense organ libraries built by the library construction module. This overcomes two defects: manually constructing a virtual image takes a long time, and the image the patient imagines does not exist in real life and therefore cannot be modeled by scanning.
Based on a further improvement of the above device, the control device of the three-dimensional character model further comprises a five sense organ adjusting module for adjusting the five sense organs in the character portrait to obtain a virtual portrait conforming to the virtual character image. The adjusting module comprises an X-axis moving module, a Y-axis moving module, an X-axis scaling module, and a Y-axis scaling module: the moving modules move and adjust the face shape, hair, eyebrows, eyes, nose, and mouth in the character portrait along the X axis and the Y axis respectively, and the scaling modules scale and adjust them along the X axis and the Y axis respectively, each according to the virtual character image.
The beneficial effects of the above technical scheme are as follows: the virtual portrait conforming to the virtual character image is obtained by adjusting the five sense organs in the character portrait.
Based on a further improvement of the above apparatus, the three-dimensional character model building module is further configured for: generating a basic model of a three-dimensional character and producing the Blendshape expression controller and the transform action controller; analyzing the face region in the virtual portrait based on a face recognition algorithm; converting the face region into basic measurement parameters of the face, while matting the image of the face region and applying it to the basic model as a head map; receiving, by the Blendshape expression controller, the basic measurement parameters of the face and dynamically controlling the head shape of the basic model according to them, so as to maximally match the head characteristics in the virtual portrait; and controlling the height, body shape, and limb shape of the basic model through the transform action controller according to the virtual character described by the patient.
Based on further improvement of the device, the Blendshape expression controller comprises a plurality of Blendshape control components, wherein the Blendshape control components are used for respectively controlling the head height, the head width, the skull height, the skull width, the face length, the eyebrow height, the eyebrow width, the eye height, the nose width, the mouth height and the mouth width.
Based on a further improvement of the above apparatus, the control apparatus for a three-dimensional character model further includes a face capture function module configured to: shoot video of the psychotherapist's face in real time through a network camera; generate corresponding marker points at the eyebrows, eyes, pupils, nose, and mouth using a face capture algorithm; track the positions of the marker points in the face video in real time; and convert the marker positions into the face capture values in real time and store them. The facial expression driving module is further configured to: receive the face capture values from the face capture function module; and control a plurality of Blendshape control components of the three-dimensional character model in real time based on the face capture values, so as to simulate the psychotherapist's facial expressions.
In a further refinement of the above apparatus, the plurality of sensors are configured to: be arranged at different positions on the psychotherapist, the different positions including the hands, forearms, upper arms, shoulders, head, back, chest, feet, calves, and thighs; sense, in real time and at each position, the bone motion data while the psychotherapist moves; and convert the bone motion data into bone motion quaternion data and transmit it to the action driving module.
Based on a further improvement of the above apparatus, the action driving module is configured to: receive the bone motion quaternion data; and control the movements of the hands, forearms, upper arms, shoulders, head, back, chest, feet, calves, and thighs of the three-dimensional character model in real time through the transform action controller according to the bone motion quaternion data, so as to simulate the psychotherapist's movements.
Based on a further improvement of the above device, the control device of the three-dimensional character model further comprises a sound changing module, a synchronization module, and an output module. The sound changing module acquires the psychotherapist's original voice in real time and applies, in sequence, superposition modulation and pitch fundamental-frequency sound-change control, so as to convert the original voice into an auditory-hallucination voice simulating the virtual character's voice; the synchronization module synchronizes the auditory-hallucination voice with the facial expression; and the output module presents the auditory-hallucination voice and the virtual character's movements and expressions to the patient simultaneously.
On the other hand, an embodiment of the present invention provides a method for controlling a three-dimensional character model, including: selecting corresponding five sense organs from a plurality of five sense organ libraries and assembling them into a complete character portrait according to the virtual character image described by the patient; producing, based on the virtual portrait, a Blendshape expression controller and a transform action controller and generating a three-dimensional character model simulating the virtual character; controlling the facial expression of the three-dimensional character model in real time through the Blendshape expression controller according to face capture values obtained from a face video; sensing in real time the movements of the psychotherapist wearing a plurality of sensors and converting the movements into bone motion quaternion data; and controlling the whole-body movement of the three-dimensional character model in real time through the transform action controller according to the bone motion quaternion data.
Compared with the prior art, the invention can realize at least one of the following beneficial effects:
1. A virtual portrait conforming to the virtual character image can be obtained quickly and accurately using the five sense organ library construction module, overcoming the defects that manually constructing a virtual image takes a long time and that the image the patient imagines does not exist in real life and cannot be modeled by scanning. A three-dimensional character model fitting the user's imagined virtual character can then be obtained from the virtual portrait.
2. The virtual portrait conforming to the virtual character image is obtained by adjusting the five sense organs in the character portrait.
3. The facial expression driving module and the action driving module control the facial expression and movement of the three-dimensional character model in real time according to the psychotherapist's expressions and movements, and the auditory-hallucination voice and the virtual character's expressions and movements are presented to the patient simultaneously, realizing psychological counseling and treatment and improving the patient's state of psychological stress.
In the invention, the technical schemes can be combined with each other to realize more preferable combination schemes. Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, wherein like reference numerals are used to designate like parts throughout.
Fig. 1 is a block diagram of a control apparatus of a three-dimensional character model according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a library of five sense organs according to an embodiment of the present invention.
FIG. 3 is a diagram of a complete character portrait assembled by selecting corresponding five sense organs from a plurality of five sense organ libraries.
FIG. 4 is a diagram illustrating adjustment of a character portrait to obtain a virtual portrait conforming to the virtual character image.
FIG. 5 is a schematic diagram of a three-dimensional character model generated from a virtual portrait.
Fig. 6 is a schematic diagram of generating face mark points on a face.
Fig. 7 is a structural diagram of a sound-varying module according to an embodiment of the present invention.
Fig. 8 is a flowchart of a method of controlling a three-dimensional character model according to an embodiment of the present invention.
Detailed Description
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate preferred embodiments of the invention and together with the description, serve to explain the principles of the invention and not to limit the scope of the invention.
The invention discloses a control device of a three-dimensional character model. Referring to fig. 1, the control apparatus of the three-dimensional character model includes: a character portrait generation module 102, configured to select corresponding five sense organs from a plurality of five sense organ libraries and assemble them into a complete character portrait according to the virtual character image described by the patient; a three-dimensional character model building module 104, configured to produce a Blendshape expression controller and a transform action controller based on the virtual portrait and generate a three-dimensional character model simulating the virtual character; a facial expression driving module 106, configured to control the facial expression of the three-dimensional character model in real time through the Blendshape expression controller according to face capture values obtained from a face video; a plurality of sensors 108, configured to sense in real time the movements of the psychotherapist wearing them and convert the movements into bone motion quaternion data; and an action driving module 110, configured to control the whole-body movement of the three-dimensional character model in real time through the transform action controller according to the bone motion quaternion data.
Compared with the prior art, the control device of this embodiment obtains a three-dimensional character model fitting the user's imagined virtual character, and the facial expression driving module and the action driving module control its facial expressions and movements in real time according to the psychotherapist's expressions and movements.
Hereinafter, the control apparatus of the three-dimensional character model will be described in detail with reference to fig. 1 to 7.
The control device for a three-dimensional character model includes: the character portrait generation module 102, the three-dimensional character model building module 104, the facial expression driving module 106, the plurality of sensors 108, the action driving module 110, a five sense organ library construction module, a five sense organ adjusting module, a face capture function module, a sound changing module, a synchronization module, and an output module.
Referring to fig. 2, the five sense organ library construction module is used to construct a plurality of five sense organ libraries. It comprises a face shape library construction sub-module, a hair library construction sub-module, an eyebrow library construction sub-module, an eye library construction sub-module, a nose library construction sub-module, and a mouth library construction sub-module. The face shape library construction sub-module retains the face shape by removing the other five sense organs by matting, based on a plurality of character photos with different face shape characteristics, and builds a face shape library of different face shape characteristics from the retained face shapes. The hair, eyebrow, eye, nose, and mouth library construction sub-modules likewise build, by matting and from a plurality of character photos with different five sense organ characteristics, a hair library, an eyebrow library, an eye library, a nose library, and a mouth library with the corresponding characteristics.
Referring to fig. 3, the character portrait generation module 102 is configured to select corresponding five sense organs from the five sense organ libraries and assemble them into a complete character portrait according to the virtual character image described by the patient.
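As an illustration of how such assembly could be implemented, the following Python sketch composites pre-cut, transparent feature images from the libraries onto a single canvas. The library paths, feature names, and canvas size are assumptions for illustration, not the actual virtual portrait software.

```python
# A minimal sketch of portrait assembly from pre-cut feature libraries.
# The library layout and the fixed 512x512 canvas are illustrative
# assumptions, not the software described in the patent.
from PIL import Image

FEATURE_ORDER = ["face", "hair", "eyebrows", "eyes", "nose", "mouth"]

def assemble_portrait(selection, size=(512, 512)):
    """selection: feature name -> path of a transparent PNG cut by matting."""
    canvas = Image.new("RGBA", size, (0, 0, 0, 0))
    for feature in FEATURE_ORDER:                   # composite back to front
        layer = Image.open(selection[feature]).convert("RGBA").resize(size)
        canvas.alpha_composite(layer)               # transparency passes through
    return canvas

portrait = assemble_portrait({
    "face": "library/face/round_03.png",            # hypothetical entries
    "hair": "library/hair/short_01.png",
    "eyebrows": "library/eyebrows/arched_02.png",
    "eyes": "library/eyes/narrow_05.png",
    "nose": "library/nose/straight_04.png",
    "mouth": "library/mouth/thin_06.png",
})
portrait.save("virtual_portrait.png")
```

Because each library entry is matted to transparency, later features simply overlay earlier ones, mirroring the matting-based library construction described above.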
Referring to fig. 4, the five sense organ adjusting module is used for adjusting the five sense organs in the character portrait to obtain a virtual portrait conforming to the virtual character image. Specifically, it comprises an X-axis moving module, a Y-axis moving module, an X-axis scaling module, and a Y-axis scaling module: the moving modules move and adjust the face shape, hair, eyebrows, eyes, nose, and mouth in the character portrait along the X axis and the Y axis respectively, and the scaling modules scale and adjust them along the X axis and the Y axis respectively, each according to the virtual character image.
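The move and scale operations can be pictured as a per-feature placement step applied before compositing. The sketch below, under the same illustrative assumptions as the previous snippet, scales a feature layer along the X and Y axes and offsets it on the canvas.

```python
# Sketch of the X/Y move and zoom adjustment for a single feature layer;
# the parameter names and API are assumptions for illustration only.
from PIL import Image

def place_feature(canvas, layer_path, dx=0, dy=0, sx=1.0, sy=1.0):
    """Scale the feature by (sx, sy) on the X/Y axes, then shift by (dx, dy)."""
    layer = Image.open(layer_path).convert("RGBA")
    w, h = layer.size
    layer = layer.resize((max(1, int(w * sx)), max(1, int(h * sy))))
    # alpha_composite requires a non-negative destination inside the canvas
    canvas.alpha_composite(layer, dest=(max(0, dx), max(0, dy)))

# e.g. enlarge the eyes slightly and reposition them on the canvas:
# place_feature(portrait, "library/eyes/narrow_05.png", dx=96, dy=150, sx=1.2, sy=1.1)
```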
Referring to fig. 5, the three-dimensional character model building module 104 is configured to produce a Blendshape expression controller and a transform action controller based on the virtual portrait and to generate a three-dimensional character model simulating the virtual character. Specifically, the module: generates a basic model of a three-dimensional character and produces the Blendshape expression controller and the transform action controller; analyzes the face region in the virtual portrait based on a face recognition algorithm; converts the face region into basic measurement parameters of the face, while matting the face-region image and applying it to the basic model as a head map, the head map carrying facial details such as wrinkles and moles so as to increase the similarity between the virtual portrait and the three-dimensional character; receives, in the Blendshape expression controller, the basic measurement parameters and dynamically controls the head shape of the basic model according to them so as to maximally match the head characteristics in the virtual portrait; and controls the height, body shape, and limb shape of the basic model through the transform action controller according to the virtual character described by the patient. The basic measurement parameters of the face include head height, head width, skull height, skull width, face length, eyebrow height, eyebrow width, eye height, nose width, mouth height, and mouth width. The Blendshape expression controller comprises a plurality of Blendshape control components that control these parameters respectively; the body and limbs are controlled correspondingly.
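To make the parameter-to-shape step concrete, the basic measurement parameters can be normalized into weights for the corresponding Blendshape control components. The ranges and the 0-100 weight scale below are invented for illustration; the text does not specify them.

```python
# Hypothetical normalization of basic face measurements (in pixels) into
# Blendshape control-component weights on a 0-100 scale. The min/max
# ranges are assumptions, not values from the patent.
MEASUREMENT_RANGES = {
    "head_height": (160, 280), "head_width": (120, 220),
    "skull_height": (80, 160), "skull_width": (110, 200),
    "face_length": (140, 240), "eyebrow_height": (4, 20),
    "eyebrow_width": (30, 80), "eye_height": (8, 30),
    "nose_width": (20, 60), "mouth_height": (8, 30), "mouth_width": (30, 90),
}

def to_blendshape_weights(measurements):
    weights = {}
    for name, value in measurements.items():
        lo, hi = MEASUREMENT_RANGES[name]
        t = (value - lo) / (hi - lo)                 # normalize to 0..1
        weights[name] = round(100.0 * min(max(t, 0.0), 1.0), 1)
    return weights

print(to_blendshape_weights({"head_width": 180, "nose_width": 35}))
# {'head_width': 60.0, 'nose_width': 37.5}
```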
Referring to fig. 6, the face capture function module is configured to shoot video of the psychotherapist's face in real time, to generate and track the face marker points in real time using a face capture algorithm, and to convert the real-time marker points into face capture values representing the different facial features and muscle expressions of the face. Specifically, the module: shoots video of the psychotherapist's face in real time through a network camera; generates corresponding marker points at the eyebrows, eyes, pupils, nose, and mouth using the face capture algorithm; tracks the positions of the marker points in the face video in real time; and converts the marker positions into face capture values in real time and stores them.
The facial expression driving module 106 is configured to control the facial expression of the three-dimensional character model in real time through the Blendshape expression controller according to the face capture values. Specifically, it receives the face capture values from the face capture function module and controls a plurality of Blendshape control components of the three-dimensional character model in real time based on those values, so as to simulate the psychotherapist's facial expressions.
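A per-frame driving loop could look like the following sketch; the controller class is an assumed stand-in for the engine-side Blendshape interface, which the text does not specify.

```python
# Sketch of driving the model's Blendshape control components from face
# capture values each frame; the controller API is an assumed stand-in.
class BlendshapeController:
    """Stand-in for the engine-side expression controller."""
    def __init__(self, component_names):
        self.weights = {name: 0.0 for name in component_names}

    def set_weight(self, name, value):
        self.weights[name] = min(max(value, 0.0), 100.0)    # clamp to range

def drive_expression(controller, capture_values):
    """capture_values: marker-derived values in 0..1, one per component."""
    for name, value in capture_values.items():
        controller.set_weight(name, value * 100.0)          # rescale to 0..100

ctrl = BlendshapeController(["brow_raise", "eye_blink", "jaw_open", "smile"])
drive_expression(ctrl, {"jaw_open": 0.42, "smile": 0.77})   # one captured frame
print(ctrl.weights)
```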
The plurality of sensors 108 are configured to sense in real time the movements of the psychotherapist wearing them and to convert the movements into bone motion quaternion data. Specifically, the sensors are arranged at different positions on the psychotherapist, including the hands, forearms, upper arms, shoulders, head, back, chest, feet, calves, and thighs; when the psychotherapist moves, the sensors sense the bone motion data at each position in real time, convert it into bone motion quaternion data, and transmit the data to the action driving module.
The action driving module 110 is configured to control the whole-body movement of the three-dimensional character model in real time through the transform action controller according to the bone motion quaternion data. It receives the bone motion quaternion data and controls the movements of the hands, forearms, upper arms, shoulders, head, back, chest, feet, calves, and thighs of the three-dimensional character model in real time, so as to simulate the psychotherapist's movements.
Referring to fig. 7, the sound changing module is configured to acquire the psychotherapist's original voice in real time and to apply, in sequence, superposition modulation and pitch fundamental-frequency sound-change control, so as to convert the original voice into an auditory-hallucination voice simulating the virtual character's voice; the synchronization module synchronizes the auditory-hallucination voice with the facial expression; and the output module presents the auditory-hallucination voice and the virtual character's movements and expressions to the patient simultaneously.
Compared with the prior art, in the control device of this embodiment the psychotherapist controls the facial expressions and movements of the three-dimensional character model through the facial expression driving module and the action driving module, while the output module presents the auditory-hallucination voice and the virtual character's expressions and movements to the patient, realizing psychological counseling and treatment; soothing language and movements can improve the patient's state of psychological stress.
Hereinafter, the control device of the three-dimensional character model will be described in detail by way of specific examples with reference to fig. 2 to 7.
1. Obtaining a virtual representation
In manual modeling, the modeler needs reference images from one or more angles to create a relatively realistic character model. However, the virtual image of the patient's auditory and visual hallucinations exists only in the patient's mind, so the patient's imagined character is instead reproduced quickly and accurately through self-developed virtual portrait software.
Although almost no two people look exactly alike overall, individual five sense organs are often similar between different people: for example, A's eyebrows may closely resemble B's, and B's eyes may closely resemble C's. A person's overall appearance can therefore be decomposed by five sense organ and category. For each organ, eight typical subclasses are selected; typical photos of people exhibiting each subclass are collected and shot; the organ image represented by each subclass is extracted by technical means; and the results are permuted and combined to realize the virtual portrait.
For example, face shapes are divided into square, round, long, pointed, 国-character (broad and squarish), melon-seed (oval), and plump faces. Under each face shape, images of different people with that characteristic are collected and shot, and a face shape library, with all the other five sense organs removed so that only the face remains, is built for later retrieval. The other five sense organs are handled in the same way.
Referring to fig. 2, the software operates as follows: all the materials of the five sense organs reside in their respective libraries; clicking a material displays its picture on the canvas on the right, and a complete portrait is assembled by selecting the face shape, hair, eyebrows, eyes, mouth, and nose in turn. Each of the five sense organs supports the adjustment options on the right and can be moved and zoomed along the X and Y axes to fit a richer range of character images.
At present, 1,500 five sense organ materials are preset in the portrait library, and more will be added continuously. By assembling items from these large libraries, a virtual portrait conforming to the patient's imagined image can be obtained in a relatively short time. The final effect of the virtual portrait software is shown in FIG. 3.
2. Three-dimensional model building based on virtual portrait
After the patient's description of the virtual image is captured within 15-30 minutes using the rapid virtual portrait software, a three-dimensional character model based on the portrait is produced.
The three-dimensional character model must reach more than 80% facial similarity with the virtual image, and, to support the later face capture and whole-body capture, it must carry facial BlendShape expression animation.
Meanwhile, to meet clinical requirements, the whole production cycle must be completed within one day.
Given these requirements, a traditional manual modeling scheme cannot be used. We therefore chose a one-click photo-to-model method for rapid production of character models.
The underlying principle of one-click model generation from a photo is a face recognition algorithm. The algorithm first locates the face region in the photo, then converts that region into basic measurement parameters of the face; at the same time, the face-region image is extracted through a specific mask and used as the head map of the generated character, which increases the similarity between the photo's virtual portrait and the three-dimensional character.
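As a hedged illustration of the step that turns the located face region into basic measurement parameters, the sketch below computes distances between named landmark points; the landmark names and the detector output format are assumptions, since the text names only "a face recognition algorithm".

```python
# Illustrative derivation of basic face measurements from named landmark
# points; `landmarks` stands in for the output of an unspecified face
# recognition backend, and the landmark names are hypothetical.
import numpy as np

def measure_face(landmarks):
    """landmarks: dict of (x, y) pixel points detected on the photo."""
    p = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    dist = lambda a, b: float(np.linalg.norm(p[a] - p[b]))
    return {
        "face_length":  dist("forehead", "chin"),
        "head_width":   dist("cheek_left", "cheek_right"),
        "eye_height":   dist("eye_top_left", "eye_bottom_left"),
        "nose_width":   dist("nostril_left", "nostril_right"),
        "mouth_width":  dist("mouth_left", "mouth_right"),
        "mouth_height": dist("lip_top", "lip_bottom"),
    }
```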
One-click generation of the three-dimensional character model requires a basic model as reference: bones and skin must be bound to it, a Blendshape animation controller must be built for the face, and additional head Blendshape control strips (also called Blendshape control components) must be produced. These receive the parameter features obtained when the face recognition algorithm analyzes the photo, such as head width, nose width, eye height, and mouth width and height, and then dynamically control the head shape of the basic model so as to match the head characteristics in the photo as closely as possible.
Through these two steps, a head model resembling the photo can be generated quickly, and the whole modeling process can be kept within one day. The photo-to-model effect is shown in figs. 4 and 5.
3. Face capture function creation
Building the face capture function module of the three-dimensional virtual character first requires a face capture algorithm; the Faceware face capture algorithm is adopted. A simple WebCamera network camera is connected to the computer over USB and shoots video of the face in real time. Combined with the face capture algorithm, marker points are generated at the eyebrows, eyes, pupils, mouth, and nose; their positions are tracked in real time (see fig. 6) and converted into numeric data, and the data generated by each control point is stored in real time.
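A minimal capture loop in this spirit is sketched below. The landmark detector is a hypothetical stand-in for the Faceware algorithm named above, and normalizing marker positions by the frame size into 0-1 capture values is an assumption.

```python
# Minimal webcam capture loop; `detect_landmarks` is a hypothetical
# stand-in for the face capture algorithm, and the normalization of
# marker positions into capture values is an illustrative assumption.
import cv2

def normalize(point, shape):
    h, w = shape[:2]
    return (point[0] / w, point[1] / h)         # pixel position -> 0..1 value

def capture_loop(detect_landmarks, on_values):
    cam = cv2.VideoCapture(0)                   # USB network camera
    try:
        while True:
            ok, frame = cam.read()
            if not ok:
                break
            marks = detect_landmarks(frame)     # eyebrow/eye/pupil/nose/mouth points
            values = {k: normalize(pt, frame.shape) for k, pt in marks.items()}
            on_values(values)                   # store and forward every frame
            if cv2.waitKey(1) == 27:            # Esc stops capture
                break
    finally:
        cam.release()
```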
4. Performance through face capture value mapping
The values of the different facial features and muscle expressions obtained by the face capture algorithm from the network camera are transmitted and mapped onto the corresponding facial Blendshape animation interface of the three-dimensional virtual character, which has the same structure. The data captured from the real face thus controls, in real time, the values of the Blendshape control strips built into the three-dimensional model, so that the real person's facial expressions are performed and simulated; finally the virtual character displays the performed expression, and the real person drives the virtual character.
5. Whole body motion capture
Whole-body motion capture builds on the face capture and uses an inertial system. The psychotherapist straps 18 sensors to the body: two on the hands, two on the forearms, two on the upper arms, one on the head, two on the shoulders, two on the feet, two on the calves, two on the thighs, two on the back, and one on the chest. This inertial setup is more stable than an Azure Kinect hardware setup. When the psychotherapist moves, the sensors sense the bone motion data at each position in real time, convert it into bone motion quaternion data, and transmit the data to the action driving module.
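The per-frame data such a sensor rig could emit is sketched below. The 18 attachment points follow the list above (the exact left/right split is an assumption), and the packet format is invented for illustration.

```python
# Sketch of the per-frame data 18 inertial sensors could emit. The
# attachment points follow the text (left/right split assumed); the
# wire format is an illustrative assumption.
from dataclasses import dataclass

SENSOR_POSITIONS = [
    "hand_l", "hand_r", "forearm_l", "forearm_r",
    "upperarm_l", "upperarm_r", "head",
    "shoulder_l", "shoulder_r", "foot_l", "foot_r",
    "calf_l", "calf_r", "thigh_l", "thigh_r",
    "back_upper", "back_lower", "chest",
]                                               # 18 sensors in total

@dataclass
class BonePacket:
    position: str
    w: float                                    # orientation quaternion (w, x, y, z)
    x: float
    y: float
    z: float

def frame_packets(raw_frame):
    """raw_frame: dict position -> (w, x, y, z) read from the sensor bus."""
    return [BonePacket(pos, *raw_frame[pos])
            for pos in SENSOR_POSITIONS if pos in raw_frame]
```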
6. Whole-body motion driving
The bone motion quaternion data is transmitted in real time to the Transform controllers of the three-dimensional character's bones, realizing the whole-body motion capture function. The movements of the hands, forearms, upper arms, shoulders, head, back, chest, feet, calves, and thighs of the three-dimensional character model are controlled in real time through the transform action controller so as to simulate the psychotherapist's movements.
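On the receiving side, each incoming quaternion can be converted to a rotation and written to the matching bone, as in the sketch below; the bone object is an assumed stand-in for the engine's Transform interface.

```python
# Sketch of the Transform-controller side: each quaternion becomes a
# rotation written to the matching bone. The bone objects are an assumed
# stand-in for the engine used in practice.
import numpy as np

def quat_to_matrix(w, x, y, z):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix."""
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def drive_skeleton(bones, frame):
    """bones: dict position -> bone with a writable .rotation matrix;
    frame: dict position -> (w, x, y, z) quaternion, one entry per sensor."""
    for position, (w, x, y, z) in frame.items():
        bone = bones.get(position)
        if bone is not None:
            bone.rotation = quat_to_matrix(w, x, y, z)
```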
7. Sound changing module
In clinical treatment, each patient's imagined virtual character usually has a different voice, yet one therapist is responsible for diagnosing and treating many patients. The therapist's voice must therefore be changed to meet different patients' expectations of the virtual character's voice.
Pitch, the fundamental tone, is the vocal-cord vibration frequency; controlling Pitch changes how low or high the voice sounds. The Formant (resonance peak) is the resonance frequency inherent to the vocal apparatus formed by the throat, oral cavity, nasal cavity, tongue, and cheeks; controlling the Formant changes the resonant-frequency content, i.e. the timbre, of the voice.
Referring to fig. 7, the voice input is acquired with a microphone, the values of Pitch and Formant are changed in real time, and the changed voice is output through a speaker. The voice-changing system includes a patient management and information storage system: once voice changing has been configured for a patient, the settings can be saved and called up directly at the next treatment.
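As a rough sketch of the two controls, the snippet below uses librosa's pitch_shift for the Pitch slider and a crude magnitude-spectrum warp as a stand-in for Formant control. The slider-to-semitone mapping and the warp are assumptions, not the device's actual DSP chain, which the text describes only as superposition modulation plus pitch fundamental-frequency control.

```python
# Rough sketch of Pitch and Formant control. librosa.effects.pitch_shift
# handles the Pitch slider; formant_shift is a crude magnitude-spectrum
# warp, an assumed stand-in for real formant processing.
import numpy as np
import librosa

def formant_shift(y, factor):
    """Warp each STFT frame's magnitude spectrum by `factor` (very crude)."""
    D = librosa.stft(y)
    mag, phase = np.abs(D), np.angle(D)
    bins = np.arange(mag.shape[0])
    warped = np.stack([np.interp(bins / factor, bins, frame)
                       for frame in mag.T], axis=1)
    return librosa.istft(warped * np.exp(1j * phase))

def change_voice(y, sr, pitch=1.0, formant=1.0):
    """pitch slider 1-3 and formant slider 1-5, per the ranges given below."""
    out = librosa.effects.pitch_shift(y, sr=sr, n_steps=12 * np.log2(pitch))
    if formant != 1.0:
        out = formant_shift(out, formant)
    return out
```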
(1) Microphone switch: clicking turns the microphone on or off.
(2) The Pitch adjustment slider adjusts a float-type single-precision value in the range 1-3, kept to two decimal places.
(3) The Formant adjustment slider adjusts a float-type single-precision value in the range 1-5, kept to two decimal places.
(4) Original-voice function: the treatment stage requires a three-party diagnosis-and-treatment mode, in which the therapist plays the virtual character of the patient's hallucination and communicates with the patient through the changed voice. According to changes in the patient's psychological state during treatment, the therapist chooses whether to intervene in his or her own identity; at that moment the therapist's live image must be displayed, the software's voice-changing function must be turned off, and the therapist appears with his or her original voice.
The original-voice button is responsible for this part. Clicking it pops up the therapist's real-time picture on the left of the screen and turns off the software's voice-changing function, together with the face capture and whole-body capture functions, so that the therapist intervenes in the three-party treatment in his or her own identity.
(5) Voice-changing function: the opposite of the original-voice function. At any time during treatment, according to the patient's psychological state, the therapist can choose to resume treatment in the virtual character's image. Clicking this button closes the therapist's real-time picture on the left, reconnects the face capture and whole-body capture modules, and turns the voice changer back on, so the therapist again intervenes in the identity of the patient's imagined character.
(6) Saving voice presets: the therapist changes the voice in real time by dragging the Pitch and Formant sliders provided by the voice-changer module, repeatedly adjusting the two parameters with test speech while the patient selects and confirms. When the patient considers that a given pair of Pitch and Formant values best matches the voice of the imagined character, clicking the save-preset button stores that group of preset values, as the voice parameters of the patient's imagined character, under the patient's database record (a storage sketch follows after item (7)). When the next diagnosis-and-treatment session begins, the software automatically reads the preset values and changes the voice accordingly.
(7) To enable rapid treatment, the voice-changer module presets two groups of quick preset buttons: male-to-female and female-to-male. These break the limitation of the therapist's sex, so that a male therapist can play a female virtual character and a female therapist can voice a male virtual character.
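The per-patient preset store described in item (6) could be as simple as the following sketch; the JSON layout and patient identifier are illustrative assumptions.

```python
# Sketch of the per-patient preset store described in item (6); the file
# layout and the patient id are illustrative assumptions.
import json
from pathlib import Path

PRESET_DIR = Path("patient_presets")

def save_preset(patient_id, pitch, formant):
    PRESET_DIR.mkdir(exist_ok=True)
    preset = {"pitch": round(pitch, 2), "formant": round(formant, 2)}
    (PRESET_DIR / f"{patient_id}.json").write_text(json.dumps(preset))

def load_preset(patient_id):
    path = PRESET_DIR / f"{patient_id}.json"
    return json.loads(path.read_text()) if path.exists() else None

save_preset("patient_007", 1.85, 2.40)          # hypothetical patient record
print(load_preset("patient_007"))               # {'pitch': 1.85, 'formant': 2.4}
```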
In another embodiment of the present invention, a method for controlling a three-dimensional character model is disclosed. Referring to fig. 8, the method includes: step S802, selecting corresponding five sense organs from a plurality of five sense organ libraries and assembling them into a complete character portrait according to the virtual character image described by the patient; step S804, producing, based on the virtual portrait, a Blendshape expression controller and a transform action controller and generating a three-dimensional character model simulating the virtual character; step S806, controlling the facial expression of the three-dimensional character model in real time through the Blendshape expression controller according to face capture values obtained from a face video; step S808, sensing in real time the movements of the psychotherapist wearing a plurality of sensors and converting the movements into bone motion quaternion data; and step S810, controlling the whole-body movement of the three-dimensional character model in real time through the transform action controller according to the bone motion quaternion data.
Compared with the prior art, the invention can realize at least one of the following beneficial effects:
1. A virtual portrait conforming to the virtual character image can be obtained quickly and accurately using the five sense organ library construction module, overcoming the defects that manually constructing a virtual image takes a long time and that the image the patient imagines does not exist in real life and cannot be modeled by scanning. A three-dimensional character model fitting the user's imagined virtual character can then be obtained from the virtual portrait.
2. The virtual portrait conforming to the virtual character image is obtained by adjusting the five sense organs in the character portrait.
3. The facial expression driving module and the action driving module control the facial expression and movement of the three-dimensional character model in real time according to the psychotherapist's expressions and movements, and the auditory-hallucination voice and the virtual character's expressions and movements are presented to the patient simultaneously, realizing psychological counseling and treatment and improving the patient's state of psychological stress.
Those skilled in the art will appreciate that all or part of the flow of the methods of the above embodiments may be implemented by a computer program instructing related hardware, the program being stored in a computer-readable storage medium such as a magnetic disk, an optical disk, a read-only memory, or a random-access memory.
The above description covers only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any changes or substitutions that those skilled in the art can easily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. An apparatus for controlling a three-dimensional character model, comprising:
a character portrait generating module, used for selecting corresponding five sense organs from a plurality of five sense organ libraries and assembling them into a complete character portrait according to the virtual character image described by the patient;
a three-dimensional character model building module, used for producing a Blendshape expression controller and a transform action controller based on the virtual portrait and generating a three-dimensional character model simulating the virtual character;
a facial expression driving module, used for controlling the facial expression of the three-dimensional character model in real time through the Blendshape expression controller according to face capture values obtained from a face video;
a plurality of sensors, used for sensing in real time the movements of the psychotherapist wearing the plurality of sensors and converting the movements into bone motion quaternion data; and
an action driving module, used for controlling the whole-body movement of the three-dimensional character model in real time through the transform action controller according to the bone motion quaternion data.
2. The control apparatus of a three-dimensional character model according to claim 1, further comprising a five sense organ library construction module for constructing a plurality of five sense organ libraries, the construction module comprising a face shape library construction sub-module, a hair library construction sub-module, an eyebrow library construction sub-module, an eye library construction sub-module, a nose library construction sub-module, and a mouth library construction sub-module, wherein,
the face shape library construction sub-module is used for retaining the face shape by removing the other five sense organs by matting, based on a plurality of character photos with different face shape characteristics, and for constructing a face shape library of different face shape characteristics from the retained face shapes; and
the hair library construction sub-module, the eyebrow library construction sub-module, the eye library construction sub-module, the nose library construction sub-module, and the mouth library construction sub-module are used for respectively constructing, by matting and based on a plurality of character photos with different five sense organ characteristics, a hair library, an eyebrow library, an eye library, a nose library, and a mouth library with the corresponding different characteristics.
3. The control apparatus of a three-dimensional character model according to claim 2, further comprising a five sense organ adjusting module for adjusting the five sense organs in the character portrait to obtain a virtual portrait conforming to the virtual character image, the adjusting module comprising an X-axis moving module, a Y-axis moving module, an X-axis scaling module, and a Y-axis scaling module, wherein,
the X-axis moving module is used for moving and adjusting the face shape, the hair, the eyebrows, the eyes, the nose, and the mouth in the character portrait along the X-axis direction according to the virtual character image;
the Y-axis moving module is used for moving and adjusting the face shape, the hair, the eyebrows, the eyes, the nose, and the mouth in the character portrait along the Y-axis direction according to the virtual character image;
the X-axis scaling module is used for scaling and adjusting the face shape, the hair, the eyebrows, the eyes, the nose, and the mouth in the character portrait along the X-axis direction according to the virtual character image; and
the Y-axis scaling module is used for scaling and adjusting the face shape, the hair, the eyebrows, the eyes, the nose, and the mouth in the character portrait along the Y-axis direction according to the virtual character image.
4. The control apparatus of a three-dimensional character model according to claim 1, wherein the three-dimensional character model building module is further configured for:
generating a basic model of a three-dimensional character and producing the Blendshape expression controller and the transform action controller;
analyzing the face region in the virtual portrait based on a face recognition algorithm;
converting the face region into basic measurement parameters of the face, while matting the image of the face region and applying it to the basic model as a head map;
receiving, by the Blendshape expression controller, the basic measurement parameters of the face and dynamically controlling the head shape of the basic model according to them, so as to maximally match the head characteristics in the virtual portrait; and
controlling the height, body shape, and limb shape of the basic model through the transform action controller according to the virtual character described by the patient.
5. The control apparatus of a three-dimensional character model according to claim 4, wherein the Blendshape expression controller comprises a plurality of Blendshape control components, and the plurality of Blendshape control components are configured to respectively control the head height, head width, skull height, skull width, face length, eyebrow height, eyebrow width, eye height, nose width, mouth height, and mouth width.
6. The apparatus for controlling a three-dimensional character model according to claim 1, further comprising a face capture function module configured to:
shooting the video of the face of the psychotherapist in real time through a network camera;
generating corresponding marker points at the eyebrows, eyes, pupils, nose, and mouth using a face capture algorithm;
tracking the position of the corresponding mark point in real time in the video of the face of the psychotherapist; and
converting the positions of the corresponding mark points into the face capturing numerical values in real time and storing the face capturing numerical values; and
the facial expression driver module is further configured to:
receiving the face capture value from the face capture function module; and
controlling a plurality of Blendshape control components of the three-dimensional character model in real-time based on the face capture values to simulate facial expressions of the psychotherapist.
7. The apparatus for controlling a three-dimensional character model according to claim 1, wherein the plurality of sensors are configured to:
respectively arranging the plurality of sensors at different positions on a psychotherapist, wherein the different positions comprise the hands, forearms, upper arms, shoulders, head, back, chest, feet, calves, and thighs;
the plurality of sensors respectively sense the bone motion data of the different positions in real time when the psychotherapist moves; and
and converting the bone motion data into bone motion quaternion data and transmitting the bone motion quaternion data to the action driving module.
8. The apparatus for controlling a three-dimensional character model according to claim 1, wherein the motion driver module is configured to:
receiving the bone motion quaternion data;
controlling the movements of the hands, forearms, upper arms, shoulders, head, back, chest, feet, calves, and thighs of the three-dimensional character model in real time through the transform action controller according to the bone motion quaternion data, so as to simulate the psychotherapist's movements.
9. The apparatus for controlling a three-dimensional character model according to claim 1, further comprising a voice changing module, a synchronization module, and an output module, wherein:
the voice changing module is configured to acquire the psychotherapist's original voice in real time and to apply superposition modulation and pitch/fundamental-frequency voice changing to it, so as to convert the original voice into a simulated voice imitating the virtual character's voice;
the synchronization module is configured to synchronize the simulated voice with the facial expressions; and
the output module is configured to present the simulated voice and the virtual character's movements and expressions to the patient simultaneously (a voice-changing sketch follows this claim).
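A minimal sketch of the pitch/fundamental-frequency voice changing named in claim 9, using naive resampling plus a superposed amplitude modulation. A production voice changer would add time-stretching (e.g. a phase vocoder) to preserve duration; the parameter values here are purely illustrative.

```python
import numpy as np

def pitch_shift(samples: np.ndarray, semitones: float) -> np.ndarray:
    """Naive fundamental-frequency shift by resampling; this also changes
    duration, which a real voice changer would compensate with
    time-stretching (e.g. a phase vocoder)."""
    ratio = 2.0 ** (semitones / 12.0)
    idx = np.arange(0, len(samples) - 1, ratio)
    return np.interp(idx, np.arange(len(samples)), samples)

def disguise_voice(samples: np.ndarray, sr: int = 16000,
                   semitones: float = 4.0, tremolo_hz: float = 5.0) -> np.ndarray:
    """Pitch-shift the voice and superpose a slow amplitude modulation."""
    out = pitch_shift(samples, semitones)
    t = np.arange(len(out)) / sr
    return out * (0.8 + 0.2 * np.sin(2 * np.pi * tremolo_hz * t))
```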
10. A method for controlling a three-dimensional character model, comprising:
selecting corresponding facial features from a plurality of facial feature libraries to assemble a complete portrait according to the virtual character described by the patient;
creating, based on the virtual portrait, a Blendshape expression controller and a Transform motion controller and generating a three-dimensional character model simulating the virtual character;
controlling the facial expression of the three-dimensional character model in real time through the Blendshape expression controller according to face capture values obtained from facial video;
sensing in real time, through a plurality of sensors worn by the psychotherapist, the psychotherapist's motions and converting them into bone motion quaternion data; and
controlling the whole-body motion of the three-dimensional character model in real time through the Transform motion controller according to the bone motion quaternion data (a sketch of the feature-library selection in the first step follows this claim).
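The first step of claim 10 assembles a portrait by picking one asset per facial feature library. A toy sketch, with entirely hypothetical library contents and a fall-back-to-default selection rule:

```python
# Illustrative sketch only: real libraries would hold mesh/texture assets.
FEATURE_LIBRARIES = {
    "eyes":     ["round", "narrow", "almond"],
    "eyebrows": ["straight", "arched", "thick"],
    "nose":     ["small", "wide", "aquiline"],
    "mouth":    ["thin", "full", "wide"],
    "ears":     ["small", "large", "pointed"],
}

def assemble_portrait(description: dict) -> dict:
    """Pick, per feature, the described variant if the library has it,
    falling back to the first (default) entry otherwise."""
    portrait = {}
    for feature, variants in FEATURE_LIBRARIES.items():
        choice = description.get(feature)
        portrait[feature] = choice if choice in variants else variants[0]
    return portrait

# e.g. assemble_portrait({"eyes": "narrow", "mouth": "full"})
```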
CN202011065117.XA 2020-09-30 2020-09-30 Control device and method of three-dimensional character model Pending CN112150617A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011065117.XA CN112150617A (en) 2020-09-30 2020-09-30 Control device and method of three-dimensional character model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011065117.XA CN112150617A (en) 2020-09-30 2020-09-30 Control device and method of three-dimensional character model

Publications (1)

Publication Number Publication Date
CN112150617A true CN112150617A (en) 2020-12-29

Family

ID=73952305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011065117.XA Pending CN112150617A (en) 2020-09-30 2020-09-30 Control device and method of three-dimensional character model

Country Status (1)

Country Link
CN (1) CN112150617A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114793286A (en) * 2021-01-25 2022-07-26 上海哔哩哔哩科技有限公司 Video editing method and system based on virtual image
CN112819932A (en) * 2021-02-24 2021-05-18 上海莉莉丝网络科技有限公司 Method and system for manufacturing three-dimensional digital content and computer readable storage medium
CN112819932B (en) * 2021-02-24 2022-11-22 上海莉莉丝网络科技有限公司 Method, system and storage medium for manufacturing three-dimensional digital content
CN113763518A (en) * 2021-09-09 2021-12-07 北京顺天立安科技有限公司 Multi-mode infinite expression synthesis method and device based on virtual digital human
CN115526966A (en) * 2022-10-12 2022-12-27 广州鬼谷八荒信息科技有限公司 Method for realizing virtual character expression display by scheduling five-sense-organ components

Similar Documents

Publication Publication Date Title
CN112150617A (en) Control device and method of three-dimensional character model
CN110070944B (en) Social function assessment training system based on virtual environment and virtual roles
JP7344894B2 (en) Facial expressions from eye-tracking cameras
CN112164135A (en) Virtual character image construction device and method
Parke et al. Computer facial animation
US10249391B2 (en) Representation of symptom alleviation
JP2022159436A5 (en)
US7771343B2 (en) System and method for treating chronic pain
CN101149840A (en) Complex expression emulation system and implementation method
US20190374741A1 (en) Method of virtual reality system and implementing such method
US10204525B1 (en) Suggestion-based virtual sessions engaging the mirror neuron system
WO2015027286A1 (en) A medical training simulation system and method
Takacs Special education and rehabilitation: teaching and healing with interactive graphics
Niewiadomski et al. Towards multimodal expression of laughter
Shtern et al. A game system for speech rehabilitation
Pelachaud et al. Final report to NSF of the standards for facial animation workshop
CN112133409A (en) Virtual diagnosis and treatment system and method
CN113035000A (en) Virtual reality training system for central integrated rehabilitation therapy technology
Haber et al. Facial modeling and animation
Tarabalka et al. Can you "read tongue movements"? Evaluation of the contribution of tongue display to speech understanding
CN207886596U (en) A kind of VR rehabilitation systems based on mirror neuron
CN108648796A (en) A kind of virtual reality mirror image therapeutic equipment
Takacs Cognitive, Mental and Physical Rehabilitation Using a Configurable Virtual Reality System.
CN111667906A (en) Eyeball structure virtual teaching system and digital model establishing method thereof
Lai et al. Application of biometric technologies in biomedical systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220318

Address after: 030032 room 0505, 5 / F, block C, No. 529, South Central Street, Xuefu Industrial Park, Shanxi comprehensive transformation and reform demonstration zone, Taiyuan City, Shanxi Province

Applicant after: Shanxi Zhiquan Medical Technology Co.,Ltd.

Address before: 5104, 10th floor, Zhonghai huanyutianxia, No.8, xinjinci Road, Wanbailin District, Taiyuan City, Shanxi Province

Applicant before: Shanxi Zhiyou Limin Health Management Consulting Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20231122

Address after: No. 33 Industrial Park, Zhangsanzhai Town, Changyuan City, Xinxiang City, Henan Province, 453414

Applicant after: Xinzhixin (Henan) Medical Technology Co.,Ltd.

Address before: 030032 room 0505, 5 / F, block C, No. 529, South Central Street, Xuefu Industrial Park, Shanxi comprehensive transformation and reform demonstration zone, Taiyuan City, Shanxi Province

Applicant before: Shanxi Zhiquan Medical Technology Co.,Ltd.
