CN106175780A - Facial muscle motion capture and analysis system and analysis method - Google Patents
Facial muscle motion capture and analysis system and analysis method
- Publication number
- CN106175780A CN106175780A CN201610549697.7A CN201610549697A CN106175780A CN 106175780 A CN106175780 A CN 106175780A CN 201610549697 A CN201610549697 A CN 201610549697A CN 106175780 A CN106175780 A CN 106175780A
- Authority
- CN
- China
- Prior art keywords
- point
- observation
- host computer
- image
- motion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0062—Arrangements for scanning
- A61B5/0064—Body surface scanning
Abstract
The present invention relates to a facial muscle motion capture and analysis system and an analysis method thereof. The system includes a motion capture device and a host computer. The motion capture device includes a programmable gate array (FPGA) and a vision image sensor; the programmable gate array sends trigger signals to the vision image sensor. The host computer receives the processed data of the image pairs captured by the vision image sensor; extracts, tracks, and three-dimensionally reconstructs the feature points of each image pair; automatically identifies and numbers the facial observation points; performs predictive tracking computation with error correction; automatically detects erroneous data, corrects it, and fills in missing data according to a model; and analyzes the three-dimensional motion of the facial observation points and outputs a corresponding report, thereby achieving low-cost, high-efficiency, high-reliability, and high-precision real-time motion tracking, reconstruction, and result analysis.
Description
Technical field
The invention belongs to the technical field of motion capture and tracking, and relates to facial motion capture and tracking, in particular to a facial muscle motion capture and analysis system and an analysis method thereof.
Background technology
Facial movements are subtle and complex. For facial-expression motion capture, the limitations of sensor size and placement, together with the high precision required for tracking and measuring fine facial detail, make traditional motion capture techniques (mechanical, electromagnetic, acoustic, accelerometer-based, etc.) difficult to apply. Optical motion capture has therefore become the most mature and widely used technology for facial motion capture.
Because it keeps the form of the tracked feature points stable and eliminates the instability caused by varying illumination, marker-based facial motion capture has already found mature application in film and animation production. Its principle is to use multi-camera stereo vision to perform high-speed synchronized shooting, feature tracking, and accurate real-time three-dimensional reconstruction of highly reflective markers attached to the subject's surface, recording and replaying the motion trajectories of key points. The method has unique technical advantages: any specified point or region of interest on the face can be tracked reliably and stably; motion measurement is accurate, with precision reaching 0.1 mm at 1 m; and real-time online data preparation and analysis can be achieved simply and conveniently, without complicated technical operations, while maintaining high precision and reliability.
Marker-based facial motion capture technology is mostly an extension of body motion capture and does not differ from it in principle. Some existing schemes achieve considerable results in facial capture, but are expensive and flawed. For example, the scheme used by high-end motion capture brands such as Vicon binds markers to the face while capturing the body, and uses a high-resolution system to capture body and facial detail synchronously. The advantages of this technology are high capture efficiency, simple implementation, and highly synchronized body and facial motion data; the drawback is that, because the facial detail markers are numerous, tracking of the dense facial markers in the large field of view suited to body capture easily fails, causing a large number of point-swap problems and extremely tedious post-processing, which has become the biggest obstacle to practical application. The OptiTrack scheme uses a desktop motion capture system, adjusting the effective capture range to a size suitable for the face and capturing the face with fixed, face-oriented cameras. This technology likewise suffers from extensive point loss and point-swap problems, affecting the stability and accuracy of the final data.
In recent years, facial motion capture technology has been applied in many fields. Facial paralysis not only seriously affects a person's appearance, but also changes their social behavior, work, and daily life. However, the lack of an accurate, objective evaluation system for facial nerve function has constrained the diagnosis, treatment, and study of facial paralysis. Clinical practice and clinical research on facial paralysis also differ considerably: clinical practice is generally qualitative and flexible, whereas clinical research should be quantitative and conclusive. An analysis method based on three-dimensional dynamic images can provide static measurements and all the dynamic parameters of facial muscle movement at the observation points, such as displacement, direction, velocity, and acceleration; analyzing these parameters can reflect the function of the facial muscles and the facial nerve. In medical research, marker-based feature point positioning and tracking can play to its strengths in morphometric analysis of facial muscle key points of interest (such as a particular point on the cheek), and the technology can meet the requirement of real-time online measurement of facial muscle movements; it has been used by some facial paralysis researchers.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art and provide a facial muscle motion capture and analysis system and analysis method for medical use, capable of quickly, conveniently, and accurately capturing the trajectories of a dense set of facial feature points moving continuously in three-dimensional space, performing reconstruction and automatic numbering, analyzing the data, resolving kinematic parameters, and printing a result report, thereby providing an objective basis for clinical research or diagnostic analysis in medicine.
The technical scheme adopted by the present invention to solve the technical problem is:
A facial muscle motion capture and analysis system includes a motion capture device and a host computer. The motion capture device includes a programmable gate array and a vision image sensor. The host computer is connected to the programmable gate array and to the vision image sensor, and the programmable gate array is connected to the vision image sensor. The programmable gate array receives the predetermined pulse trigger settings and exposure time sent by the host computer, and sends trigger signals to the vision image sensor accordingly. The vision image sensor receives the trigger pulse signal sent by the programmable gate array, exposes and acquires an image pair, preprocesses the data, and sends the processed data to the host computer. The host computer extracts, predictively tracks, and three-dimensionally reconstructs the feature points uploaded by the vision image sensor, and automatically identifies and numbers them; automatically repairs feature observation points lost or missed during tracking, and removes rigid-body displacement; and generates a report on the captured three-dimensional motion of the facial observation points.
Moreover, the image preprocessing consists of segmenting the image in the programmable gate array, retaining only small sub-images around the bright markers and discarding the black background image data that makes up most of the image.
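A minimal sketch of this marker-only segmentation, assuming the grayscale image arrives as a 2-D list of pixel intensities; the brightness threshold, 4-neighbour connectivity, and crop margin are illustrative choices, not values given by the patent:

```python
def marker_crops(img, thresh=200, margin=1):
    """Return bounding boxes (r0, c0, r1, c1) of bright-marker blobs.

    img: grayscale image as a list of equal-length rows of ints.
    Pixels >= thresh count as marker pixels; everything else is treated
    as black background and discarded. 4-connected pixels are grouped
    into one blob; each box is padded by `margin` and clipped to the image.
    """
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if img[r][c] >= thresh and not seen[r][c]:
                # flood-fill one blob with an explicit stack
                stack, r0, c0, r1, c1 = [(r, c)], r, c, r, c
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    r0, r1 = min(r0, y), max(r1, y)
                    c0, c1 = min(c0, x), max(c1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] >= thresh
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((max(r0 - margin, 0), max(c0 - margin, 0),
                              min(r1 + margin, h - 1), min(c1 + margin, w - 1)))
    return boxes
```

Transmitting only these small crops instead of the full frame is what yields the bandwidth reduction described later in the embodiment.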
Moreover, the vision image sensor includes multiple optical cameras, which are calibrated by the host computer; a lens distortion mathematical model is added to the calibration computation, and a fast nonlinear large-sparse-matrix method is used to solve the intrinsic and extrinsic parameters of the multi-camera vision system.
Moreover, the vision image sensor includes an auxiliary lighting device, which is an annular LED lamp mounted coaxially around the outside of the camera's optical lens and connected to the camera through the camera's own signal input/output interface; the strobe signal output by this interface drives the LED to flash synchronously with camera acquisition.
Moreover, the host computer's automatic identification and numbering of feature points proceeds as follows: first the positions of reference points among the observation points are determined, and the remaining specific positions are found among the unconfirmed observation points. The two points with the greatest separation in the x-axis direction are the ears; the mouth is at the bottom of the face, in the z-axis direction, and the four mouth points are assigned to the major or minor axis according to the maximum/minimum-distance principle. The maximum distance among the three lowest remaining observation points determines the positions of the left and right nose wings, with the nose tip in the middle. A mid-plane is determined by the midpoint of the line between the two ears, the nose tip, and the lower lip; among the points above, the one closest to the mid-plane is the nose bridge and the second closest is the glabella. Among the remaining observation points, the two closest to the mid-plane are the left and right inner eye corners, distinguished left/right by their corresponding x values; the left and right eyebrow centers are determined by the minimum z value, again distinguished left/right by x; the points farthest from the mid-plane are the left and right outer eye corners, distinguished by x. Among the remaining observation points, taking the nose tip as reference, the maximum and minimum z values on the left side determine the left upper and lower eyelids, and likewise for the right; the automatic identification process then ends.
Moreover, the host computer automatically repairs feature observation points lost or missed during tracking: the three-dimensional coordinates of such a point are recovered from the three-dimensional spatial coordinates of the other points in the rigid structure to which it belongs.
Moreover, the host computer removes rigid-body displacement by selecting multiple observation points as a reference base: when the whole head is displaced, the relative positions of the reference points are unchanged, so according to the approximate transformation relation [R1 | T1] between the reference points of successive frames, the corresponding transformation is applied to the three-dimensional spatial coordinates of all observation points, removing the rigid-body displacement.
Moreover, the report the host computer generates for the captured three-dimensional motion of the facial observation points includes, for each observation point, the interval maximum displacement, current-frame velocity, and acceleration.
An analysis method of the facial muscle motion capture and analysis system comprises the following steps:
(1) the host computer communicates with the vision image sensor and detects the preset image acquisition exposure time; the host computer sends this exposure time to the programmable gate array, which sets the pulse signal output width and trigger output frequency;
(2) the programmable gate array sends trigger signals that control the strobing of the optical elements and cause the vision image sensor to start exposing, acquire an image pair, and process the data;
(3) the vision image sensor sends the center positions of all feature observation points in the acquired image pair to the host computer;
(4) the host computer tracks and predicts the observation points in all images using a two-dimensional Kalman filter; performs spatial three-dimensional reconstruction of the feature observation points according to the epipolar geometry theory of stereo vision; and performs three-dimensional Kalman prediction to correct prediction deviations;
(5) the host computer automatically identifies and numbers the reconstructed feature points;
(6) the host computer automatically repairs lost or erroneous capture results according to rigid-body constraints, and removes unwanted rigid-body displacement according to reference points whose relative positions are fixed;
(7) the host computer judges whether all tracking and prediction tasks are complete; if so, it generates the report of the captured three-dimensional motion of the facial observation points.
The specific flow of step (2) is:
1. the programmable gate array sends a pulse trigger signal to the vision image sensor;
2. the optical elements light up on the rising edge of the pulse signal;
3. the vision image sensor starts exposing on the rising edge of the pulse signal;
4. the vision image sensor ends the exposure, completing the acquisition of one image pair;
5. the optical elements switch off on the falling edge of the pulse;
6. the system waits for the next FPGA trigger.
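The trigger cycle above can be sketched in software. The sketch below simulates the pulse schedule such an FPGA would generate from a requested exposure time and frame rate, one rising/falling edge pair per frame; the function and parameter names are illustrative, since the patent gives no concrete register values:

```python
def trigger_schedule(exposure_s, frame_rate_hz, n_frames):
    """Simulate the FPGA pulse plan as (rising, falling) edge times.

    The pulse is high for exposure_s (LED on, sensor exposing) and low
    for the remainder of each frame period (LED off, sensor idle).
    """
    period = 1.0 / frame_rate_hz
    if exposure_s >= period:
        raise ValueError("exposure must fit within one frame period")
    edges = []
    for i in range(n_frames):
        rising = i * period            # LED lights + exposure starts
        falling = rising + exposure_s  # LED extinguishes + exposure ends
        edges.append((rising, falling))
    return edges
```

With a 1/10 exposure duty cycle, this schedule is also what drives the LED-lifetime advantage claimed later: the lamp is powered only between each rising and falling edge.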
The advantages and positive effects of the present invention are:
1. The present invention is based on the principle of visual three-dimensional measurement: the feature points of each image pair are extracted, tracked, and three-dimensionally reconstructed; the facial observation points are automatically identified and numbered; Kalman filtering is used for predictive tracking computation and error correction; erroneous data are automatically detected and corrected, and missing data are filled in according to a model; and the three-dimensional motion of the facial observation points is analyzed and a report is output, thereby achieving low-cost, high-efficiency, high-reliability, and high-precision real-time motion tracking, reconstruction, and result analysis.
2. The present invention also reconstructs motion sequences and captures facial motion data based on reliable prediction and analysis techniques, and uses a one-click intelligent post-processing technique to guarantee the integrity and accuracy of the data without excessive manual processing, greatly facilitating the kinematic analysis and study of facial muscles for medical use.
3. The present invention achieves precise synchronization and logic control of the facial motion capture system based on a programmable gate array (FPGA); uses annular LEDs as the projection light source; uses camera self-calibration technology to calibrate the parameters of the multiple cameras; extracts, tracks, three-dimensionally reconstructs, and automatically numbers and identifies the facial feature points; and outputs a capture and analysis report, thereby completing low-cost, high-efficiency, high-reliability, and high-precision facial motion capture, reconstruction, and result analysis.
4. The present invention achieves high-speed, high-resolution image transmission. Based on the near-infrared imaging principle of motion capture cameras and the sharp contrast between the white markers and the dark background, a unit collection box hardware scheme was designed: the camera's raw image is transmitted over a high-speed Camera Link cable to the collection box, where an FPGA segments the image, retaining only small sub-images around the bright markers and discarding the black background data that makes up most of the image. The bandwidth occupied by the retained data is reduced to one hundredth of the original image, so a single gigabit network cable can transmit multiple image channels to the workstation. Feature extraction is then performed at the workstation; because the images have already been segmented and the data volume reduced, computation is nearly a hundred times faster than on the original images and occupies few workstation resources. The advantages are that camera cost does not increase at all, the unit collection box is FPGA-based and very cheap, no effective image information is lost, capture precision is high and the data are stable, and the transmission module is equally simple with few cables, ensuring capture data quality while keeping motion capture costs well under control.
5. The calibration precision of the present invention is high. Based on a static calibration-board scheme, a lens distortion mathematical model is added to the calibration computation, and a fast nonlinear large-sparse-matrix method is used to solve the intrinsic and extrinsic parameters of the multi-camera vision system. The advantages are simple operation and ease of use while obtaining accurate intrinsic and extrinsic parameters; reconstruction accuracy is improved while the system's requirements on the distortion-control quality of the optical lenses are relaxed, which again lowers hardware cost.
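As one hedged illustration of the kind of lens distortion model such a calibration can include, the sketch below applies a standard two-coefficient radial model in normalized image coordinates and inverts it by fixed-point iteration; the patent does not specify the model's order or the solver, and the coefficient values in the test are invented for the example:

```python
def distort(x, y, k1, k2):
    """Apply radial distortion x_d = x * (1 + k1*r^2 + k2*r^4)
    in normalized (dimensionless) image coordinates."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f

def undistort(xd, yd, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration, as calibration
    pipelines commonly do (a sketch, not the patent's exact method)."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / f, yd / f
    return x, y
```

Correcting marker centers with such a model before triangulation is what lets reconstruction accuracy improve even with cheaper, more distorted lenses.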
6. The present invention significantly extends service life. Limited by the processing speed of the image acquisition sensor, the effective exposure time is only about 1/10 of the total scan time, so under the continuous-illumination mode of the prior art the main optical LED elements (such as laser LEDs) do unproductive work for up to 9/10 of the time. With pulsed strobe scanning, the unproductive fraction of the main optical LED elements' working time drops from 9/10 to 0; estimated from the LEDs' rated parameters, service life is extended more than tenfold, energy consumption is greatly reduced, heat dissipation becomes almost negligible, and the cost of designing and manufacturing heat-dissipation structures is eliminated.
7. The present invention solves the problems of tracking errors and tracking loss. Occlusion, overlap, and similar problems during motion tracking often impose a large post-processing workload on motion capture. The system uses intelligent tracking error correction and repair: exploiting the approximately rigid structural features of the target, the target is segmented into structures and an approximate three-dimensional rigid-structure model is obtained through model learning. The model compares against and guides the tracking results in real time, correcting erroneous tracking results and repairing the trajectories of targets whose tracking has been lost. The scheme places no high demands on camera frame rate, keeping cost well under control, while fundamentally solving the low capture accuracy caused by tracking errors.
Brief description of the drawings
Fig. 1 is a structural and functional block diagram of the present invention;
Fig. 2 is a structural schematic diagram of the motion capture device;
Fig. 3 is a flow diagram of the working method of the programmable gate array (FPGA);
Fig. 4 is a flow diagram of the host computer's extraction, tracking, three-dimensional reconstruction, and automatic identification and numbering of the feature observation points;
Fig. 5 is the complete working flow chart of the present invention.
Detailed description of the invention
The invention is further described below in conjunction with the accompanying drawings and specific embodiments. The following examples are descriptive, not restrictive, and cannot limit the protection scope of the present invention.
A facial muscle motion capture and analysis system includes a motion capture device and a host computer. The motion capture device includes a programmable gate array (FPGA) and a vision image sensor. The host computer 1 is connected to the programmable gate array 3 and to the vision image sensor 2, and the programmable gate array is connected to the vision image sensor. The host computer can be understood as a controlling device, for example a computer. It provides camera self-calibration, feature point extraction, tracking, three-dimensional reconstruction, automatic identification, rigid-body displacement removal, analysis report output, and other functions.
The programmable gate array is connected to the vision image sensor. The FPGA sends pulse trigger signals to the vision image sensor to precisely control its synchronized shooting. The specific flow of the FPGA's precise synchronization control is shown in Fig. 3. In S301, the host computer communicates with the vision image sensor and detects the preset image acquisition exposure time; in S302, the host computer sends this exposure time to the FPGA; in S303, the FPGA sets the pulse signal output width and trigger output frequency according to the received exposure time and shooting frequency; in S304, the FPGA sends a pulse trigger signal to the vision image sensor; in S305, the optical elements (including any auxiliary light source added to the vision image sensor) light up on the rising edge of the pulse signal; in S306, the vision image sensor starts exposing on the rising edge of the pulse signal; in S307, the vision image sensor ends the exposure, completing the acquisition of one image pair; in S308, the optical elements switch off on the falling edge of the pulse; in S309, the hardware waits for the next FPGA trigger, i.e., loops back to S304.
The vision image sensor is composed of at least two optical cameras. The geometry between the cameras is relatively fixed, and the relative positions between the cameras and the cameras' intrinsic parameters are known. The cameras receive the trigger pulse signal sent by the FPGA and expose at the same instant; the images acquired at each trigger form one stereo-matched image pair, whose processed data are sent over the camera transmission cable to the host computer for feature point extraction, tracking, three-dimensional reconstruction, and automatic identification and numbering. The vision image sensor includes an auxiliary lighting device for increasing the intensity of the light reflected from the measured surface and collected by the image acquisition sensor: for example, an annular LED lamp arranged concentrically with the outer circumference of the image acquisition sensor's optical lens and connected to the camera through the camera's own signal input/output interface; the strobe signal output by this interface can drive the LED to flash synchronously with camera acquisition. For ease of description, this embodiment takes a two-camera vision image sensor as an example; as shown in the structural schematic of Fig. 2, the two cameras are arranged one above the other, hence the upper camera and the lower camera.
Camera calibration process: a calibration object whose surface bears several reflective points is used for camera calibration. The calibration object is cross-shaped; it is held at successive 90-degree rotations of the cross and photographed in several rotated attitudes, after which multi-camera parameter calibration is performed according to the camera self-calibration method.
The specific flow of feature observation point tracking is shown in Fig. 4. In S401, the FPGA sends a trigger signal controlling the vision image sensor to expose once and acquire an image pair; in S402, each image of the pair is processed separately and the center positions of all feature observation points are extracted; in S403, the vision image sensor sends the center positions of all feature observation points of the acquired image pair to the host computer; in S404, the observation points in all images are tracked and predicted with a two-dimensional Kalman filter; in S405, spatial three-dimensional reconstruction of the feature observation points is performed according to the epipolar geometry theory of stereo vision, and three-dimensional Kalman prediction is performed to correct prediction deviations; in S406, the reconstructed feature points are automatically identified and numbered; in S407, lost or erroneous capture results are automatically repaired according to rigid-body constraints, and unwanted rigid-body displacement in the capture process is removed according to reference points with fixed relative positions; the flow then jumps to S408 to judge whether to continue the loop; in S409, if all tracking and prediction tasks are judged complete, the tracking analysis report is output.
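A minimal constant-velocity Kalman predictor for one image coordinate, of the kind S404 could apply per axis of each marker; the state model is the standard position-velocity one, the 2x2 matrix algebra is written out by hand, and the noise values are illustrative assumptions rather than the patent's parameters:

```python
class KalmanCV1D:
    """Constant-velocity Kalman filter for one coordinate of a marker.
    State: (position, velocity); covariance P = [[p00, p01], [p01, p11]]."""

    def __init__(self, x0, q=1e-3, r=0.25):
        self.x, self.v = x0, 0.0
        self.p00, self.p01, self.p11 = 1.0, 0.0, 1.0
        self.q, self.r = q, r  # process / measurement noise (assumed)

    def predict(self, dt=1.0):
        """Time update: project state and covariance one frame ahead."""
        self.x += self.v * dt
        p00 = self.p00 + dt * (2 * self.p01 + dt * self.p11) + self.q
        self.p01 = self.p01 + dt * self.p11
        self.p00 = p00
        self.p11 += self.q
        return self.x  # predicted marker coordinate for matching

    def update(self, z):
        """Measurement update with the matched marker center z."""
        s = self.p00 + self.r                 # innovation covariance
        k0, k1 = self.p00 / s, self.p01 / s   # Kalman gain
        y = z - self.x                        # innovation
        self.x += k0 * y
        self.v += k1 * y
        p00, p01 = self.p00, self.p01
        self.p00 = (1 - k0) * p00
        self.p01 = (1 - k0) * p01
        self.p11 -= k1 * p01
        return self.x
```

In practice the predicted coordinate is used to associate each marker with its nearest detection in the next frame, which is what keeps dense markers from swapping identities.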
The three-dimensional reconstruction function in the host computer performs three-dimensional reconstruction of the object-surface feature points in the image pairs acquired by the vision sensor, i.e., it converts the two-dimensional feature point sets of each stereo-matched image pair into a three-dimensional feature point set based on the triangulation algorithm. In this embodiment, the object-surface features are specially designed circular markers pasted onto the object surface, and the two-dimensional feature point is the ellipse center extracted from the image by image processing. According to the epipolar geometry principle, for each feature observation point in the upper camera image, the closest two-dimensional feature observation point is found on the epipolar line of the lower camera image; from the corresponding pair of two-dimensional feature observation points in the upper and lower cameras, the three-dimensional spatial coordinates of the feature point can be computed by triangulation.
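A hedged sketch of the triangulation step: given each camera's center and the back-projected ray through its matched 2-D point, the midpoint of the common perpendicular of the two rays approximates the 3-D marker position. The patent does not say which triangulation variant is used; this midpoint method is one standard choice:

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation of two camera rays p = c + t*d.

    c1, c2: camera centers; d1, d2: ray directions (3-tuples).
    Solves the 2x2 normal equations for the closest points on each ray
    and returns the midpoint of the segment connecting them."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w = tuple(x - y for x, y in zip(c1, c2))
    p, q = dot(d1, w), dot(d2, w)
    den = a * c - b * b  # ~0 only for (near-)parallel rays
    t1 = (b * q - c * p) / den
    t2 = (a * q - b * p) / den
    m1 = tuple(ci + t1 * di for ci, di in zip(c1, d1))
    m2 = tuple(ci + t2 * di for ci, di in zip(c2, d2))
    return tuple((u + v) / 2 for u, v in zip(m1, m2))
```

For perfectly matched, distortion-corrected points the two rays intersect and the midpoint is exact; measurement noise makes the rays skew, and the midpoint splits the residual.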
The automatic facial observation point identification function in the host computer automatically identifies the intrinsic facial topology node corresponding to each observation point. In motion analysis, the observation point model includes required identification points such as the left and right ears, the upper and lower lips, the left and right mouth corners, the left and right nose wings, the nose tip, the nose bridge, the glabella, the left and right inner eye corners, the left and right outer eye corners, and the left and right upper and lower eyelids. The recognition principle is as follows. First the positions of reference points among the observation points are determined, and the remaining specific positions are found among the unconfirmed observation points. The two points with the greatest separation in the x-axis direction are the ears; the mouth is at the bottom of the face, in the z-axis direction, and the four mouth points are assigned to the major or minor axis according to the maximum/minimum-distance principle. The maximum distance among the three lowest remaining observation points determines the positions of the left and right nose wings, with the nose tip in the middle. A mid-plane is determined by the midpoint of the line between the two ears, the nose tip, and the lower lip; among the points above, the one closest to the mid-plane is the nose bridge and the second closest is the glabella. Among the remaining observation points, the two closest to the mid-plane are the left and right inner eye corners, distinguished left/right by their corresponding x values. The left and right eyebrow centers are determined by the minimum z value, again distinguished left/right by x. The points farthest from the mid-plane are the left and right outer eye corners, distinguished by x. Among the remaining observation points, taking the nose tip as reference, the maximum and minimum z values on the left side determine the left upper and lower eyelids, and likewise for the right; the automatic identification process then ends.
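Only the first steps of this labelling heuristic are sketched below, assuming the reconstructed observation points arrive as (x, y, z) tuples: find the ears as the pair with the largest x-axis separation, then split the remaining points into left and right halves about the ears' x midpoint, as the repeated "distinguished by x" tests require. The later nose/eye steps would follow the same pattern:

```python
def identify_ears_and_split(points):
    """First labelling steps: the two points with the largest x-axis
    separation are the ears; the rest are split left/right of the
    x midpoint between the ears. Returns (left_ear, right_ear, left, right)."""
    ears = max(
        ((p, q) for i, p in enumerate(points) for q in points[i + 1:]),
        key=lambda pq: abs(pq[0][0] - pq[1][0]),
    )
    left_ear, right_ear = sorted(ears, key=lambda p: p[0])
    mid_x = (left_ear[0] + right_ear[0]) / 2
    rest = [p for p in points if p not in (left_ear, right_ear)]
    left = [p for p in rest if p[0] < mid_x]
    right = [p for p in rest if p[0] >= mid_x]
    return left_ear, right_ear, left, right
```

Because every later rule ("closest to the mid-plane", "minimum z", ...) operates on the shrinking set of unconfirmed points, the full identifier is a cascade of such selections.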
The host computer's function of automatically repairing observation points missed or lost during tracking repairs losses and omissions of observation points caused by occlusion or positional offset during capture. Each observation point lies within a topological structure; when an observation point is lost, based on the invariance of certain facial topological structures, the three-dimensional coordinates of the point can be recovered from the three-dimensional spatial coordinates of the other points in the topological structure to which it belongs.
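A deliberately simplified sketch of this repair, assuming the substructure's frame-to-frame motion is close to a pure translation so the lost point can be placed at its reference-model position shifted by the mean offset of the still-visible points; the patent's full method would also carry the substructure's rotation, and all names here are illustrative:

```python
def restore_missing(model, current, missing_id):
    """Recover a lost point from the other points of its (approximately)
    rigid substructure.

    model:   dict id -> reference (x, y, z) of the substructure.
    current: dict id -> this frame's (x, y, z) for the visible points.
    Translation-only approximation: the lost point gets its model
    position plus the mean offset of the visible points."""
    visible = [i for i in current if i != missing_id]
    off = [0.0, 0.0, 0.0]
    for i in visible:
        for k in range(3):
            off[k] += (current[i][k] - model[i][k]) / len(visible)
    return tuple(model[missing_id][k] + off[k] for k in range(3))
```

When the occluded marker reappears, the repaired trajectory segment can be blended back into the tracked one, avoiding the point-swap artifacts described in the background section.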
The rigid-body displacement removal function in the host computer removes, during feature observation point capture, the effect on tracking prediction of displacement produced by slight movement of the observed person's head or body. The specific method is to select several observation points as a reference base: these observation points are displaced along with the head and body, but during facial movement their positions are almost unchanged or their displacement is negligible. Such reference points can be fixed on the top of the head, separate from the facial observation points, or feature points whose positions are almost unchanged can be picked from among the facial observation points, such as the left and right eye corners or the nose bridge. When the whole head is displaced, because the relative positions of the reference points are unchanged, the approximate transformation relation [R1 | T1] between the reference points of successive frames is computed, and the corresponding transformation is applied to the three-dimensional spatial coordinates of all observation points, thereby removing the rigid-body displacement.
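The sketch below illustrates this in a hedged, reduced form: to keep the estimate closed-form without a 3x3 SVD, rotation is restricted to the vertical (z) axis, estimated by 2-D Procrustes alignment of the reference points in the x-y plane; the patent's [R1 | T1] is a full 3-D rigid transform solved on the same principle:

```python
import math

def remove_rigid_displacement(ref_prev, ref_curr, points_curr):
    """Estimate head motion between frames from reference points and
    undo it on all observation points (z-axis rotation + translation
    only, as a closed-form simplification). Points are (x, y, z)
    tuples, reference lists in matching order."""
    n = len(ref_prev)
    c0 = [sum(p[k] for p in ref_prev) / n for k in range(3)]
    c1 = [sum(p[k] for p in ref_curr) / n for k in range(3)]
    # closed-form 2-D Procrustes angle in the x-y plane
    s_sin = s_cos = 0.0
    for p, q in zip(ref_prev, ref_curr):
        ax, ay = p[0] - c0[0], p[1] - c0[1]
        bx, by = q[0] - c1[0], q[1] - c1[1]
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    th = math.atan2(s_sin, s_cos)        # rotation prev -> curr
    c, s = math.cos(-th), math.sin(-th)  # inverse rotation
    out = []
    for p in points_curr:
        x, y, z = p[0] - c1[0], p[1] - c1[1], p[2] - c1[2]
        out.append((c * x - s * y + c0[0],
                    s * x + c * y + c0[1],
                    z + c0[2]))
    return out
```

After this correction, any residual motion of a facial observation point is attributable to muscle activity rather than head movement, which is the quantity the medical analysis needs.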
The trace-analysis report output function in the host computer produces quantitative results on the motion of the captured and tracked facial observation points for research reference, such as the interval maximum displacement, current-frame velocity, and acceleration of an observation point. The interval maximum displacement of a numbered observation point is the maximum offset, within an interval of frames, of its position from that of the same-numbered observation point in the standard frame. In continuous motion, velocity is the derivative of displacement with respect to time, whereas the observation-point motion obtained by the camera is discrete on the time axis. Because the camera has a high frame rate (at least 60 frames/s, preferably 120 frames/s or more), when data are acquired at high frequency, velocity and acceleration can be computed by finite differences. The current-frame velocity of an observation point is v = ΔS/Δt, where ΔS is the displacement of the point in the current frame relative to its position in the previous frame and Δt is the acquisition period; at 60 frames/s, for example, the acquisition period is 1/60 s. Similarly, the acceleration is a = Δv/Δt, where Δv is the velocity difference between the current frame and the previous frame for that observation point.
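The quantities in the report (interval maximum displacement, v = ΔS/Δt, a = Δv/Δt) can be sketched as follows; the function and parameter names are illustrative only:

```python
import numpy as np

def kinematics(trajectory, fps=60.0):
    """Finite-difference speed and acceleration of one observation point
    from its per-frame 3D positions, shape (T, 3). Δt = 1/fps."""
    dt = 1.0 / fps
    disp = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)  # ΔS per frame
    speed = disp / dt                                           # v = ΔS/Δt
    accel = np.diff(speed) / dt                                 # a = Δv/Δt
    return speed, accel

def interval_max_displacement(trajectory, ref_frame=0):
    """Maximum offset of the point, over the interval, from its
    position in the standard (reference) frame."""
    offsets = np.linalg.norm(trajectory - trajectory[ref_frame], axis=1)
    return offsets.max()
```

For a point moving 1 unit per frame at 60 frames/s, this yields a constant speed of 60 units/s and zero acceleration, as expected from the differential definitions above.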
Fig. 5 shows the overall flow of the analysis method: S501, the PC controls the FPGA, the vision image sensor, and the host computer to switch on and enter the working state; S502, the FPGA sends a trigger signal to control the strobing of the optical element and to control the vision image sensor to start exposure, acquire an image pair, and perform data processing; S503, the vision image sensor sends the processed data of the acquired image pair to the host computer; S504, the host computer extracts, tracks, and three-dimensionally reconstructs the feature points; S505, the host computer automatically identifies and numbers the feature points on the surface of the measured object; S506, the host computer automatically repairs observation feature points lost or omitted during tracking and removes the rigid-body displacement; S507, the host computer generates the report of the three-dimensional facial observation-point motion capture; S508, the system waits for the next FPGA trigger signal and jumps back to S502 to continue the loop.
The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art may make several variations and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention.
Claims (10)
1. A facial muscle motion-capture analysis system, characterized in that it includes a motion capture device and a host computer. The motion capture device includes a programmable gate array and a vision image sensor; the host computer is connected to the programmable gate array and the vision image sensor respectively, and the programmable gate array is connected to the vision image sensor. The programmable gate array receives the predetermined pulse trigger signal and exposure time sent by the host computer and, according to them, sends a trigger signal to the vision image sensor. The vision image sensor receives the trigger pulse signal sent by the programmable gate array, exposes and acquires an image pair, preprocesses the data, and sends the processed data to the host computer. The host computer extracts the feature points uploaded by the vision image sensor, performs predictive tracking and three-dimensional reconstruction, automatically identifies and numbers the feature points, automatically repairs observation feature points lost or omitted during tracking, removes the rigid-body displacement, and generates the report of the three-dimensional facial observation-point motion capture.
2. The facial muscle motion-capture analysis system according to claim 1, characterized in that: the preprocessing of the image consists in the programmable gate array segmenting the image, retaining only the small image regions around the highlighted marker points, and discarding the black-background image data that occupies most of the image area.
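A minimal sketch of this segmentation step, assuming a grayscale image with bright marker points; the threshold and window size are illustrative values not given in the claim:

```python
import numpy as np

def keep_marker_windows(img, thresh=200, half=4):
    """Zero out everything except small windows around pixels brighter
    than `thresh`, so only the regions around the highlighted marker
    points survive (threshold and window half-size are illustrative)."""
    out = np.zeros_like(img)
    h, w = img.shape
    ys, xs = np.nonzero(img > thresh)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - half), min(h, y + half + 1)
        x0, x1 = max(0, x - half), min(w, x + half + 1)
        out[y0:y1, x0:x1] = img[y0:y1, x0:x1]   # keep window around marker
    return out
```

Discarding the dark background in this way shrinks the data that must be transferred to the host computer, which is the stated purpose of performing the preprocessing on the gate array.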
3. The facial muscle motion-capture analysis system according to claim 1, characterized in that: the vision image sensor includes multiple optical cameras; the host computer calibrates the multiple optical cameras, adds a mathematical model of lens distortion to the calibration computation, and uses a nonlinear fast solution method for large sparse matrices to solve the intrinsic and extrinsic parameters of the multi-camera vision system.
4. The facial muscle motion-capture analysis system according to claim 1, characterized in that: the vision image sensor includes auxiliary lighting; the auxiliary lighting is an annular LED lamp coaxially mounted on the outside of the camera's optical lens and connected to the camera through the camera's own signal input/output interface; the strobe signal output by this interface controls the LED to strobe in synchronization with the camera acquisition.
5. The facial muscle motion-capture analysis system according to claim 1, characterized in that: the host computer's automatic feature-point identification and numbering process is as follows: first the positions of the reference points among the observation points are determined, and the remaining specific positions are found among the unconfirmed observation points. The ears are the two points with the maximum distance in the x-axis direction; the mouth lies at the bottom of the face in the z-axis direction, and the four mouth points are assigned to the major axis or the minor axis according to the maximum/minimum-distance principle. Among the remaining observation points, the three lowest points are searched for the maximum separation to determine the positions of the left and right wings of the nose, the middle one being the nose tip. The midplane is determined by the midpoint of the line between the two ears, the nose tip, and the lower lip. The point closest to the midplane is the bridge of the nose and the second closest is the glabella; among the remaining observation points, the two closest to the midplane are the left and right inner canthi, judged left or right by their corresponding x values. The left and right mid-eyebrow points are determined by the minimum values in the z direction and then judged left or right by their corresponding x values; the points farthest from the midplane are the left and right outer canthi, judged left or right by their corresponding x values. Among the remaining observation points, taking the nose tip as the datum, the maximum and minimum in the z direction on the left side determine the upper-left and lower-left eyelids, and likewise for the upper-right and lower-right eyelids, at which point the automatic identification process ends.
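A partial sketch of the start of this numbering procedure (ears and mouth only), under an assumed axis convention (x = left-right, z = vertical with the mouth at the z minimum); the patent does not fix the sign convention, so the signs here are illustrative:

```python
import numpy as np

def identify_basic_landmarks(pts):
    """Find the two ears as the point pair with maximum x-direction
    separation and the mouth as the lowest point in z (sign convention
    assumed); pts has shape (N, 3). Returns row indices."""
    dx = np.abs(pts[:, 0][:, None] - pts[:, 0][None, :])  # pairwise |Δx|
    i, j = np.unravel_index(np.argmax(dx), dx.shape)      # ear pair
    mouth = int(np.argmin(pts[:, 2]))                     # bottom of the face
    return {"ear_left": int(min(i, j, key=lambda k: pts[k, 0])),
            "ear_right": int(max(i, j, key=lambda k: pts[k, 0])),
            "mouth": mouth}
```

The remaining landmarks in the claim (nose wings, midplane, canthi, eyebrows, eyelids) would be resolved in the same style, by ordering the unconfirmed points by their distances to the midplane and their x and z coordinates.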
6. The facial muscle motion-capture analysis system according to claim 1, characterized in that: the host computer's automatic repair of observation feature points lost or omitted during tracking restores the three-dimensional coordinates of such a point from the rigid structure to which it belongs and the three-dimensional space coordinates of the other points in that structure.
7. The facial muscle motion-capture analysis system according to claim 1, characterized in that: the host computer's removal of rigid-body displacement consists in selecting multiple observation points as a reference datum; when the whole produces a displacement, the relative positions of the reference points are invariant, and according to the rough transformation relation [R1 | T1] between the reference points of the preceding and following frames, the same transformation is applied to the three-dimensional space coordinates of all observation points, removing the rigid-body displacement.
8. The facial muscle motion-capture analysis system according to claim 1, characterized in that: the report of the three-dimensional facial observation-point motion capture generated by the host computer includes the interval maximum displacement, current-frame velocity, and acceleration of the observation points.
9. An analysis method for the facial muscle motion-capture analysis system according to claim 1, with steps as follows:
(1) the host computer communicates with the vision image sensor and detects the preset image-acquisition exposure time; the host computer sends this exposure time to the programmable gate array, which sets the output pulse-signal width and the trigger output frequency;
(2) the programmable gate array sends a trigger signal to control the strobing of the optical element and to control the vision image sensor to start exposure, acquire an image pair, and perform data processing;
(3) the vision image sensor sends the initial positions of all observation feature points of the acquired image pair to the host computer;
(4) the host computer uses two-dimensional Kalman filtering to track and predict the observation points in all images; according to the epipolar-geometry theory of stereoscopic vision, it performs spatial three-dimensional reconstruction of the observation feature points, carries out three-dimensional Kalman prediction, and corrects prediction deviations;
(5) the host computer automatically identifies and numbers the reconstructed feature points;
(6) the host computer automatically repairs lost or erroneous capture results according to rigid-body constraints and, using reference points with fixed relative positions, removes the unwanted rigid-body displacement in the capture process;
(7) the host computer judges whether all tracking and prediction tasks are complete; if so, it generates the report of the three-dimensional facial observation-point motion capture.
10. The analysis method according to claim 9, characterized in that the specific flow of step (2) is:
1. the programmable gate array sends a pulse trigger signal to the vision image sensor;
2. the optical element is powered on and lights at the rising edge of the pulse signal;
3. the vision image sensor starts exposure at the rising edge of the pulse signal;
4. the vision image sensor ends exposure, completing the acquisition of one image pair;
5. the optical element is powered off and extinguishes at the falling edge of the pulse;
6. the system waits for the next trigger from the programmable gate array.
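The Kalman tracking prediction of step (4) in claim 9 can be illustrated with a minimal constant-velocity Kalman filter for one observation point; the constant-velocity motion model and the noise levels q and r are illustrative assumptions, not values from the patent:

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal 3D constant-velocity Kalman filter for one observation
    point: state = [position, velocity]; only position is measured."""
    def __init__(self, pos0, dt=1 / 60, q=1e-3, r=1e-2):
        self.x = np.hstack([pos0, np.zeros(3)])              # [p, v]
        self.P = np.eye(6)                                   # state covariance
        self.F = np.eye(6); self.F[:3, 3:] = dt * np.eye(3)  # p' = p + v·dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])    # observe position
        self.Q = q * np.eye(6); self.R = r * np.eye(3)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]                                    # predicted position

    def update(self, z):
        y = z - self.H @ self.x                              # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)             # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]                                    # corrected position
```

The predicted position narrows the search window for the marker in the next frame, and the update step realizes the "correct prediction deviations" part of step (4); the two-dimensional image-plane tracker would use the same structure with a 4-element state.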
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610549697.7A CN106175780A (en) | 2016-07-13 | 2016-07-13 | Facial muscle motion-captured analysis system and the method for analysis thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106175780A true CN106175780A (en) | 2016-12-07 |
Family
ID=57477062
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610549697.7A Pending CN106175780A (en) | 2016-07-13 | 2016-07-13 | Facial muscle motion-captured analysis system and the method for analysis thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106175780A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108830861A (en) * | 2018-05-28 | 2018-11-16 | 上海大学 | A kind of hybrid optical motion capture method and system |
CN108848824A (en) * | 2018-06-13 | 2018-11-23 | 苏州创存数字科技有限公司 | A kind of agricultural machinery based on multi-vision visual measurement |
CN110177486A (en) * | 2016-12-20 | 2019-08-27 | 株式会社资生堂 | Coating controller, coating control method, program and recording medium |
CN110717928A (en) * | 2019-10-21 | 2020-01-21 | 网易(杭州)网络有限公司 | Parameter estimation method and device of face motion unit AUs and electronic equipment |
CN111553250A (en) * | 2020-04-25 | 2020-08-18 | 深圳德技创新实业有限公司 | Accurate facial paralysis degree evaluation method and device based on face characteristic points |
CN111670419A (en) * | 2018-02-05 | 2020-09-15 | 高通股份有限公司 | Active supplemental exposure settings for autonomous navigation |
CN112998693A (en) * | 2021-02-01 | 2021-06-22 | 上海联影医疗科技股份有限公司 | Head movement measuring method, device and equipment |
CN114748085A (en) * | 2022-04-22 | 2022-07-15 | 南方医科大学南方医院 | X-ray exposure method, system and device based on motion recognition |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101310289B (en) * | 2005-08-26 | 2013-03-06 | 索尼株式会社 | Capturing and processing facial motion data |
WO2013058710A1 (en) * | 2011-10-18 | 2013-04-25 | Nanyang Technological University | Apparatus and method for 3d surface measurement |
CN102289801B (en) * | 2011-05-16 | 2013-08-21 | 大连大学 | Data repairing method and system for motion capture and motion capture system |
CN104107048A (en) * | 2014-03-03 | 2014-10-22 | 中国医学科学院北京协和医院 | Musculus facialis three-dimensional motion measuring device on basis of motion capture |
US20150022639A1 (en) * | 2013-07-18 | 2015-01-22 | A.Tron3D Gmbh | Method of capturing three-dimensional (3d) information on a structure |
CN105203046A (en) * | 2015-09-10 | 2015-12-30 | 北京天远三维科技有限公司 | Multi-line array laser three-dimensional scanning system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication ||
Application publication date: 20161207 |