CN108846342A - Cleft lip surgery landmark point recognition system - Google Patents

Cleft lip surgery landmark point recognition system

Info

Publication number
CN108846342A
CN108846342A (application CN201810570020.0A)
Authority
CN
China
Prior art keywords
picture
cleft lip
nasolabial
face
contour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810570020.0A
Other languages
Chinese (zh)
Inventor
李杨
李洲
李一洲
梅宏翔
程俊豪
马凰税
寿宇柯
巩娜
唐秀美
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN201810570020.0A priority Critical patent/CN108846342A/en
Publication of CN108846342A publication Critical patent/CN108846342A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a cleft lip surgery landmark point recognition system, comprising: a picture screening module for receiving original pictures containing faces, screening them, and obtaining standard-compliant cleft lip pictures; a face detection module for extracting the face region from the standard cleft lip picture according to preset target parameters to obtain a face picture; a contour frame determination module for obtaining a nasolabial contour box from the face picture according to a pre-trained first convolutional neural network; and a landmark position determination module for obtaining surgical landmark positions from the nasolabial contour box according to a pre-trained second convolutional neural network. The technical solution provided by the invention can automatically and accurately determine the surgical landmark positions of cleft lip patients and provide effective surgical guidance for doctors.

Description

Cleft lip surgery landmark point recognition system
Technical field
The present invention relates to the field of deep learning, and in particular to a cleft lip surgery landmark point recognition system.
Background art
Cleft lip (commonly called harelip) is a common congenital deformity of the oral and maxillofacial region. Its incidence in China is about 1.82‰, with some 2.54 million existing patients — an enormous number. Cleft lip is a multifactorial hereditary disease. It not only seriously affects facial appearance but, because the oral and nasal cavities communicate, also directly impairs infant development, frequently leads to upper respiratory tract infections and secondary otitis media, and the deformity worsens with age. Timely surgical repair in infancy is therefore currently the most effective treatment.
However, cleft lip repair surgery in China faces the predicament of lacking specialization and system while being highly difficult, which drives patients to concentrate in top-tier hospitals and leaves the available medical resources of prefecture-level hospitals underused. The most important and most difficult step in cleft lip surgery is determining the positions of the surgical landmark points; this is the core technique of the operation and determines its success or failure. At present, the more advanced techniques for this operation are mastered by only a few doctors, and the existing doctor-training mechanism spreads advanced techniques slowly, so most cleft lip operations currently yield unsatisfactory results and fall far short of meeting patients' needs.
Summary of the invention
The present invention aims to provide a cleft lip surgery landmark point recognition system that can automatically determine the surgical landmark positions of cleft lip patients and provide effective surgical guidance for doctors.
To achieve the above objectives, the technical solution adopted by the present invention is as follows:
A cleft lip surgery landmark point recognition system, comprising: a picture screening module for receiving original pictures containing faces, screening them, and obtaining standard-compliant cleft lip pictures; a face detection module for extracting the face region from the standard cleft lip picture according to preset target parameters to obtain a face picture; a contour frame determination module for obtaining a nasolabial contour box from the face picture according to a pre-trained first convolutional neural network; and a landmark position determination module for obtaining surgical landmark positions from the nasolabial contour box according to a pre-trained second convolutional neural network.
Preferably, the picture screening module uses the Custom Vision API in Microsoft Cognitive Services to screen the original pictures and obtain standard-compliant cleft lip pictures.
Preferably, the preset target parameters include: the size of the picture, the resolution of the picture, and the position of the face within the picture.
Preferably, the face detection module uses the Face API in Microsoft Cognitive Services to extract the face region from the standard cleft lip picture and obtain a face picture.
Preferably, the first convolutional neural network has the same structure as the second convolutional neural network; the first convolutional neural network comprises, connected in sequence, an input layer, a Google Xception network, a global average pooling layer, an SPP layer, a fully connected layer, and an output layer; the Google Xception network is pre-trained on ImageNet.
Further, a Rescale layer is additionally provided between the input layer and the Google Xception network, for resizing pictures in the input layer whose dimensions exceed a threshold.
Further, the training data of the first convolutional neural network includes: face pictures of cleft lip patients and the coordinates of the nasolabial contour box in each face picture; the coordinates of the nasolabial contour box are its relative coordinates within the patient's face picture; the nasolabial contour box is annotated manually in a preset annotation system, which outputs its coordinates. The training data of the second convolutional neural network includes: nasolabial contour box pictures and the coordinates of the surgical landmark points within them; the coordinates of a surgical landmark point are its relative coordinates within the nasolabial contour box picture; the surgical landmark points are annotated manually in the annotation system, which outputs their coordinates.
Preferably, the nasolabial contour box is rectangular, and its coordinates comprise the coordinates of its upper-left corner point and its lower-right corner point; there are 12 surgical landmark points.
Preferably, the first and second convolutional neural networks are trained on a GPU.
In the cleft lip surgery landmark point recognition system provided by embodiments of the present invention, an original picture of a cleft lip patient is obtained; after face detection is performed on it, a nasolabial contour box is located in the face picture, and multiple surgical landmark points are then computed from the contour box. Because both the contour-box localization and the landmark localization use pre-trained convolutional neural networks, the whole process is completed automatically by computer, without manual participation by a doctor. When the training data of the convolutional neural networks are annotated manually by experienced, outstanding cleft lip surgeons, the trained networks acquire expert-level localization ability, so the output landmark positions are accurate and yield good postoperative results. The invention can therefore be applied in training systems for cleft lip surgeons or medical students, where it serves as effective teaching guidance and helps them correct deficiencies in their technique in time; alternatively, it can be applied in a surgeon's preoperative planning to provide effective surgical guidance.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of an embodiment of the present invention;
Fig. 2 is a structural schematic diagram of the first convolutional neural network in an embodiment of the present invention;
Fig. 3 is a structural schematic diagram of the second convolutional neural network in an embodiment of the present invention;
Fig. 4 is a method flow diagram of an embodiment of the present invention;
Fig. 5 is a comparison of the feature extraction performance of Xception and other networks in an embodiment of the present invention.
Specific embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described below in conjunction with the accompanying drawings.
To realize the core functions of the invention, we start from two aspects — medical technique and image processing technique — mainly using the following technologies:
(a) The West China cleft lip reconstructive surgery technique
The cleft lip department of West China Stomatological Hospital began researching this surgical method in 1998; after two generations of refinement it became the West China method of unilateral cleft lip repair, currently the only surgical design in the world that balances the length and width symmetry of the upper lip, representing a world-class standard. The landmark positions determined by this technique give good postoperative results. The present invention therefore uses this technique to annotate patient pictures manually and takes the annotated pictures as training data for convolutional neural networks (CNNs), so that the networks attain an expert-level localization standard.
(b) Deep-learning-based surgical landmark recognition
The system needs to solve the key problem in cleft lip surgery: automatically finding the positions of the surgical fixed points and incisions on the patient's face. To solve this problem effectively, the present invention proposes a deep learning network structure based on a multilayer cascade of CNNs, transfer-learning fine-tuning, and an SPP (Spatial Pyramid Pooling) layer, where the SPP layer enables the CNN to accept inputs of dynamic dimensions.
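The role of the SPP layer — turning a variable-size feature map into a fixed-length vector so a fully connected layer can follow — can be illustrated with a minimal NumPy sketch. This is an illustrative re-implementation, not the patent's code; the pyramid levels (1×1, 2×2, 4×4) are an assumption.

```python
import numpy as np

def spp_pool(feature_map, levels=(1, 2, 4)):
    """Spatial pyramid pooling: max-pool a (H, W, C) feature map over
    pyramid grids of 1x1, 2x2 and 4x4 bins and concatenate the results,
    giving a fixed-length vector regardless of H and W."""
    h, w, c = feature_map.shape
    out = []
    for n in levels:
        # Bin boundaries computed from the input size, so any H, W works.
        ys = np.linspace(0, h, n + 1, dtype=int)
        xs = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[ys[i]:ys[i + 1], xs[j]:xs[j + 1], :]
                out.append(cell.max(axis=(0, 1)))  # per-channel max
    return np.concatenate(out)  # length = (1 + 4 + 16) * C

# Two feature maps of different spatial sizes yield same-length vectors.
a = spp_pool(np.random.rand(13, 9, 8))
b = spp_pool(np.random.rand(32, 32, 8))
```

Because the bin grid scales with the input, pictures of any dimension produce the same feature length, which is exactly what lets the fully connected layer accept dynamic input sizes.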
For a single patient's face, our target is to output all landmark coordinates. The recognition is divided into two parts — contour detection and surgical landmark detection — and a cascaded convolutional neural network structure is designed on this basis.
As for why a multilayer cascade network is used rather than a single network: if regression training is done with only one network, the resulting feature point coordinates turn out to be insufficiently accurate. Using cascaded regression CNNs for staged feature point localization locates the feature points of the affected area faster and more accurately. A larger network would predict the feature points more accurately but would take longer; to strike a balance between speed and performance, smaller networks are used with the cascade idea: first a rough detection, then fine adjustment of the feature points.
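The coarse-to-fine cascade described above can be sketched as a two-stage pipeline. The stage predictors below are hypothetical stand-ins for the two trained CNNs; only the data flow (box → crop → landmarks → map back) reflects the text.

```python
import numpy as np

def crop_relative(img, box):
    """Crop an image by a box given in relative coordinates (x1, y1, x2, y2)."""
    h, w = img.shape[:2]
    x1, y1, x2, y2 = box
    return img[int(y1 * h):int(y2 * h), int(x1 * w):int(x2 * w)]

def cascade_predict(face_img, stage1, stage2):
    """Stage 1 predicts the nasolabial contour box on the whole face;
    stage 2 predicts the 12 landmarks inside the cropped contour box.
    Landmarks are mapped back into face-picture relative coordinates."""
    box = stage1(face_img)                     # (x1, y1, x2, y2), relative
    crop = crop_relative(face_img, box)
    pts = stage2(crop)                         # (12, 2), relative to crop
    x1, y1, x2, y2 = box
    scale = np.array([x2 - x1, y2 - y1])
    return pts * scale + np.array([x1, y1])    # back to face coordinates

# Dummy predictors standing in for the two trained CNNs:
stage1 = lambda img: (0.25, 0.5, 0.75, 1.0)   # lower half of the face
stage2 = lambda img: np.full((12, 2), 0.5)    # all points at crop centre
pts = cascade_predict(np.zeros((512, 512, 3)), stage1, stage2)
```

The staged design means the second network only ever sees the small, already-localized region, which is the "data dimensionality reduction" the description refers to.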
Experiments show that the multilayer cascaded CNN deep learning network of the embodiments — using transfer learning plus an SPP layer, combined with the reliable APIs provided by Microsoft Cognitive Services and Custom Vision to simplify the network, and aided by the tens of thousands of reliably annotated training samples provided by West China doctors — works well: the average mean square loss is on the order of 10^-5, reaching the level of practical application.
Fig. 1 is a structural schematic diagram of an embodiment of the present invention, comprising:
(1) A picture screening module for receiving original pictures containing faces, screening them, and obtaining standard-compliant cleft lip pictures. In this embodiment, the Custom Vision API in Microsoft Cognitive Services is used to detect and screen the original pictures; if the input picture is detected not to be a picture of a cleft lip patient, or the input patient picture is non-compliant or unclear, the system automatically reports an error and prompts the user to re-enter. Standard-compliant cleft lip pictures are passed on to the face detection module.
The Custom Vision API is pre-trained for classification and detection with tens of thousands of patient pictures; its accuracy is above 90% and it works well.
(2) A face detection module for extracting the face region from the standard cleft lip picture according to the preset target parameters to obtain a face picture.
In this embodiment, the preset target parameters include the size of the picture, the resolution of the picture, and the position of the face in the picture. Specifically, the face detection module detects the faces in the standard cleft lip picture, roughly estimates the position and size of each face, crops the picture according to the target parameters using the region's upper-left and lower-right corner coordinates, and further scales it to a unified resolution; the processed face picture is output to the contour frame determination module.
In this embodiment, the face detection module extracts the main part of the standard cleft lip picture, i.e. the face region, performing a first round of data dimensionality reduction. The module uses the Face API in Microsoft Cognitive Services to obtain face location information easily and perform the cropping, which saves the time and effort of designing and training a face detection network of our own.
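The crop-and-rescale step this module performs can be sketched as follows. This is a plain NumPy stand-in for the Face-API-driven crop, not the patent's code; the unified output resolution of 224×224 is an assumption, as the patent does not state one.

```python
import numpy as np

def crop_and_resize(img, top_left, bottom_right, out_size=(224, 224)):
    """Crop the detected face region (given by pixel corner coordinates)
    and rescale it to a unified resolution by nearest-neighbour sampling."""
    (x1, y1), (x2, y2) = top_left, bottom_right
    face = img[y1:y2, x1:x2]
    h, w = face.shape[:2]
    oh, ow = out_size
    rows = np.arange(oh) * h // oh   # source row index for each output row
    cols = np.arange(ow) * w // ow   # source column index for each output column
    return face[rows][:, cols]

photo = np.random.rand(600, 400, 3)
face = crop_and_resize(photo, (50, 100), (350, 500))
```

A production pipeline would use a proper interpolating resize; nearest-neighbour keeps the sketch dependency-free while showing the crop-then-unify flow.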
(3) A contour frame determination module for obtaining a nasolabial contour box from the face picture according to the pre-trained first convolutional neural network.
In this embodiment, the contour frame determination module is connected to the face detection module. For an input face picture, the first convolutional neural network detects the rectangular box containing all surgical fixed points (i.e. the contour of the cleft lip region), and the contour picture framed by the rectangle is passed to the next-stage CNN, the second convolutional neural network, which further extracts the main body and performs data dimensionality reduction, thereby ensuring the accuracy of the later surgical fixed points.
The first convolutional neural network uses a Google Xception network pre-trained on ImageNet. Using transfer-learning fine-tuning, the original network's parameter values are kept, its last several layers are removed, and a global average pooling layer and a fully connected layer are added. An SPP (Spatial Pyramid Pooling) layer is also added before the fully connected layer so that users can input pictures of dynamic dimensions, solving the problem that methods such as Resize and Crop work poorly during image preprocessing. Specifically, as shown in Fig. 2, the first convolutional neural network comprises, connected in sequence, an input layer, a Google Xception network, a global average pooling layer, an SPP layer, a fully connected layer, and an output layer. The input layer has three channels and can accept pictures of any dimension; the global average pooling layer prevents overfitting; the SPP layer lets the CNN accept dynamic input dimensions; the fully connected layer extracts the labeled features of the training set to complete the contour point regression. A Rescale layer is additionally provided between the input layer and the Google Xception network, to resize pictures in the input layer whose dimensions exceed a threshold. The fully connected layer of the first convolutional neural network has only 4 nodes (4 outputs), which respectively output the relative coordinate positions of the upper-left and lower-right corners of the cleft lip contour rectangle in the face picture; each corner point occupies 2 outputs, which form one pair of relative coordinate values (X, Y).
In this embodiment, the training data of the first convolutional neural network includes: face pictures of cleft lip patients and the coordinates of the nasolabial contour box in each face picture; the coordinates of the nasolabial contour box are its relative coordinates within the patient's face picture; the box is annotated manually in a preset annotation system, which can automatically output its coordinates. The first convolutional neural network is trained with these data so that it acquires expert-level contour box localization. Note that a relative coordinate is the relative position of a point within the picture, not an absolute pixel position. For example, in a 512×512 picture, a surgical fixed point at (256, 256) has a relative coordinate position of (0.5, 0.5).
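The relative-coordinate convention in the example above is a simple normalisation by picture width and height; a minimal sketch:

```python
def to_relative(x, y, width, height):
    """Convert an absolute pixel position to relative coordinates in [0, 1]."""
    return x / width, y / height

def to_absolute(rx, ry, width, height):
    """Map relative coordinates back to pixel positions."""
    return round(rx * width), round(ry * height)

# The patent's example: point (256, 256) in a 512x512 picture -> (0.5, 0.5).
rel = to_relative(256, 256, 512, 512)
```

Training on relative coordinates makes the regression targets independent of the picture's pixel dimensions, which fits the dynamic-input design of the SPP layer.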
As for why Xception is chosen as our base model: while Xception has fewer parameters than networks such as Inception and ResNet, its feature extraction performance is clearly higher, as shown schematically in Fig. 5.
(4) A landmark position determination module for obtaining surgical landmark positions from the nasolabial contour box according to the pre-trained second convolutional neural network.
The landmark position determination module is connected to the contour frame determination module; based on the output contour box, it detects all surgical landmark points within it and outputs the result. The second convolutional neural network has the same structure as the first; they differ only in the parameter settings of the fully connected layer, as shown in Fig. 3. The fully connected layer of the second network extracts the labeled features of the training set to complete the surgical landmark regression. It has 24 nodes, which respectively output the relative coordinate positions of all 12 cleft lip surgical landmark points within the cleft lip contour picture; every 2 node outputs form one pair of relative coordinate values indicating one landmark position. For example, the 24 outputs are x1, y1, x2, y2, x3, y3, ..., where each (xi, yi) determines one surgical landmark point.
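The pairing of the 24 network outputs into 12 landmark points can be sketched as follows (the output values here are illustrative placeholders, not real network outputs):

```python
import numpy as np

def outputs_to_landmarks(outputs):
    """Group the flat 24-value output (x1, y1, x2, y2, ...) of the second
    CNN into 12 (x, y) relative-coordinate landmark points."""
    outputs = np.asarray(outputs)
    assert outputs.shape == (24,), "second-stage FC layer has 24 nodes"
    return outputs.reshape(12, 2)

landmarks = outputs_to_landmarks(np.linspace(0.0, 1.0, 24))
```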
In this embodiment, the training data of the second convolutional neural network includes: nasolabial contour box pictures and the coordinates of the surgical landmark points in them; the coordinates of a surgical landmark point are its relative coordinates within the contour box picture; the landmarks are annotated manually in the annotation system, which can automatically output their coordinates. The second convolutional neural network is trained with these data so that it acquires expert-level landmark localization.
In this embodiment, the training data for the first and second convolutional neural networks come from annotated pictures provided by West China Hospital: 3D photos of cleft lip infants and 2D photos taken from three fixed angles, with more than 20,000 training samples in total. To guarantee the training effect, all training data were annotated in person by West China experts following the fixed-point and incision procedures of the West China method of unilateral cleft lip repair, making them accurate and reliable and laying the foundation for the localization performance of the deep cascaded CNN network. Note that this embodiment preferably trains the network with 2D pictures only: compared with training directly on 3D stereo data, 2D image convolutional networks have great advantages in maturity and speed. This is equivalent to applying a "dimensionality reduction" to the training data, while still achieving a training effect far exceeding that of 3D data.
With a training set of more than 20,000 samples, training this deep cascaded CNN network on an ordinary CPU would take a great deal of time (about several weeks). To improve efficiency, we fit the CNN networks on a GPU: each CNN network is implemented separately in the GPU version of the TensorFlow framework provided by Google to achieve GPU acceleration. The GPU resources came from a laboratory of the School of Computer Science, Sichuan University.
After the two convolutional neural networks are trained, the model must also be tested. The test data, provided by West China Hospital, consist of photos of about 1,000 cleft lip patients.
To measure the error between the algorithm's detection results and the actual landmark positions, the error rate (ErrorRate) of a picture is defined here as:

ErrorRate_i = (1/M) · Σ_{j=1}^{M} ‖p_ij − g_ij‖ / r_i

so the mean error is:

ErrorRate = (1/N) · Σ_{i=1}^{N} ErrorRate_i

where ErrorRate_i denotes the error rate of the i-th picture and ErrorRate denotes the average error rate over the entire data set; N is the number of pictures in the data set and M is the number of cleft lip surgical landmark points; p_ij is the detected coordinate of the j-th surgical landmark in the i-th face picture, g_ij is the actual coordinate of the j-th landmark in the i-th face picture, and r_i is the side length of the i-th face detection frame.
We also define the average failure rate (FailureRate), which counts the proportion of pictures in the entire data set whose error rate exceeds 10%:

FailureRate = (1/N) · Σ_{i=1}^{N} 1[ErrorRate_i > 10%]
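The two metrics can be computed as below — an illustrative NumPy implementation of the formulas above, run here on synthetic predictions:

```python
import numpy as np

def error_rates(pred, true, frame_sides):
    """pred, true: (N, M, 2) landmark coordinates; frame_sides: (N,) face
    detection frame side lengths. Returns per-picture ErrorRate_i: the mean
    Euclidean distance between predicted and true landmarks, normalised by
    the face frame side."""
    dists = np.linalg.norm(pred - true, axis=2)          # (N, M)
    return dists.mean(axis=1) / frame_sides              # (N,)

def mean_error_and_failure(pred, true, frame_sides, threshold=0.10):
    per_pic = error_rates(pred, true, frame_sides)
    return per_pic.mean(), (per_pic > threshold).mean()  # ErrorRate, FailureRate

# Synthetic check: every predicted point is off by 3 px in x, frame side 100,
# so every picture's error rate is 0.03 and no picture fails at the 10% level.
true = np.zeros((5, 12, 2))
pred = true + np.array([3.0, 0.0])
err, fail = mean_error_and_failure(pred, true, np.full(5, 100.0))
```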
Using these metrics during modeling, error analysis is carried out on the validation set; after the optimal network model is selected, our cascaded CNN model converges after training for several epochs, with a mean square error on the test set on the order of 10^-5 and an average failure rate below 1%, achieving a good prediction effect.
In the cleft lip surgery landmark point recognition system provided by embodiments of the present invention, an original picture of a cleft lip patient is obtained; after face detection is performed on it, a nasolabial contour box is located in the face picture, and multiple surgical landmark points are then computed from the contour box. Because both the contour-box localization and the landmark localization use pre-trained convolutional neural networks, the whole process is completed automatically by computer, without manual participation by a doctor. When the training data of the convolutional neural networks are annotated manually by experienced, outstanding cleft lip surgeons, the trained networks acquire expert-level localization ability, so the output landmark positions are accurate and yield good postoperative results. The invention can therefore be applied in training systems for cleft lip surgeons or medical students, where it serves as effective teaching guidance and helps them correct deficiencies in their technique in time; alternatively, it can be applied in a surgeon's preoperative planning to provide effective surgical guidance.
What is described above is merely a specific embodiment, but the protection scope of the present invention is not limited thereto; any changes or substitutions that readily occur to those skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention.

Claims (9)

1. A cleft lip surgery landmark point recognition system, characterized by comprising:
a picture screening module for receiving original pictures containing faces, screening them, and obtaining standard-compliant cleft lip pictures;
a face detection module for extracting the face region from the standard cleft lip picture according to preset target parameters to obtain a face picture;
a contour frame determination module for obtaining a nasolabial contour box from the face picture according to a pre-trained first convolutional neural network;
a landmark position determination module for obtaining surgical landmark positions from the nasolabial contour box according to a pre-trained second convolutional neural network.
2. harelip operation mark point recognition system according to claim 1, which is characterized in that the picture screening module is adopted With the CustomVisionAPI in Microsoft Cognitive Service, the original image is screened, specification is obtained Harelip picture.
3. harelip operation mark point recognition system according to claim 1, which is characterized in that the preset target component Including:Position of the size, the resolution ratio of picture, face of picture in picture.
4. harelip operation mark point recognition system according to claim 3, which is characterized in that the face detection module is adopted With the Face API in Microsoft Cognitive Service, face part is extracted from the harelip picture of the specification, obtains people Face picture.
5. harelip operation mark point recognition system according to claim 1, which is characterized in that the first convolution nerve net Network is identical as the structure of second convolutional neural networks;First convolutional neural networks include sequentially connected input layer, Google Xception network, global average pond layer, SPP layers, full articulamentum, output layer;The Google Xception Network is trained in advance on ImageNet.
6. harelip operation mark point recognition system according to claim 5, which is characterized in that the input layer with it is described E layers of Resca l are additionally provided between Google Xception network, for carrying out to the picture dimension in input layer being more than threshold value Adjustment.
7. harelip operation mark point recognition system according to claim 5, which is characterized in that the first convolution nerve net The training data of network includes:The face picture of cleft lip patients, the seat of muffle contouring frame in the face picture of the cleft lip patients Mark;The coordinate of the muffle contouring frame is opposite seat of the muffle contouring frame in the face picture of the cleft lip patients Mark;The muffle contouring frame is marked manually in preset labeling system, and the labeling system exports muffle contouring The coordinate of frame;
The training data of second convolutional neural networks includes:Muffle contouring block diagram piece, the muffle contouring block diagram piece The coordinate of middle operation index point;The coordinate of the operation index point is the operation index point in the muffle contouring block diagram piece In relative coordinate;The operation index point is marked manually in the labeling system, the labeling system output operation The coordinate of index point.
8. harelip operation mark point recognition system according to claim 7, which is characterized in that the muffle contouring frame is The coordinate of rectangle, muffle contouring frame includes:The coordinate of muffle contouring frame upper left angle point and the coordinate of bottom right angle point;The hand Art index point has 12.
9. The harelip operation mark point recognition system according to claim 7, characterized in that the first convolutional neural network and the second convolutional neural network are trained on a GPU.
CN201810570020.0A 2018-06-05 2018-06-05 A kind of harelip operation mark point recognition system Pending CN108846342A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810570020.0A CN108846342A (en) 2018-06-05 2018-06-05 A kind of harelip operation mark point recognition system

Publications (1)

Publication Number Publication Date
CN108846342A true CN108846342A (en) 2018-11-20

Family

ID=64210586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810570020.0A Pending CN108846342A (en) 2018-06-05 2018-06-05 A kind of harelip operation mark point recognition system

Country Status (1)

Country Link
CN (1) CN108846342A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103824049A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascaded neural network-based face key point detection method
CN103914699A (en) * 2014-04-17 2014-07-09 厦门美图网科技有限公司 Automatic lip gloss image enhancement method based on color space
CN106295533A (en) * 2016-08-01 2017-01-04 厦门美图之家科技有限公司 Optimization method, device and the camera terminal of a kind of image of autodyning
CN106934375A (en) * 2017-03-15 2017-07-07 中南林业科技大学 The facial expression recognizing method of distinguished point based movement locus description
US20170351905A1 (en) * 2016-06-06 2017-12-07 Samsung Electronics Co., Ltd. Learning model for salient facial region detection

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110929792A (en) * 2019-11-27 2020-03-27 深圳市商汤科技有限公司 Image annotation method and device, electronic equipment and storage medium
CN110929792B (en) * 2019-11-27 2024-05-24 深圳市商汤科技有限公司 Image labeling method, device, electronic equipment and storage medium
CN111933253A (en) * 2020-07-14 2020-11-13 北京邮电大学 Neural network-based marking point marking method and device for bone structure image
CN111933253B (en) * 2020-07-14 2022-09-23 北京邮电大学 Neural network-based marking point marking method and device for bone structure image
CN113241155A (en) * 2021-03-31 2021-08-10 正雅齿科科技(上海)有限公司 Method and system for acquiring mark points in lateral skull tablet
CN113241155B (en) * 2021-03-31 2024-02-09 正雅齿科科技(上海)有限公司 Method and system for acquiring mark points in skull side position slice

Similar Documents

Publication Publication Date Title
CN108428229B (en) Lung texture recognition method based on appearance and geometric features extracted by deep neural network
CN104866829B (en) A kind of across age face verification method based on feature learning
CN105956582B (en) A kind of face identification system based on three-dimensional data
CN104992445B (en) A kind of automatic division method of CT images pulmonary parenchyma
CN109800648A (en) Face datection recognition methods and device based on the correction of face key point
CN109034045A (en) A kind of leucocyte automatic identifying method based on convolutional neural networks
CN107506770A (en) Diabetic retinopathy eye-ground photography standard picture generation method
CN104408462B (en) Face feature point method for rapidly positioning
CN108846342A (en) A kind of harelip operation mark point recognition system
CN108648182B (en) Breast cancer nuclear magnetic resonance image tumor region segmentation method based on molecular subtype
CN110543906B (en) Automatic skin recognition method based on Mask R-CNN model
CN106203284B (en) Method for detecting human face based on convolutional neural networks and condition random field
CN110533639B (en) Key point positioning method and device
CN109636817A (en) A kind of Lung neoplasm dividing method based on two-dimensional convolution neural network
CN106778489A (en) The method for building up and equipment of face 3D characteristic identity information banks
CN108765409A (en) A kind of screening technique of the candidate nodule based on CT images
CN109829901A (en) A kind of fungal keratitis detection method and system based on convolutional neural networks
CN113241155B (en) Method and system for acquiring mark points in skull side position slice
CN115661509A (en) Surgical instrument identification and classification method based on three-dimensional point cloud ICP (inductively coupled plasma) registration algorithm
CN109378068A (en) A kind of method for automatically evaluating and system of Therapeutic Effects of Nasopharyngeal
CN112967285A (en) Chloasma image recognition method, system and device based on deep learning
CN110334566A (en) Fingerprint extraction method inside and outside a kind of OCT based on three-dimensional full convolutional neural networks
CN113782184A (en) Cerebral apoplexy auxiliary evaluation system based on facial key point and feature pre-learning
CN107578448A (en) Blending surfaces number recognition methods is included without demarcation curved surface based on CNN
CN106778491A (en) The acquisition methods and equipment of face 3D characteristic informations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Li Yang, Li Yizhou, Mei Hongxiang, Cheng Junhao, Ma Huangshui, Shou Yuke, Gong Na, Tang Xiumei, Zhang Kaiwen

Inventor before: Li Yang, Li Yizhou, Mei Hongxiang, Cheng Junhao, Ma Huangshui, Shou Yuke, Gong Na, Tang Xiumei

RJ01 Rejection of invention patent application after publication

Application publication date: 20181120