CN114743252A - Feature point screening method, device and storage medium for head model


Info

Publication number
CN114743252A
Authority
CN
China
Prior art keywords
feature point
head
feature
feature points
head model
Prior art date
Legal status
Granted
Application number
CN202210649441.9A
Other languages
Chinese (zh)
Other versions
CN114743252B (en)
Inventor
吴志新
刘志新
范正奇
陈弘
刘海
刘伟东
解明浩
郝烨
Current Assignee
China Automotive Technology and Research Center Co Ltd
CATARC Automotive Test Center Tianjin Co Ltd
Original Assignee
China Automotive Technology and Research Center Co Ltd
CATARC Automotive Test Center Tianjin Co Ltd
Priority date
Filing date
Publication date
Application filed by China Automotive Technology and Research Center Co Ltd and CATARC Automotive Test Center Tianjin Co Ltd
Priority to CN202210649441.9A
Publication of CN114743252A
Application granted
Publication of CN114743252B
Legal status: Active

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 90/00 - Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of measurement and discloses a feature point screening method, device and storage medium for a head model. The method comprises the following steps: preprocessing an original image of a preset head model to obtain a target image, where the preset head model is used for a car crash dummy; determining a head reference plane and reference lines based on the target image; combining the head reference plane and reference lines, dividing the facial area according to the "three stops and five parts" proportional relationship to obtain a plurality of facial sub-regions; determining initial feature points from the plurality of facial sub-regions according to preset selection conditions; and screening the initial feature points at least in combination with the facial parts missing from the preset head model to obtain a plurality of feature point sets, where different feature point sets are used to describe features of different parts of the head and face. The embodiment can quickly obtain the head feature points required for measurement, and provides guidance for designing a bionic dummy head model whose appearance and dimensions conform to Chinese characteristics.

Description

Feature point screening method, device and storage medium for head model

Technical Field

The present invention relates to the technical field of measurement, and in particular to a feature point screening method, device and storage medium for a head model.

Background Art

With the continuous advancement of technology, anthropometric data are being applied in more and more fields; for example, the head design of a car crash dummy is closely related to human head and face measurement data. The head and face have unique and diverse characteristics and are the most direct and critical external features for distinguishing different people. As an important bionic device for testing vehicle safety, the design of a crash dummy head requires accurate head and face dimension data as a design reference to describe the complex shape of the human head and face.

However, the crash dummies currently adopted in China's automobile safety regulations were all developed based on American anthropometric data, so the protective effectiveness of existing automobile safety designs for the Chinese body is debatable. It is therefore necessary to carry out measurement research on the dummy head and face, so that Chinese head and face measurement data can be applied to dummy head design and the dummy can match the head and face characteristics of the Chinese population.

In view of this, the present invention is proposed.

Summary of the Invention

In order to solve the above technical problems, the present invention provides a feature point screening method, device and storage medium for a head model, which realize feature point screening for a head model and provide a reference for the simplified design of a bionic head model.

An embodiment of the present invention provides a feature point screening method for a head model. The method comprises:

preprocessing an original image of a preset head model to obtain a target image, where the preset head model is used for a car crash dummy;

determining a head reference plane and reference lines based on the target image;

combining the head reference plane and reference lines, dividing the facial area according to the "three stops and five parts" proportional relationship to obtain a plurality of facial sub-regions;

determining initial feature points from the plurality of facial sub-regions according to preset selection conditions; and

screening the initial feature points at least in combination with the facial parts missing from the preset head model to obtain a plurality of feature point sets, where different feature point sets are used to describe features of different parts of the head and face.

An embodiment of the present invention provides an electronic device, the electronic device comprising:

a processor and a memory;

the processor is configured to execute the steps of the feature point screening method for a head model described in any one of the embodiments by invoking a program or instructions stored in the memory.

An embodiment of the present invention provides a computer-readable storage medium. The computer-readable storage medium stores a program or instructions, and the program or instructions cause a computer to execute the steps of the feature point screening method for a head model described in any one of the embodiments.

Embodiments of the present invention have the following technical effects:

The feature point screening method provided by the embodiments of the present invention can quickly and simply screen out representative feature points from the numerous feature points on the head of a crash dummy, avoiding excessive description; the screened feature points reflect the main head and face information and describe the facial features. The principle of the method is clear and easy to understand, the operation is relatively simple, no expensive equipment is required, the time period is short, and the engineering cost is low. The head feature points required for measurement can be obtained quickly, providing guidance for designing a bionic dummy head model whose appearance and dimensions conform to Chinese characteristics.

Brief Description of the Drawings

In order to illustrate the specific embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings required for describing the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.

FIG. 1 is a flowchart of a feature point screening method for a head model provided by an embodiment of the present invention;

FIG. 2 is a schematic diagram of a Hybrid III dummy head model provided by an embodiment of the present invention;

FIG. 3 is a schematic diagram of head reference planes provided by an embodiment of the present invention;

FIG. 4 is a schematic diagram of some head reference lines provided by an embodiment of the present invention;

FIG. 5 is a schematic diagram of some head reference lines provided by an embodiment of the present invention;

FIG. 6 is a schematic diagram of head and face region division provided by an embodiment of the present invention;

FIG. 7 is a schematic diagram of feature points of a dummy head model provided by an embodiment of the present invention;

FIG. 8 is a schematic diagram of feature points of a dummy head model provided by an embodiment of the present invention;

FIG. 9 is a schematic diagram of feature points of a dummy head model provided by an embodiment of the present invention;

FIG. 10 is a schematic diagram of feature points of a dummy head model provided by an embodiment of the present invention;

FIG. 11 is a schematic diagram of a local feature point search process provided by an embodiment of the present invention;

FIG. 12 is a schematic diagram of contour feature points of a dummy head provided by an embodiment of the present invention;

FIG. 13 is a schematic diagram of contour feature points of a dummy head provided by an embodiment of the present invention;

FIG. 14 is a schematic diagram of facial-feature points of a dummy head provided by an embodiment of the present invention;

FIG. 15 is a schematic diagram of a feature point screening process according to an embodiment of the present invention;

FIG. 16 is a schematic flowchart of feature point extraction provided by an embodiment of the present invention;

FIG. 17 is a schematic diagram of screening head contour features and main nose features according to an embodiment of the present invention;

FIG. 18 is a schematic diagram of the technical route of feature point screening provided by an embodiment of the present invention;

FIG. 19 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.

Detailed Description of Embodiments

In order to make the objectives, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention are described clearly and completely below. Obviously, the described embodiments are only some, but not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

The feature point screening method for a head model provided by the embodiments of the present invention is mainly intended to overcome the current problem in China that feature points of a simplified head model, such as a crash dummy head, cannot be screened quickly and easily. The embodiments of the present invention aim to provide a method that is logically sound, simple to operate and economical, meets the current needs of feature point screening for this type of head model, and provides a reference for the simplified design of a bionic head model.

FIG. 1 is a flowchart of a feature point screening method for a head model provided by an embodiment of the present invention. Referring to FIG. 1, the feature point screening method for a head model specifically includes:

S110. Preprocess an original image of a preset head model to obtain a target image, where the preset head model is used for a car crash dummy.

The present invention is further described in detail below using the Hybrid III dummy head model as an example, so that those skilled in the art can implement the invention with reference to the description. A schematic diagram of the Hybrid III dummy head model is shown in FIG. 2.

The devices used to acquire the original images of the head model are generally high-definition cameras, digital cameras, mobile phone cameras and the like, which leads to differences in the quality and clarity of the acquired original images; some original images are even accompanied by strong noise, and the color and brightness also differ. This makes the feature information of the images difficult to process, affects the extraction and accuracy of image features, and brings difficulties to the analysis. Image preprocessing is therefore meaningful in feature point screening: processing color photographs into images with consistent gray levels by grayscale conversion and other preprocessing means ensures high-quality and efficient recognition, and performing edge detection on the image reduces the time spent on subsequent feature point screening.

In general, preprocessing the original image of the preset head model to obtain the target image includes:

performing grayscale processing on the original image to obtain a grayscale image corresponding to the original image; and performing edge detection on the grayscale image to obtain the target image.

Specifically, a color in a color image is a combination of the three primary color components R, G and B; the different weights assigned to each primary color component give the three primary colors different brightness, thereby forming different colors. Therefore, when grayscale processing is performed on the original image, the gray value of each pixel is first calculated from the three primary color components of the pixel in the original image, the gray value is then assigned to each primary color component, and the grayscale image is finally obtained. The gray value of a pixel is calculated as shown in formula (1) below.

g(x, y) = 0.299·R(x, y) + 0.587·G(x, y) + 0.114·B(x, y)    (1)

where g(x, y) is the gray value obtained after converting a pixel of the original image, R(x, y) is the red component of that pixel in the original image, G(x, y) is its green component, and B(x, y) is its blue component.

An edge is the set of pixels whose surrounding gray levels change sharply; it is a basic feature of an image. Edges exist between targets, backgrounds and regions; they mark positions and are insensitive to gray-level changes. Performing edge detection on the grayscale image of the head model first and then extracting feature points reduces the amount of feature-matching computation and improves processing efficiency and accuracy. Optionally, the Sobel operator edge detection method is used to determine the edges of the grayscale image, thereby obtaining the target image.
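By way of illustration only, the following is a minimal sketch of this preprocessing step, assuming OpenCV and NumPy are available; the file name is a placeholder rather than a value from the patent.

```python
import cv2
import numpy as np

def preprocess_head_image(path: str) -> np.ndarray:
    """Convert an original head-model photo into an edge image (the 'target image')."""
    original = cv2.imread(path)                        # BGR color image
    gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)  # weighted-average grayscale, cf. formula (1)

    # Sobel operator edge detection: horizontal and vertical gradients.
    grad_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    grad_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.sqrt(grad_x ** 2 + grad_y ** 2)

    # Normalize to 8-bit so the edge map can be displayed or stored.
    return cv2.convertScaleAbs(magnitude)

target_image = preprocess_head_image("hybrid3_head.jpg")  # placeholder file name
```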

S120. Determine a head reference plane and reference lines based on the target image.

Optionally, determining the head reference plane and reference lines based on the target image includes:

determining, according to the geometric features and prior knowledge of the preset head model, first feature points at set positions of the preset head model based on the target image; and determining the head reference plane and reference lines according to the first feature points.

Optionally, according to intuitive geometric features and prior knowledge, the feature points at key positions may be preliminarily marked manually, or determined by feature matching based on preset features, and the head reference planes and reference lines are then determined on the dummy head.

The head reference planes include the sagittal plane, the coronal plane, the midsagittal plane, the horizontal plane and the Frankfurt plane. Referring to the schematic diagram of head reference planes shown in FIG. 3, the sagittal plane 310 is a longitudinal section along the sagittal axis, perpendicular to the horizontal plane and the coronal plane, which divides the head into left and right parts; the one in the middle is called the midsagittal plane and divides the head into left and right halves. The coronal plane 320 is a longitudinal section along the coronal axis, perpendicular to the horizontal plane and the sagittal plane, which divides the head into front and rear parts. The horizontal plane 330 is a transverse section perpendicular to both the coronal plane and the sagittal plane, which divides the head into upper and lower parts. The Frankfurt plane 340 is the plane passing through the left and right infraorbital points and parallel to the head manufacturing datum plane.

As shown in the schematic diagram of some head reference lines in FIG. 4, the reference lines include at least a sagittal axis 410, a coronal axis 420 and a vertical axis 430. The sagittal axis 410 runs from front to back and is perpendicular to the vertical axis and the coronal axis; the coronal axis 420 runs from left to right and is perpendicular to the sagittal axis and the vertical axis; the vertical axis 430 is perpendicular to the sagittal axis and the coronal axis and perpendicular to the horizontal plane.

As shown in FIG. 5, the head reference lines may further include: a midline 510, a line between the outer canthus points 520, a subnasal point line 530, vertical lines through the inner canthus points 540, a line between the mouth corner points (oral fissure line) 550, and a submental point line 560. The midline 510 is the straight line formed by the glabella point, the subnasal point and the submental point; the line between the outer canthus points 520 is the straight line formed by the outer corner of the left eye and the outer corner of the right eye; the subnasal point line 530 passes through the subnasal point, parallel to the outer-canthus line 520 and perpendicular to the midline 510; the inner canthus vertical lines 540 start from the inner corner of the left eye and the inner corner of the right eye respectively, parallel to the midline 510 and perpendicular to the outer-canthus line 520; the line between the mouth corner points (oral fissure line) 550 connects the mouth corner points on both sides, perpendicular to the midline 510 and parallel to the outer-canthus line 520; and the submental point line 560 passes through the submental point, perpendicular to the midline 510 and parallel to the line between the mouth corner points 550.

S130. Combining the head reference plane and reference lines, divide the facial area according to the "three stops and five parts" proportional relationship to obtain a plurality of facial sub-regions.

Dividing the face into regions helps to better determine the feature points that best express the facial features.

Referring to the schematic diagram of head and face region division shown in FIG. 6, the three stops are the upper stop region 610 between the mid-forehead point and the glabella point, the middle stop region 620 between the glabella point and the subnasal point, and the lower stop region 630 between the subnasal point and the submental point. The five parts are the fourth part 640 between the left lateral head point and the protruding point of the left eyebrow arch, the fifth part 650 between the right lateral head point and the protruding point of the right eyebrow arch, the second part 660 between the protruding point of the left eyebrow arch and the inner corner of the left eye, the third part 670 between the protruding point of the right eyebrow arch and the inner corner of the right eye, and the first part 680 between the inner corner of the left eye and the inner corner of the right eye.
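As an illustration of how these proportions can be turned into region boundaries, the sketch below splits a face bounding box into the three vertical "stops" and five horizontal "parts" under the nominal equal-division assumption; the bounding-box coordinates are hypothetical inputs, not values from the patent, and in practice the boundaries would be anchored to the landmarks listed above.

```python
from typing import List, Tuple

def three_stops_five_parts(top: float, bottom: float,
                           left: float, right: float
                           ) -> Tuple[List[Tuple[float, float]], List[Tuple[float, float]]]:
    """Divide a face bounding box into 3 vertical bands (stops) and 5 horizontal bands (parts)."""
    stop_height = (bottom - top) / 3.0
    part_width = (right - left) / 5.0
    stops = [(top + i * stop_height, top + (i + 1) * stop_height) for i in range(3)]
    parts = [(left + j * part_width, left + (j + 1) * part_width) for j in range(5)]
    return stops, parts

# Hypothetical face bounding box in pixel coordinates.
stops, parts = three_stops_five_parts(top=120, bottom=480, left=100, right=400)
```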

S140. Determine initial feature points from the plurality of facial sub-regions according to preset selection conditions.

Specifically, referring to the "Anthropometric Methods" standard and the simplified head model, feature points meeting the following requirements are determined as initial feature points: they are located at key positions, such as the facial features, and places with large curvature, such as the corners of the eyes, should also be calibrated; the feature points should be distributed as evenly as possible; the density of the feature point distribution should be appropriate, since too low a density affects the accuracy of the model and too high a density affects measurement efficiency; corresponding feature points in different images should have a one-to-one correspondence, i.e., corresponding feature points in different samples should mark the same image features; and the points should cover the main measurement points specified in "Anthropometric Methods" and the key measurement points of the main measurement items.

The initial feature points determined from the plurality of facial sub-regions according to the preset selection conditions include the following points:

head vertex P1, mid-forehead point P2, bregma point P3, hair margin point/hairline point P4, glabella point P5, supraglabella point P6, nasion point P7, nasal dorsum point/nasal notch point P8, mid-nose point P9, nose tip point P10, subnasal point P11, alar point P12, gingival point P13, upper lip point/upper lip midpoint P14, oral fissure point P15, lower lip point/lower lip midpoint P16, mouth corner point P17, supramental point P18, anterior chin point P19, submental point P20, pupil point P21, eye protrusion point P22, inner canthus point P23, outer canthus point P24, infraorbital point P25, supraorbital point P26, tragus point P27, superior ear attachment point P28, inferior ear attachment point P29, preauricular point P30, postauricular point P31, superior ear point P32, inferior ear point P33, ear tubercle point P34, frontotemporal point/temporal crest point P35, zygomatic point P36, mandibular angle point P37, mastoid point P38, lateral head point P39, posterior occipital point P40, posterior head vertex P41, and external occipital protuberance point P42.

S150. Screen the initial feature points at least in combination with the facial parts missing from the preset head model to obtain a plurality of feature point sets, where different feature point sets are used to describe features of different parts of the head and face.

Optionally, according to "Anthropometric Methods" and in combination with the simplified characteristics of the Hybrid III dummy head model, the feature points at the missing parts of the Hybrid III dummy head model are removed. The simplified characteristics of the Hybrid III dummy head model are listed in Table 1 below.

Table 1 Simplified characteristics of the Hybrid III dummy head model

[Table 1 is provided as an image in the original publication and is not reproduced here.]

Since the Hybrid III dummy head model lacks the coronal and sagittal sutures of the skull, the bregma point P3 is removed according to feature 6 shown in Table 1 above. Since the model lacks a forehead hairline, the hairline point P4 is removed according to feature 3. Since the model lacks the upper edges of the left and right eyebrows, the supraglabella point P6 is removed according to feature 2. Since the transition between the nasal dorsum and the forehead cannot be determined on the model, the nasal dorsum point P8 is removed according to feature 5. Since the model lacks gums, the gingival point P13 is removed according to feature 4. Since the model lacks eyes, the pupil point P21 and the eye protrusion point P22 are removed according to feature 2. Since the model lacks ears, the tragus point P27, the superior ear attachment point P28, the inferior ear attachment point P29, the preauricular point P30, the postauricular point P31, the superior ear point P32, the inferior ear point P33 and the ear tubercle point P34 are removed according to feature 1. Since the model lacks the temporal crest, the frontotemporal point/temporal crest point P35 is removed according to feature 6. Since the model lacks the mastoid, the mastoid point P38 is removed according to feature 6. Since the model lacks the external occipital protuberance, the posterior head vertex P41 and the external occipital protuberance point P42 are removed according to feature 6.

After the first round of screening of the initial feature points against the facial parts missing from the Hybrid III dummy head model and their characteristics as described above, a first feature point set is obtained. The feature points included in the first feature point set are the feature points C1 to C23 shown in Table 2 below.

Table 2 First feature point set of the dummy head model

[Table 2 is provided as an image in the original publication and is not reproduced here.]

According to the definitions of the feature points, the feature points of the dummy head model are roughly marked manually; see the schematic diagrams of the feature points of the dummy head model shown in FIG. 7, FIG. 8, FIG. 9 and FIG. 10.

Further, the feature points in the first feature point set are screened again according to the head contour of the dummy head model, and the feature points describing the head contour are collected into a second feature point set. Exemplarily, collecting the feature points describing the head contour into the second feature point set includes:

generating a scale space based on the Hessian matrix according to the pixel values of the feature points in the first feature point set, and determining local extremum points in the scale space; determining candidate feature points from the local extremum points based on a preset strategy; determining, by interpolation, the candidate feature points in the required continuous scale space; if the distance between a candidate feature point and the interpolation center point exceeds a threshold, removing that candidate feature point from the first feature point set; and collecting the remaining feature points of the first feature point set into the second feature point set.

Specifically, combined with the simplified dummy head model, the Speeded Up Robust Features (SURF) algorithm is used to continue the feature point screening; the algorithm is fast and robust. As can be seen from Table 2 above, the screening region only needs to cover the vicinity of the midline of the first part and the edges of the fourth and fifth parts.

First, a scale space is generated using the Hessian matrix. The scale of an image refers to the coarseness of the image content; the concept of scale is used to simulate how far the observer is from the object. The scale space of an image is the set of blurred images formed by passing the image through several different Gaussian kernels, and is used to simulate how far away and how blurred an object appears to the human eye. The SURF algorithm selects approximations of the eigenvalues of the Hessian matrix determinant to generate the scale space. Let the feature point function be f(x, y); that is, any feature point can be defined by a two-dimensional function f(x, y), where x and y are the spatial (plane) coordinates of the feature point, and the function value f(x, y) at any pair of spatial coordinates (x, y) is generally called the pixel value of that feature point. The Hessian matrix H is composed of f(x, y) and its partial derivatives. The Hessian matrix of a feature point is shown in formula (2):

H(f(x, y)) = [ ∂²f/∂x²   ∂²f/∂x∂y ]
             [ ∂²f/∂x∂y  ∂²f/∂y²  ]    (2)

Each feature point has its own H matrix, and the discriminant of the H matrix is shown in formula (3):

det(H) = (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)²    (3)

The result obtained from formula (3) reflects the eigenvalues of the Hessian matrix. All feature points are divided according to the sign of the result of formula (3), and it is thereby determined whether the corresponding feature point is a local extremum point in the scale space.

Specifically, when the Hessian matrix of a feature point is positive definite, the feature point is a local minimum point. The criterion for a positive definite matrix is that all leading principal minors are greater than 0, i.e., ∂²f/∂x² > 0 and det(H) > 0.

When the Hessian matrix of a feature point is negative definite, the feature point is a local maximum point. The criterion for a negative definite matrix is that the even-order leading principal minors are positive and the odd-order leading principal minors are negative, i.e., ∂²f/∂x² < 0 and det(H) > 0.

It can be understood that the purpose of generating the scale space is to extract local extremum points (i.e., maxima and minima) from all the feature points in the first feature point set, and the extracted local extremum points are marked as candidate points.

Next, the best extremum points are found based on the generated scale space. Using a 3×3 filter, the pixel value of each candidate point obtained from the Hessian determinant approximation is compared with the pixel values f(x, y) of the 26 candidate points in its neighborhood in the same scale layer and in the corresponding neighborhoods of the adjacent scale layers. These 26 candidate points are the other 8 candidate points in the neighborhood of the extracted key point (the key point being any one of the candidate points) within its own layer, plus 18 candidate points in the two layers adjacent to the layer of the key point (9 candidate points in the adjacent upper layer and 9 candidate points in the adjacent lower layer, as shown in FIG. 11). The candidate points whose pixel values are the maximum or minimum among these 27 candidate points are screened out and determined as candidate feature points.
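The 3x3x3 neighborhood comparison can be sketched as follows, treating the scale space as a stack of response maps; this is a simplified illustration under that assumption rather than the SURF implementation itself.

```python
import numpy as np

def is_scale_space_extremum(responses: np.ndarray, s: int, y: int, x: int) -> bool:
    """responses: array of shape (num_scales, H, W) holding Hessian-determinant responses.

    Returns True if responses[s, y, x] is strictly the maximum or minimum of the
    27 values in its 3x3x3 neighborhood (itself plus its 26 neighbors).
    """
    cube = responses[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]  # 3x3x3 block around the key point
    center = responses[s, y, x]
    is_max = center == cube.max() and np.count_nonzero(cube == center) == 1
    is_min = center == cube.min() and np.count_nonzero(cube == center) == 1
    return is_max or is_min
```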

Finally, the final feature points are determined from the candidate feature points. Interpolation is used to screen out the feature points actually required in the continuous scale space; the quadratic expression of the approximation D(x) is obtained from the Taylor series, as shown in formula (4):

D(x) = D + (∂D/∂x)ᵀ x + (1/2) xᵀ (∂²D/∂x²) x    (4)

where D(x) represents the approximation of the extremum of a candidate feature point in the continuous scale space, and x is the coordinate of the candidate feature point in the scale space. The coefficients of D(x) can be obtained from the differences of the extrema of the corresponding candidate feature points in two adjacent scale layers. Taking the derivative of D(x) and setting it to zero yields the extremum, as shown in formula (5).

x̂ = −(∂²D/∂x²)⁻¹ (∂D/∂x)    (5)

In the formula, x̂ represents the offset of the candidate feature point from the interpolation center point in the scale space. If the value of x̂ is too large (for example, greater than the threshold), the extremum point has deviated from the trajectory in the scale space where the candidate feature point is located, and the corresponding candidate feature point should be deleted. Through such multiple iterations, the precise location and scale of the feature points are screened, the feature points in the first feature point set are checked and adjusted, and the feature points that can describe the head contour, taken from the vicinity of the midline of the first part and the edges of the fourth and fifth parts, are stored in the second feature point set. Exemplarily, the feature points included in the second feature point set are shown in Table 3; see also the schematic diagrams of the contour feature points of the dummy head shown in FIG. 12 and FIG. 13.
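As a hedged illustration of formulas (4) and (5), the sketch below refines a candidate jointly over (x, y, scale) by finite differences and rejects it when the offset x̂ exceeds a threshold; the threshold value of 0.5 is an assumption (the text only says the distance must not exceed a threshold), and real SURF/SIFT implementations repeat this step iteratively.

```python
import numpy as np

OFFSET_THRESHOLD = 0.5  # assumed value; the patent only requires "exceeds a threshold"

def refine_candidate(responses: np.ndarray, s: int, y: int, x: int):
    """Sub-sample refinement of a candidate at (scale s, row y, col x).

    Builds the gradient dD and Hessian d2D of the response D over (x, y, s) by
    finite differences, solves x_hat = -(d2D)^-1 * dD (formula (5)), and discards
    the candidate if any component of x_hat exceeds OFFSET_THRESHOLD.
    """
    D = responses.astype(np.float64)
    dD = np.array([
        (D[s, y, x + 1] - D[s, y, x - 1]) / 2.0,
        (D[s, y + 1, x] - D[s, y - 1, x]) / 2.0,
        (D[s + 1, y, x] - D[s - 1, y, x]) / 2.0,
    ])
    dxx = D[s, y, x + 1] - 2 * D[s, y, x] + D[s, y, x - 1]
    dyy = D[s, y + 1, x] - 2 * D[s, y, x] + D[s, y - 1, x]
    dss = D[s + 1, y, x] - 2 * D[s, y, x] + D[s - 1, y, x]
    dxy = (D[s, y + 1, x + 1] - D[s, y + 1, x - 1] - D[s, y - 1, x + 1] + D[s, y - 1, x - 1]) / 4.0
    dxs = (D[s + 1, y, x + 1] - D[s + 1, y, x - 1] - D[s - 1, y, x + 1] + D[s - 1, y, x - 1]) / 4.0
    dys = (D[s + 1, y + 1, x] - D[s + 1, y - 1, x] - D[s - 1, y + 1, x] + D[s - 1, y - 1, x]) / 4.0
    d2D = np.array([[dxx, dxy, dxs],
                    [dxy, dyy, dys],
                    [dxs, dys, dss]])
    x_hat = -np.linalg.solve(d2D, dD)          # formula (5)
    if np.any(np.abs(x_hat) > OFFSET_THRESHOLD):
        return None                            # offset too large: drop this candidate
    return (x + x_hat[0], y + x_hat[1], s + x_hat[2])
```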

Table 3 Second feature point set

[Table 3 is provided as an image in the original publication and is not reproduced here.]

Further, the feature points in the first feature point set are screened again according to the facial features: the feature points describing eye features are collected into a third feature point set, the feature points describing nose features into a fourth feature point set, the feature points describing mouth features into a fifth feature point set, the feature points describing eyebrow features into a sixth feature point set, and the feature points describing ear features into a seventh feature point set.

Specifically, combined with the simplified head model, the SURF algorithm continues to be used for screening: the feature points that can describe the main features of the eyes are stored in the third feature point set from the second, third, fourth and fifth parts of the middle stop; the feature points that can describe the main features of the nose are stored in the fourth feature point set from the first, second and third parts of the middle stop; the feature points that can describe the main features of the mouth are stored in the fifth feature point set from the first, second and third parts of the lower stop; the feature points that can describe the main features of the eyebrows are stored in the sixth feature point set; and the feature points that can describe the main features of the ears are stored in the seventh feature point set. The feature points describing the main features are those that can determine the position and relative size of the facial features.

The feature points in the third feature point set are shown in Table 4 below, the feature points in the fourth feature point set are shown in Table 5 below, and the feature points in the fifth feature point set are shown in Table 6 below.

Table 4: Third feature point set

[Table 4 is provided as an image in the original publication and is not reproduced here.]

Table 5: Fourth feature point set

[Table 5 is provided as an image in the original publication and is not reproduced here.]

Table 6: Fifth feature point set

[Table 6 is provided as an image in the original publication and is not reproduced here.]

In combination with Table 4, Table 5 and Table 6 above, refer to the schematic diagram of the facial-feature points of the dummy head shown in FIG. 14.

In general, screening the initial feature points at least in combination with the facial parts missing from the preset head model to obtain a plurality of feature point sets includes:

removing, from the initial feature points, the feature points belonging to the facial parts missing from the preset head model, and collecting the remaining initial feature points into a first feature point set;

continuing to screen the feature points in the first feature point set with the Speeded Up Robust Features (SURF) algorithm according to the preset head model, so as to collect the feature points describing the head contour into a second feature point set, the initial feature points describing eye features into a third feature point set, the initial feature points describing nose features into a fourth feature point set, the initial feature points describing mouth features into a fifth feature point set, the initial feature points describing eyebrow features into a sixth feature point set, and the initial feature points describing ear features into a seventh feature point set. Optionally, reference may also be made to the feature point screening flow shown in FIG. 15, which specifically includes: selecting a head feature point; determining whether the feature point meets the measurement requirements, and removing it if not; if it does, further determining whether the feature point is located at a missing part, and removing it if so; if not, on the one hand storing the feature point in the first feature point set (feature point set 1 in FIG. 15), and on the other hand further determining whether the feature point can describe the head contour; if it can, storing it in the second feature point set (feature point set 2 in FIG. 15); if it cannot, continuing to determine whether the feature point can describe the facial features; if it cannot, removing it, and if it can, storing it in the third feature point set (feature point set 3 in FIG. 15).

In summary, the screened feature points are classified into different feature point sets. The first feature point set (C1 to C23) describes the geometric details of the facial features, their positions, and the head and face contour features. The second feature point set (L1 to L11) describes the main contour features of the head. The third feature point set (E1 to E4) describes the main features of the eyes. The fourth feature point set (N1 to N4) describes the main features of the nose. The fifth feature point set (M1 to M3) describes the main features of the mouth. The sixth feature point set (∅) indicates that the eyebrow features have all been simplified away, and the seventh feature point set (∅) indicates that the ear features have all been simplified away.

Further, the method also includes:

performing, according to a target requirement, a union operation on some of the plurality of feature point sets to obtain a target feature point set that meets the target requirement. Correspondingly, referring to the schematic flowchart of feature point extraction shown in FIG. 16, the flow specifically includes: inputting the requirements, selecting feature point sets, performing intersection and union processing, and extracting the feature points.

That is, according to the measurement requirements, the sets are merged to obtain feature point sets that can describe different head and face features. For example, to measure the head contour features and the main features of the nose, taking the union of the second feature point set and the fourth feature point set yields the screened head feature points, from which characteristic dimensions such as head length, nose length, head width, nose width, total head height, nose height and coronal head circumference can be measured. The head contour features and the main nose features are thereby obtained, verifying that the facial features of the bionic dummy head conform to the facial characteristics of the corresponding population. Correspondingly, refer to the schematic diagram of screening head contour features and main nose features shown in FIG. 17.
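A minimal sketch of this union step is shown below, using Python sets with the point labels quoted in the text above; the exact set contents are abridged placeholders for illustration.

```python
# Feature point sets as named in the text (contents abridged for illustration).
second_set = {f"L{i}" for i in range(1, 12)}   # head contour points L1..L11
fourth_set = {f"N{i}" for i in range(1, 5)}    # nose points N1..N4

# To measure head-contour features and main nose features, take the union of
# the second and fourth feature point sets (cf. FIG. 17).
target_set = second_set | fourth_set
print(sorted(target_set))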

This embodiment has the following technical effects:

Representative feature points can be quickly and simply screened out from the numerous feature points on the crash dummy head, avoiding excessive description; the screened feature points reflect the main head and face information and describe the facial features. The principle of the method is clear and easy to understand, the operation is relatively simple, no expensive equipment is required, the time period is short, and the engineering cost is low. The head feature points required for measurement can be obtained quickly, providing guidance for designing a bionic dummy head model whose appearance and dimensions conform to Chinese characteristics.

In general, referring to the schematic diagram of the technical route of feature point screening shown in FIG. 18, the method includes the following general steps: simplified head model; image preprocessing; determining the head reference planes; drawing the head reference lines; dividing the head and face regions; inputting the missing parts; screening the head feature points; extracting feature points as required; and obtaining the head feature points. Specifically, by studying the simplified head model, its head reference planes and reference lines are determined and its face division method is studied; the head feature points are screened according to "Anthropometric Methods" and the head and face characteristics of the dummy and classified into different feature point sets; the sets are then merged as required to obtain feature point sets that can describe the geometric features of the head, so that the main features of the head model are described with the simplest feature point sets. A complete head and face feature point screening method is finally obtained, providing a reference for the simplified design of a bionic head model.

Image preprocessing: the image is preprocessed by means of grayscale conversion, edge detection and the like to ensure high-quality and efficient recognition, reduce the time spent on subsequent feature point screening, and improve the accuracy of image feature extraction.

Determining the head reference planes: the sagittal plane, coronal plane, midsagittal plane, horizontal plane and Frankfurt plane are determined on the simplified head model.

Drawing the head reference lines: determined with a green-beam level, the reference lines are drawn: the sagittal axis, coronal axis and vertical axis. Midline: glabella point, subnasal point, submental point. Line between the outer canthus points: from one outer canthus point to the other. Subnasal point line: passes through the subnasal point, parallel to the outer-canthus line and perpendicular to the midline. Inner canthus vertical lines: start from the left and right inner canthus points respectively, parallel to the midline and perpendicular to the outer-canthus line. Line between the mouth corner points (oral fissure line): connects the mouth corner points on both sides, perpendicular to the midline and parallel to the outer-canthus line. Submental point line: passes through the submental point, perpendicular to the midline and parallel to the line between the mouth corner points.

Dividing the facial regions: the positions of the facial features of different people have commonalities and generally conform to the proportional relationship of the "three stops and five parts". "Three stops and five parts" is the general standard ratio of face length to face width: in its simple form, the head is divided vertically into three parts and horizontally into five parts. The "three stops" describe the length relationship of the face, dividing the distance from the chin to the hairline into three equal parts: from the chin to the base of the nose, from the base of the nose to the eyebrows, and from the eyebrows to the hairline at the centre of the forehead. The "five parts" describe the width relationship of the face, dividing the distance between the left and right hairlines into five equal parts: between the two eyes, the left eye, the right eye, and from the outer corner of each eye to the hairline on the same side, each counting as one "part". The "three stops and five parts" rule has considerable reference value for dividing the facial regions of a simplified head model, and this embodiment divides the facial regions using similar rules. Since the positional relationships of the facial features of some head models differ somewhat from the "three stops and five parts" (though generally not extremely), this is also taken into account in this embodiment, and the position ranges are adjusted relative to the "three stops and five parts" when locating the facial features of the head model. The face is divided into regions as follows to better describe the facial features.

Three stops: mid-forehead point - glabella point - subnasal point - submental point.

Five parts: lateral head point - eyebrow arch protruding point - inner canthus point - inner canthus point - eyebrow arch protruding point - lateral head point.

Screening the head feature points: at present, the measurement points of the human head and face in China are determined mainly from the 42 feature points described in "Anthropometric Methods" and the 2 feature points described in GB/T 38131-2019, "Acquisition method of anthropometric datum points for garments", from which, through simplified analysis, feature points are extracted and the related measurement items are determined. The feature points covered by GB/T 38131-2019 are too few to describe the geometric structure and contour dimensions of the crash dummy head, while "Anthropometric Methods" covers too many feature points, with complicated operation and cumbersome calculation; moreover, the crash dummy head and face lack some features and do not need to cover all feature points, so the feature points should be screened.

(1) Screening according to the measurement requirements. The screening of feature points follows these principles: select key feature points, such as those at the facial features, and also place points where the curvature is large, such as the corners of the eyes; distribute the key feature points as evenly as possible; keep the density of feature points appropriate, since too low a density reduces the accuracy of the model and too high a density reduces measurement efficiency; ensure a one-to-one correspondence between the corresponding feature points in different images, that is, corresponding feature points in different samples should mark the same image feature; and cover the main measurement points specified in "Anthropometric Methods" as well as the key points of the main measurement items.
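To illustrate only the "appropriate density" principle above, the following sketch keeps a candidate point when it lies at least min_dist pixels from every point already kept. The greedy strategy, the function name, and the parameter values are assumptions for illustration, not the procedure prescribed by this embodiment.

```python
import numpy as np

def enforce_spacing(points: np.ndarray, min_dist: float) -> np.ndarray:
    """Greedy density control: keep a candidate only if it is at least min_dist
    away from every previously kept point, so the sampling is neither too sparse
    (hurting model accuracy) nor too dense (hurting measurement efficiency)."""
    kept: list = []
    for p in points:
        if all(np.linalg.norm(p - q) >= min_dist for q in kept):
            kept.append(p)
    return np.asarray(kept)

# Example: candidate points in (x, y) pixel coordinates, thinned to a 10-pixel spacing.
candidates = np.array([[10, 10], [12, 11], [40, 40], [41, 42], [80, 15]], dtype=float)
thinned = enforce_spacing(candidates, min_dist=10.0)   # keeps 3 of the 5 points
```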

(2) Screening according to the missing parts. According to "Anthropometric Methods" and the simplified head model, the head measurement feature points are screened: the feature points located at each missing part are removed, and the remaining feature points are stored in the first feature point set.

(3) Screening according to the head contour. Continuing the screening with the simplified head model, the feature points that can describe the head contour are stored in the second feature point set.

(4) Screening according to the facial features. Continuing the screening with the simplified head model, the feature points that can describe the main features of the eyes are stored in the third feature point set, those describing the main features of the nose in the fourth feature point set, those describing the main features of the mouth in the fifth feature point set, those describing the main features of the eyebrows in the sixth feature point set, and those describing the main features of the ears in the seventh feature point set.

Finally, according to the measurement requirements, the sets are merged to obtain feature point sets that can describe the different head and face features.
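Steps (2) to (4) and the final merge can be summarized in a minimal sketch, under the assumption that each candidate point already carries a label for the facial part it lies on; the dictionary layout, part names, and function names below are illustrative and not the data structures of this embodiment.

```python
from typing import Dict, Iterable, Set

def screen_feature_points(point_parts: Dict[str, str],
                          missing_parts: Set[str]) -> Dict[str, Set[str]]:
    """Build the first through seventh feature point sets: drop points on parts
    missing from the simplified head model, then bucket the rest by
    contour / eye / nose / mouth / eyebrow / ear."""
    first = {pid for pid, part in point_parts.items() if part not in missing_parts}
    buckets: Dict[str, Set[str]] = {
        "contour": set(), "eye": set(), "nose": set(),
        "mouth": set(), "eyebrow": set(), "ear": set(),
    }
    for pid in first:
        part = point_parts[pid]
        if part in buckets:
            buckets[part].add(pid)
    return {"first": first, **buckets}

def merge_for_measurement(sets: Dict[str, Set[str]], wanted: Iterable[str]) -> Set[str]:
    """Union the requested sets to obtain a target feature point set."""
    merged: Set[str] = set()
    for name in wanted:
        merged |= sets.get(name, set())
    return merged

# Example: the dummy head model lacks ears, so ear points are removed before bucketing.
sets = screen_feature_points(
    {"p1": "eye", "p2": "nose", "p3": "ear", "p4": "contour"},
    missing_parts={"ear"},
)
eye_and_nose = merge_for_measurement(sets, ["eye", "nose"])   # {"p1", "p2"}
```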

The feature point screening method provided by the embodiments of the present disclosure divides the region by the three stops and five parts, and uses the simplification characteristics of the model, the head contour features, and the geometric features of the facial features as judgment criteria. For the new requirement of screening feature points on a simplified head model, a new method is applied to extract head and face information, with lower complexity and smaller engineering cost than the prior art. Various kinds of feature points can be obtained by this screening, and through these feature points the head and face characteristics can be transferred to a bionic dummy head model, providing guidance for designing a bionic dummy head model whose appearance and dimensions conform to Chinese characteristics.

FIG. 19 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in FIG. 19, the electronic device 400 includes one or more processors 401 and a memory 402.

The processor 401 may be a central processing unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 400 to perform desired functions.

The memory 402 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 401 may execute the program instructions to implement the feature point screening method for a head model of any embodiment of the present invention described above and/or other desired functions. Various contents such as initial extrinsic parameters and thresholds may also be stored in the computer-readable storage medium.

In one example, the electronic device 400 may further include an input device 403 and an output device 404, which are interconnected by a bus system and/or another form of connection mechanism (not shown). The input device 403 may include, for example, a keyboard and a mouse. The output device 404 may output various information to the outside, including warning prompt information and braking force, and may include, for example, a display, a speaker, a printer, and a communication network together with the remote output devices connected to it.

Of course, for simplicity, FIG. 19 shows only some of the components of the electronic device 400 that are relevant to the present invention, and components such as buses and input/output interfaces are omitted. In addition, the electronic device 400 may include any other appropriate components according to the specific application.

In addition to the above methods and devices, an embodiment of the present invention may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps of the feature point screening method for a head model provided by any embodiment of the present invention.

The computer program product may carry program code, written in any combination of one or more programming languages, for performing the operations of the embodiments of the present invention; these languages include object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.

In addition, an embodiment of the present invention may also be a computer-readable storage medium on which computer program instructions are stored; when executed by a processor, the computer program instructions cause the processor to perform the steps of the feature point screening method for a head model provided by any embodiment of the present invention.

The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

Claims (9)

1. A feature point screening method for a head model, comprising: preprocessing an original image of a preset head model to obtain a target image, wherein the preset head model is used for an automobile crash dummy; determining a head reference plane and a reference line based on the target image; combining the head reference plane and the reference line, dividing the facial region according to the "three stops and five parts" proportional relationship to obtain a plurality of facial sub-regions; determining initial feature points from the plurality of facial sub-regions according to preset selection conditions; and screening the initial feature points at least in combination with the facial parts missing from the preset head model to obtain a plurality of feature point sets, wherein different feature point sets are used to describe features of different parts of the head and face.

2. The method according to claim 1, wherein preprocessing the original image of the preset head model to obtain the target image comprises: performing grayscale processing on the original image to obtain a grayscale image corresponding to the original image; and performing edge detection on the grayscale image to obtain the target image.

3. The method according to claim 1, wherein determining the head reference plane and the reference line based on the target image comprises: determining, based on the target image, first feature points at set positions of the preset head model according to the geometric features of the preset head model and prior knowledge; and determining the head reference plane and the reference line according to the first feature points.

4. The method according to claim 3, wherein the head reference planes include the sagittal plane, the coronal plane, the midsagittal plane, the horizontal plane and the Frankfurt plane, and the reference lines include at least the sagittal axis, the coronal axis and the vertical axis.

5. The method according to claim 1, wherein screening the initial feature points at least in combination with the facial parts missing from the preset head model to obtain a plurality of feature point sets comprises: removing, from the initial feature points, the feature points belonging to the facial parts missing from the preset head model, and collecting the remaining initial feature points into a first feature point set; and, according to the preset head model, continuing to screen the feature points in the first feature point set by the Speeded-Up Robust Features (SURF) algorithm, so as to collect the feature points describing the head contour into a second feature point set, the initial feature points describing eye features into a third feature point set, the initial feature points describing nose features into a fourth feature point set, the initial feature points describing mouth features into a fifth feature point set, the initial feature points describing eyebrow features into a sixth feature point set, and the initial feature points describing ear features into a seventh feature point set.

6. The method according to claim 5, wherein collecting the feature points describing the head contour into the second feature point set comprises: generating a scale space based on the Hessian matrix according to the pixel values of the feature points in the first feature point set, and determining local extremum points in the scale space; determining candidate feature points from the local extremum points based on a preset strategy; determining, from the candidate feature points, the candidate feature points in the required continuous scale space by interpolation; removing a candidate feature point from the first feature point set if the distance between the candidate feature point and the interpolation center point exceeds a threshold; and collecting the feature points remaining in the first feature point set into the second feature point set.

7. The method according to any one of claims 1 to 6, further comprising: performing a union operation on some of the plurality of feature point sets according to a target requirement to obtain a target feature point set meeting the target requirement.

8. An electronic device, comprising a processor and a memory, wherein the processor is configured to execute the steps of the feature point screening method for a head model according to any one of claims 1 to 7 by invoking programs or instructions stored in the memory.

9. A computer-readable storage medium storing programs or instructions that cause a computer to execute the steps of the feature point screening method for a head model according to any one of claims 1 to 7.
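As a rough illustration of the Hessian/SURF screening recited in claims 5 and 6 (and not a statement of the claimed method), the sketch below detects Hessian-based scale-space keypoints with SURF and keeps only the first-set points lying near a detected keypoint, as a stand-in for the interpolation-distance check. It assumes an OpenCV build where the non-free xfeatures2d module is available (opencv-contrib-python compiled with OPENCV_ENABLE_NONFREE); the threshold values and function names are illustrative.

```python
import cv2
import numpy as np

def surf_contour_screen(gray: np.ndarray, first_set: np.ndarray,
                        hessian_threshold: float = 400.0,
                        dist_threshold: float = 3.0) -> np.ndarray:
    """Return the points of first_set (N x 2, x/y pixel coordinates) retained
    for the second feature point set: a point is kept only if it lies within
    dist_threshold pixels of a SURF keypoint (a Hessian scale-space extremum)."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    keypoints = surf.detect(gray, None)
    if not keypoints:
        return np.empty((0, 2))
    kp_xy = np.array([kp.pt for kp in keypoints])   # detected extrema as (x, y)
    keep = [p for p in first_set
            if np.min(np.linalg.norm(kp_xy - p, axis=1)) <= dist_threshold]
    return np.asarray(keep)
```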
CN202210649441.9A 2022-06-10 2022-06-10 Feature point screening method, device and storage medium for head model Active CN114743252B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210649441.9A CN114743252B (en) 2022-06-10 2022-06-10 Feature point screening method, device and storage medium for head model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210649441.9A CN114743252B (en) 2022-06-10 2022-06-10 Feature point screening method, device and storage medium for head model

Publications (2)

Publication Number Publication Date
CN114743252A true CN114743252A (en) 2022-07-12
CN114743252B CN114743252B (en) 2022-09-16

Family

ID=82287171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210649441.9A Active CN114743252B (en) 2022-06-10 2022-06-10 Feature point screening method, device and storage medium for head model

Country Status (1)

Country Link
CN (1) CN114743252B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877055A (en) * 2009-12-07 2010-11-03 北京中星微电子有限公司 Method and device for positioning key feature point
US20140110497A1 (en) * 2012-10-23 2014-04-24 American Covers, Inc. Air Freshener with Decorative Insert
US20150039552A1 (en) * 2013-08-05 2015-02-05 Applied Materials, Inc. Method and apparatus for optimizing profit in predictive systems
CN103984920A (en) * 2014-04-25 2014-08-13 同济大学 Three-dimensional face identification method based on sparse representation and multiple feature points
CN110826372A (en) * 2018-08-10 2020-02-21 浙江宇视科技有限公司 Method and device for detecting human face characteristic points
CN113111690A (en) * 2020-01-13 2021-07-13 北京灵汐科技有限公司 Facial expression analysis method and system and satisfaction analysis method and system
CN111402391A (en) * 2020-03-13 2020-07-10 深圳看到科技有限公司 User face image display method, display device and corresponding storage medium
CN112308043A (en) * 2020-11-26 2021-02-02 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus, and computer-readable storage medium
CN113807180A (en) * 2021-08-16 2021-12-17 常州大学 Face recognition method based on LBPH and feature points
CN114550278A (en) * 2022-04-28 2022-05-27 中汽研汽车检验中心(天津)有限公司 Method, equipment and storage medium for determining head and face feature point positions of collision dummy

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LU ZONGJIE: "Research on Head Pose Detection and Motion Tracking Control Based on Facial Feature Points", China Master's Theses Full-text Database, Medicine and Health Sciences *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117740408A (en) * 2024-02-19 2024-03-22 中国汽车技术研究中心有限公司 A car collision dummy facial pressure detection device and its design method

Also Published As

Publication number Publication date
CN114743252B (en) 2022-09-16

Similar Documents

Publication Publication Date Title
US11043011B2 (en) Image processing method, apparatus, terminal, and storage medium for fusing images of two objects
CN108229278B (en) Face image processing method and device and electronic equipment
CN103914699B (en) A kind of method of the image enhaucament of the automatic lip gloss based on color space
TWI396143B (en) Method and system for picture segmentation and method for image matting of a picture
CN105005765B (en) A kind of facial expression recognizing method based on Gabor wavelet and gray level co-occurrence matrixes
CN110287790B (en) A Learning State Hybrid Analysis Method for Static Multiplayer Scenarios
CN109376582A (en) An Interactive Face Cartoon Method Based on Generative Adversarial Networks
CN107358648A (en) Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image
CN108460345A (en) A kind of facial fatigue detection method based on face key point location
CN101779218A (en) Makeup simulation system, makeup simulation apparatus, makeup simulation method, and makeup simulation program
CN105046219A (en) Face identification system
CN109147024A (en) Expression replacing options and device based on threedimensional model
CN109102559A (en) three-dimensional model processing method and device
WO2022252737A1 (en) Image processing method and apparatus, processor, electronic device, and storage medium
CN111178271B (en) Face image feature enhancement method, face recognition method and electronic equipment
CN114743252B (en) Feature point screening method, device and storage medium for head model
CN109640787A (en) Measure the System and method for of interpupillary distance
CN114550278A (en) Method, equipment and storage medium for determining head and face feature point positions of collision dummy
CN110070057A (en) Interpupillary distance measurement method, device, terminal device and storage medium
CN109447031A (en) Image processing method, device, equipment and storage medium
JP2007175384A (en) Face classification method for cheek makeup, face classifier, map for determining classification, face classification program and recording medium having recorded program
CN113705466B (en) Face five sense organ shielding detection method for shielding scene, especially under high imitation shielding
WO2023103145A1 (en) Head pose truth value acquisition method, apparatus and device, and storage medium
JP5095182B2 (en) Face classification device, face classification program, and recording medium on which the program is recorded
CN116580445B (en) Large language model face feature analysis method, system and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant