CN114743252A - Feature point screening method, device and storage medium for head model - Google Patents
Feature point screening method, device and storage medium for head model
- Publication number
- CN114743252A (application CN202210649441.9A)
- Authority
- CN
- China
- Prior art keywords
- head
- feature
- feature point
- points
- head model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T90/00—Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
Abstract
The invention relates to the technical field of measurement, and discloses a feature point screening method, device and storage medium for a head model. The method comprises the following steps: preprocessing an original image of a preset head model to obtain a target image, wherein the preset head model is used for an automobile collision dummy; determining head reference planes and reference lines based on the target image; combining the head reference planes and reference lines, dividing the face region according to the "three stops, five parts" proportional relation to obtain a plurality of face sub-regions; determining initial feature points from the plurality of face sub-regions according to preset selection conditions; and screening the initial feature points at least in combination with the facial parts missing from the preset head model to obtain a plurality of feature point sets, wherein different feature point sets describe features of different parts of the head and face. The embodiment can quickly obtain the head feature points required for measurement, and provides guidance for designing a biomimetic dummy head model whose face and dimensions conform to Chinese characteristics.
Description
Technical Field
The present invention relates to the field of measurement technologies, and in particular, to a method, an apparatus, and a storage medium for feature point screening of a head model.
Background
With continuous technological progress, anthropometric data are applied in more and more fields; for example, the head design of an automobile collision dummy is closely tied to human head and face measurement data. Facial features are unique and diverse, and are the most direct and critical external features for distinguishing different people. As a biomimetic device important for vehicle safety testing, the design of a collision dummy head requires accurate head and face dimension data as a design reference in order to describe the complex morphology of the human head and face.
However, the collision dummies adopted by current national automobile safety regulations were developed from measured data of American bodies, so the protection that existing automobile safety designs afford the Chinese body is questionable. Measurement research on the dummy head and face is therefore needed, so that Chinese head and face measurement data can be applied to dummy head and face design and the result conforms to the head and face characteristics of the Chinese population.
In view of the above, the present invention is particularly proposed.
Disclosure of Invention
In order to solve the above technical problems, the present invention provides a feature point screening method, device and storage medium for a head model, which realize feature point screening for the head model and provide a reference for the simplified design of a biomimetic head model.
The embodiment of the invention provides a feature point screening method for a head model, which comprises the following steps:
preprocessing an original image of a preset head model to obtain a target image, wherein the preset head model is used for an automobile collision dummy;
determining a head reference plane and a reference line based on the target image;
combining the head reference planes and reference lines, and dividing the face region according to the "three stops, five parts" proportional relation to obtain a plurality of face sub-regions;
determining initial feature points from the plurality of face sub-regions according to a preset selection condition;
and screening the initial feature points at least in combination with the facial parts missing from the preset head model to obtain a plurality of feature point sets, wherein different feature point sets are used for describing features of different parts of the head and the face.
An embodiment of the present invention provides an electronic device, including:
a processor and a memory;
the processor is used for executing the steps of the feature point screening method for the head model according to any embodiment by calling the program or the instruction stored in the memory.
Embodiments of the present invention provide a computer-readable storage medium storing a program or instructions for causing a computer to execute the steps of the feature point screening method for a head model according to any one of the embodiments.
The embodiment of the invention has the following technical effects:
the characteristic point screening method provided by the embodiment of the invention can quickly and simply screen representative characteristic points from a plurality of characteristic points of the head of the collision dummy, avoid excessive description, and the screened characteristic points can reflect main head and face information and describe face features. The method has the advantages of clear and easily understood principle, relatively simple operation method, no need of expensive equipment, short time period and low engineering cost, can quickly obtain the head characteristic points required by measurement, and provides guidance for designing the bionic dummy head model with the face and the size conforming to the Chinese characteristics.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a feature point screening method for a head model according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a Hybrid III dummy head model according to an embodiment of the present invention;
FIG. 3 is a schematic view of a head datum provided by an embodiment of the present invention;
FIG. 4 is a schematic view of a portion of a head reference line provided by an embodiment of the present invention;
FIG. 5 is a schematic view of a portion of a head reference line provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of a head-face region partition provided by an embodiment of the invention;
FIG. 7 is a schematic diagram of feature points of a dummy head model according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of feature points of a dummy head model according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of feature points of a dummy head model according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of feature points of a dummy head model according to an embodiment of the present invention;
FIG. 11 is a diagram illustrating a local feature point search process according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a dummy head contour feature point according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of a dummy head contour feature point according to an embodiment of the present invention;
FIG. 14 is a schematic view of the feature points of five sense organs of a dummy head according to an embodiment of the present invention;
FIG. 15 is a schematic diagram of a feature point screening process according to an embodiment of the present invention;
fig. 16 is a schematic flow chart of feature point extraction according to an embodiment of the present invention;
FIG. 17 is a schematic view of a head contour feature and nose key feature screening provided by an embodiment of the present invention;
fig. 18 is a schematic diagram of a technical route of feature point screening according to an embodiment of the present invention;
fig. 19 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a feature point screening method for a head model, which is mainly intended to overcome the current inability to quickly and simply screen feature points for a simplified collision dummy head model.
Fig. 1 is a flowchart of a feature point screening method for a head model according to an embodiment of the present invention. Referring to fig. 1, the method for screening feature points of a head model specifically includes:
s110, preprocessing an original image of a preset head model to obtain a target image, wherein the preset head model is used for an automobile collision dummy.
The present invention is described in further detail using a Hybrid III dummy head model, so that those skilled in the art can practice the invention with reference to the description. A schematic diagram of the Hybrid III dummy head model is shown in FIG. 2.
The acquisition equipment for the original image of the head model is generally a high-definition camera, a digital camera, a mobile phone camera, or the like, so the quality and definition of the acquired original images vary; some original images are even accompanied by strong noise, and their color brightness differs, which makes the feature information of the image difficult to process, affects the extraction and accuracy of image features, and complicates analysis. Image preprocessing is therefore valuable for feature point screening: through preprocessing means such as graying, color photographs are processed into images of consistent gray level, guaranteeing high-quality and efficient recognition, and edge detection on the images reduces the time spent on subsequent feature point screening.
In summary, preprocessing the original image of the preset head model to obtain the target image includes:
carrying out graying processing on the original image to obtain a grayscale image corresponding to the original image; and carrying out edge detection on the gray level image to obtain the target image.
Specifically, a color in a color image is formed by combining the three primary color components R, G and B; different colors arise because each component is assigned a different weight and therefore contributes a different luminance. Accordingly, when graying the original image, the gray value of each pixel is calculated from the pixel's three primary color components, and that gray value is then assigned to each component, finally yielding the grayscale image. The gray value of a pixel is calculated as shown in formula (1), using the conventional luminance weights:

$$g(x,y) = 0.299\,R(x,y) + 0.587\,G(x,y) + 0.114\,B(x,y) \tag{1}$$

where $g(x,y)$ is the gray value obtained after conversion of a pixel in the original image, $R(x,y)$ is the red component of the pixel in the original image, $G(x,y)$ is its green component, and $B(x,y)$ is its blue component.
An edge is the set of pixels whose surrounding pixels change sharply in gray level; edges are an essential feature of an image. Edges exist between the target, the background, and regions, mark their positions, and are insensitive to gray-level changes. Performing edge detection on the grayscale image of the head model before extracting feature points reduces the amount of computation in feature matching and improves processing efficiency and accuracy. Optionally, the edges of the grayscale image are determined using the Sobel operator edge detection method to obtain the target image.
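A minimal sketch of this preprocessing stage follows, assuming OpenCV is available; the file name and Sobel kernel size are illustrative assumptions, not values given in the patent.

```python
import cv2
import numpy as np

def preprocess(path: str) -> np.ndarray:
    """Graying per formula (1) followed by Sobel edge detection."""
    original = cv2.imread(path)                        # BGR color original image
    gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)  # weighted-average graying
    # Sobel gradients along x and y, combined into an edge-magnitude image.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    edges = cv2.magnitude(gx, gy)
    return cv2.convertScaleAbs(edges)                  # 8-bit target image

target = preprocess("hybrid3_head.png")                # hypothetical input file
```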
And S120, determining a head reference plane and a reference line based on the target image.
Optionally, the determining a head reference plane and a reference line based on the target image includes:
determining a first feature point of a preset head model setting position based on the target image according to the geometrical features and the prior knowledge of the preset head model; and determining the head reference plane and the reference line according to the first characteristic point.
Optionally, based on intuitive geometric features and a priori knowledge, the feature points at key positions can be preliminarily marked manually, or determined by feature matching against preset features; the head reference planes and reference lines are then determined on the dummy head.
The head reference planes comprise the sagittal plane, the coronal plane, the median sagittal plane, the horizontal plane, and the Frankfurt plane. Referring to the schematic diagram of head reference planes shown in FIG. 3, the sagittal plane 310 is a longitudinal section along the sagittal axis, perpendicular to the horizontal and coronal planes, that divides the head into left and right portions; the central one of these sections is called the median sagittal plane and divides the head into equal left and right halves. The coronal plane 320 is a longitudinal section along the coronal axis, perpendicular to the horizontal and sagittal planes, that divides the head into anterior and posterior portions. The horizontal plane 330 is a transverse plane perpendicular to both the coronal and sagittal planes that divides the head into upper and lower portions. The Frankfurt plane 340 is a plane passing through the left and right infraorbital points and parallel to the horizontal reference plane.
FIG. 4 is a schematic diagram of some of the head reference lines. The reference lines include at least a sagittal axis 410, a coronal axis 420, and a vertical axis 430, wherein the sagittal axis 410 runs from anterior to posterior, perpendicular to the vertical axis and the coronal axis; the coronal axis 420 runs from left to right, perpendicular to the sagittal axis and the vertical axis; and the vertical axis 430 is perpendicular to the sagittal and coronal axes and to the horizontal plane.
As shown in fig. 5, the head reference lines may further include: the median line 510, the outer eye corner line 520, the subnasal point line 530, the inner eye corner perpendiculars 540, the mouth corner line (oral fissure line) 550, and the submental point line 560. The median line 510 is the straight line through the glabellar point, the subnasal point, and the submental point; the outer eye corner line 520 is the straight line through the left and right outer eye corner points; the subnasal point line 530 passes through the subnasal point, parallel to the outer eye corner line 520 and perpendicular to the median line 510; the inner eye corner perpendiculars 540 start from the inner corners of the left and right eyes respectively, parallel to the median line 510 and perpendicular to the outer eye corner line 520; the mouth corner line (oral fissure line) 550 connects the mouth corners on both sides, perpendicular to the median line 510 and parallel to the outer eye corner line 520; and the submental point line 560 passes through the submental point, perpendicular to the median line 510 and parallel to the mouth corner line 550.
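The following sketch illustrates, under assumed 2-D pixel coordinates, how these reference lines can be derived from a handful of marked landmarks; the coordinate values and names are placeholders, not measurements from the patent.

```python
import numpy as np

# Hypothetical landmark coordinates (x, y) in pixels.
landmarks = {
    "glabella":       np.array([240.0, 180.0]),
    "subnasale":      np.array([242.0, 300.0]),
    "menton":         np.array([243.0, 390.0]),   # submental point
    "ectocanthion_l": np.array([180.0, 210.0]),   # left outer eye corner
    "ectocanthion_r": np.array([300.0, 212.0]),   # right outer eye corner
}

def line_through(p: np.ndarray, q: np.ndarray):
    """A line represented as (point, unit direction vector)."""
    d = q - p
    return p, d / np.linalg.norm(d)

# Median line 510: glabellar point - subnasal point - submental point.
median = line_through(landmarks["glabella"], landmarks["menton"])
# Outer eye corner line 520: left outer corner - right outer corner.
canthal = line_through(landmarks["ectocanthion_l"], landmarks["ectocanthion_r"])
# Subnasal point line 530: through the subnasal point, parallel to line 520.
subnasal = (landmarks["subnasale"], canthal[1])
```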
And S130, combining the head reference planes and reference lines, dividing the face region according to the "three stops, five parts" proportional relation to obtain a plurality of face sub-regions.
By performing region division on the face, the feature points which can better express facial features can be better determined.
Referring to FIG. 6, a schematic diagram of head-face region division, the three stops are an upper stop region 610 between the forehead midpoint and the glabellar point, a middle stop region 620 between the glabellar point and the subnasal point, and a lower stop region 630 between the subnasal point and the submental point. The five parts are the fourth part 640 between the left head side point and the left eyebrow arch point, the fifth part 650 between the right head side point and the right eyebrow arch point, the second part 660 between the left eyebrow arch point and the left inner eye corner, the third part 670 between the right eyebrow arch point and the right inner eye corner, and the first part 680 between the left and right inner eye corners.
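A sketch of this division follows, assuming the delimiting row (y) and column (x) pixel coordinates of the landmarks are already known; all names are illustrative.

```python
def three_stops(y_forehead_mid: int, y_glabella: int,
                y_subnasale: int, y_menton: int):
    """Row ranges of the upper, middle and lower stop regions (610-630)."""
    return [(y_forehead_mid, y_glabella),   # upper stop 610
            (y_glabella, y_subnasale),      # middle stop 620
            (y_subnasale, y_menton)]        # lower stop 630

def five_parts(x_side_l: int, x_brow_l: int, x_canthus_l: int,
               x_canthus_r: int, x_brow_r: int, x_side_r: int):
    """Column ranges of the five vertical strips, left to right:
    parts 4, 2, 1, 3, 5 (regions 640, 660, 680, 670, 650)."""
    xs = [x_side_l, x_brow_l, x_canthus_l, x_canthus_r, x_brow_r, x_side_r]
    return list(zip(xs, xs[1:]))
```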
And S140, determining initial feature points from the plurality of face sub-regions according to preset selection conditions.
Specifically, with reference to the human body measurement method and the simplified head model, feature points meeting the following requirements are determined as initial feature points: they must be located at key positions, such as the five sense organs, and at positions of larger curvature, such as the eye corners; the feature points should be distributed as evenly as possible; the density of the feature point distribution must be appropriate, as too low a density affects model accuracy while too high a density affects measurement efficiency; corresponding feature points in different images must be in one-to-one correspondence, i.e., corresponding feature points in different samples should mark the same image features; and the main measurement points specified in the human body measurement method, as well as the key measurement points of the main measurement items, must be covered.
The initial feature points determined from the plurality of face sub-regions according to the preset selection condition comprise the following points:
head apex point P1, forehead point P2, anterior fontanelle point P3, hairline point P4, glabellar point P5, supra-eyebrow point P6, nasion point P7, nasal dorsum point P8, nasal bridge point P9, nose tip point P10, subnasal point P11, alar point P12, gingival point P13, upper lip point P14, oral fissure point P15, lower lip point P16, mouth corner point P17, supramental point P18, chin point P19, submental point P20, pupil point P21, eye eminence point P22, inner eye corner point P23, outer eye corner point P24, two further facial points P25 and P26, tragus point P27, supra-aural attachment point P28, sub-aural attachment point P29, pre-aural point P30, post-aural point P31, supra-aural point P32, sub-aural point P33, ear junction point P34, frontotemporal point (temporal ridge point) P35, two further cranial points P36 and P37, mastoid point P38, head side point P39, occipital point P40, posterior vertex P41, and inion point P42.
S150, screening the initial feature points at least in combination with the facial parts missing from the preset head model to obtain a plurality of feature point sets, wherein different feature point sets are used for describing features of different parts of the head and the face.
Optionally, according to the human body measurement method and in combination with the simplification characteristics of the Hybrid III dummy head model, the feature points at each part missing from the Hybrid III dummy head model are removed. The simplification characteristics of the Hybrid III dummy head model are shown in Table 1 below.
TABLE 1 simplified characteristics of Hybrid III dummy head model
Since the Hybrid III dummy head model lacks the coronal and sagittal sutures of the skull, the anterior fontanelle point P3 is removed according to feature 6 in Table 1. Since the model lacks a forehead hairline, the hairline point P4 is removed according to feature 3. Since the upper edges of the left and right eyebrows are missing, the supra-eyebrow point P6 is removed according to feature 2. Since the transition between the nasal dorsum and the forehead cannot be determined on the model, the nasal dorsum point P8 is removed according to feature 5. Since the gingiva is missing, the gingival point P13 is removed according to feature 4. Since the eyes are missing, the pupil point P21 and the eye eminence point P22 are removed according to feature 2. Since the ears are missing, the tragus point P27, supra-aural attachment point P28, sub-aural attachment point P29, pre-aural point P30, post-aural point P31, supra-aural point P32, sub-aural point P33, and ear junction point P34 are removed according to feature 1. Since the temporal ridge is missing, the frontotemporal point (temporal ridge point) P35 is removed according to feature 6. Since the mastoid is missing, the mastoid point P38 is removed according to feature 6. Since the inion is missing, the posterior vertex P41 and inion point P42 are removed according to feature 6.
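A sketch of this first screening round follows; the grouping below paraphrases the eliminations just listed and is an assumption about Table 1's content, not a copy of it.

```python
# Points removed because the simplified head model lacks the corresponding part.
MISSING_PART_POINTS = {
    "sutures, temporal ridge, mastoid, inion": ["P3", "P35", "P38", "P41", "P42"],
    "forehead hairline":                       ["P4"],
    "eyebrow upper edges and eyes":            ["P6", "P21", "P22"],
    "nasal dorsum-forehead transition":        ["P8"],
    "gingiva":                                 ["P13"],
    "ears":                                    ["P27", "P28", "P29", "P30",
                                                "P31", "P32", "P33", "P34"],
}

def first_screening(initial_points):
    removed = {p for pts in MISSING_PART_POINTS.values() for p in pts}
    return [p for p in initial_points if p not in removed]

first_set = first_screening([f"P{i}" for i in range(1, 43)])
assert len(first_set) == 23   # the 23 remaining points correspond to C1-C23
```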
After this first round of screening of the initial feature points, combining the facial parts missing from the Hybrid III dummy head model with the features of the facial parts, a first feature point set is obtained; the feature points it contains are the feature points C1-C23 shown in Table 2 below.
TABLE 2 first feature Point set of dummy head model
According to the definitions of the feature points, the feature points of the dummy head model are roughly marked manually; see the schematic diagrams of the feature points of the dummy head model shown in FIG. 7, FIG. 8, FIG. 9, and FIG. 10.
Further, the feature points in the first feature point set are screened further according to the head contour of the dummy head model, and the feature points describing the head contour are counted into a second feature point set. Illustratively, counting the feature points describing the head contour into the second feature point set includes:
generating a scale space based on a Hessian matrix according to the pixel value of the characteristic point in the first characteristic point set, and determining a local extreme point in the scale space; determining candidate feature points according to the local extreme points based on a preset strategy; determining candidate feature points in a required continuous scale space by an interpolation method according to the candidate feature points; if the distance between the candidate feature point and the interpolation center point exceeds a threshold value, the candidate feature point is removed from the first feature point set; and counting the rest characteristic points in the first characteristic point set to a second characteristic point set.
Specifically, in combination with the simplified dummy head model, the Speeded-Up Robust Features (SURF) algorithm, which is fast and robust, is used to continue the feature point screening; as can be seen from Table 2, the regions to be screened lie only near the median line and in the fourth and fifth parts.
First, a scale space is generated using the Hessian matrix. The scale of an image refers to the coarseness of its content, and the concept of scale is used to simulate the distance between an observer and an object. The scale space of an image is the set of blurred images produced by convolving the image with a series of different Gaussian kernels, and it simulates the distance and degree of blur at which the human eye sees an object. The SURF algorithm generates the scale space from approximations of the determinant of the Hessian matrix. Let the image function be $f(x,y)$; that is, any feature point can be described by the two-dimensional function $f(x,y)$, where $x$ and $y$ are the spatial (planar) coordinates of the feature point, and the function value $f(x,y)$ at any pair of coordinates $(x,y)$ is generally called the pixel value of the feature point. The Hessian matrix $H$ is composed of the second-order partial derivatives of $f$. The Hessian matrix of a feature point is shown in formula (2):

$$H(f(x,y)) = \begin{pmatrix} \dfrac{\partial^{2} f}{\partial x^{2}} & \dfrac{\partial^{2} f}{\partial x\,\partial y} \\[2ex] \dfrac{\partial^{2} f}{\partial x\,\partial y} & \dfrac{\partial^{2} f}{\partial y^{2}} \end{pmatrix} \tag{2}$$

Each feature point yields its own matrix $H$, whose discriminant (determinant) is shown in formula (3):

$$\det(H) = \frac{\partial^{2} f}{\partial x^{2}}\,\frac{\partial^{2} f}{\partial y^{2}} - \left(\frac{\partial^{2} f}{\partial x\,\partial y}\right)^{2} \tag{3}$$

The result of formula (3) reflects the eigenvalues of the Hessian matrix; all feature points are classified according to the sign of this result, which determines whether the corresponding feature point is a local extreme point in the scale space.
Specifically, when the Hessian matrix of a feature point is positive definite, the feature point is a local minimum point. The criterion for a positive definite matrix is that every leading principal minor is greater than 0, i.e., $\partial^{2} f/\partial x^{2} > 0$ and $\det(H) > 0$.
When the Hessian matrix of a feature point is negative definite, the feature point is a local maximum point. The criterion for a negative definite matrix is that the even-order leading principal minors are positive and the odd-order ones negative, i.e., $\partial^{2} f/\partial x^{2} < 0$ and $\det(H) > 0$.
It will be understood that the purpose of generating the scale space is to extract the local extreme points (i.e., the maxima and minima) from all feature points of the first feature point set and to mark the extracted local extreme points as candidate points.
Then, the optimal extreme points are found in the generated scale space. Using a filter of size 3 x 3, the pixel value of each candidate point, obtained from the determinant approximation of the Hessian matrix, is compared with the pixel values of its 26 neighbouring candidate points: the 8 neighbours in the same scale layer, plus the 9 points in the adjacent layer above and the 9 points in the adjacent layer below (as shown in fig. 11). A candidate point whose value is the maximum or the minimum of this 3 x 3 x 3 region (i.e., the 27 points shown in fig. 11) is retained and determined as a candidate feature point.
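The following sketch shows this 3 x 3 x 3 local-extremum test; the array name, the placeholder response stack, and the bounds handling are illustrative assumptions, with `responses` standing for a (scales, rows, cols) stack of determinant-of-Hessian values.

```python
import numpy as np

def is_local_extremum(responses: np.ndarray, s: int, r: int, c: int) -> bool:
    """Compare a point with its 26 neighbours across three adjacent scale layers."""
    block = responses[s - 1:s + 2, r - 1:r + 2, c - 1:c + 2]   # 27 values
    centre = responses[s, r, c]
    return centre == block.max() or centre == block.min()

rng = np.random.default_rng(0)
responses = rng.standard_normal((5, 64, 64))   # placeholder det(H) stack

# Collect candidate feature points over the interior of the response stack.
candidates = [(s, r, c)
              for s in range(1, responses.shape[0] - 1)
              for r in range(1, responses.shape[1] - 1)
              for c in range(1, responses.shape[2] - 1)
              if is_local_extremum(responses, s, r, c)]
```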
Finally, the final feature points are determined from the candidate feature points. The feature points actually required in the continuous scale space are screened out by interpolation. From the Taylor series expansion, a quadratic approximation $D(\mathbf{x})$ is obtained, as shown in formula (4):

$$D(\mathbf{x}) = D + \frac{\partial D}{\partial \mathbf{x}}^{T}\mathbf{x} + \frac{1}{2}\,\mathbf{x}^{T}\,\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\,\mathbf{x} \tag{4}$$

where $D(\mathbf{x})$ is the approximate extremum of a candidate feature point in the continuous scale space and $\mathbf{x}$ is the coordinate of the candidate feature point in the scale space; each coefficient of $D(\mathbf{x})$ can be obtained by difference operations on the corresponding extrema of the candidate feature point over two adjacent scale layers. Differentiating $D(\mathbf{x})$ and setting the derivative to zero yields the extremum, as shown in formula (5):

$$\hat{\mathbf{x}} = -\left(\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\right)^{-1}\frac{\partial D}{\partial \mathbf{x}} \tag{5}$$

Here $\hat{\mathbf{x}}$ represents the offset of the candidate feature point from the interpolation centre in the scale space. If $\hat{\mathbf{x}}$ is too large (e.g., greater than a threshold), the extreme point has deviated from the trajectory of the scale space in which the candidate feature point lies, and the corresponding candidate feature point should be deleted. Through multiple iterations of this procedure, the precise positions and scales of the feature points are screened out, the feature points in the first feature point set are checked and adjusted, and the feature points that can describe the head contour, taken from near the median line and from the fourth and fifth parts, are stored into the second feature point set. Illustratively, the feature points included in the second feature point set are shown in Table 3 below; see also the schematic diagrams of the dummy head contour feature points in FIG. 12 and FIG. 13.
TABLE 3 second feature Point set
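A sketch of the refinement step of formulas (4)-(5) follows; the rejection threshold of 0.5 is the value conventionally used with SURF/SIFT and is an assumption here, since the patent only says the offset must not exceed a threshold.

```python
import numpy as np

def refine_candidate(grad: np.ndarray, hess: np.ndarray, threshold: float = 0.5):
    """grad: 3-vector dD/dx; hess: 3x3 matrix d2D/dx2 at the candidate point.

    Returns the sub-pixel offset x_hat of formula (5), or None if the
    candidate has drifted too far from the interpolation centre.
    """
    x_hat = -np.linalg.solve(hess, grad)    # formula (5)
    if np.max(np.abs(x_hat)) > threshold:   # deviated from the centre: reject
        return None
    return x_hat
```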
Further, feature points in the first feature point set are continuously screened according to facial features, feature points describing eye features are counted to a third feature point set, feature points describing nose features are counted to a fourth feature point set, feature points describing mouth features are counted to a fifth feature point set, feature points describing eyebrow features are counted to a sixth feature point set, and feature points describing ear features are counted to a seventh feature point set.
Specifically, screening continues with the SURF algorithm in combination with the simplified head model: feature points that can describe the main features of the eyes are stored from the second, third, fourth, and fifth parts of the middle stop into the third feature point set; feature points that can describe the main features of the nose are stored from the first, second, and third parts of the middle stop into the fourth feature point set; feature points that can describe the main features of the mouth are stored from the first, second, and third parts of the lower stop into the fifth feature point set; feature points that can describe the main features of the eyebrows are stored into the sixth feature point set; and feature points that can describe the main features of the ears are stored into the seventh feature point set. Here, the feature points describing the main features are those that can determine the positions and proportional sizes of the five sense organs.
The feature point references in the third feature point set are shown in table 4 below, the feature point references in the fourth feature point set are shown in table 5 below, and the feature point references in the fifth feature point set are shown in table 6 below.
Table 4: third feature point set
Table 5: fourth feature point set
Table 6: fifth feature point set
In combination with Tables 4, 5, and 6 above, reference is made to the schematic diagram of the feature points of the five sense organs of the dummy head shown in FIG. 14.
In summary, screening the initial feature points at least in combination with the facial parts missing from the preset head model to obtain a plurality of feature point sets includes:
removing the characteristic points belonging to the facial part missing from the preset head model in the initial characteristic points, and counting the remaining initial characteristic points to a first characteristic point set;
according to the preset head model, continuing to screen the feature points in the first feature point set through the Speeded-Up Robust Features (SURF) algorithm, so as to count the feature points describing the head contour into a second feature point set, the initial feature points describing eye features into a third feature point set, the initial feature points describing nose features into a fourth feature point set, the initial feature points describing mouth features into a fifth feature point set, the initial feature points describing eyebrow features into a sixth feature point set, and the initial feature points describing ear features into a seventh feature point set. Optionally, reference may also be made to the schematic diagram of the feature point screening process shown in FIG. 15, which proceeds as follows: select a head feature point; determine whether it meets the measurement requirements and, if not, reject it; if so, judge whether it is located at a missing part and, if so, reject it; if not, store it into the first feature point set (feature point set 1 in FIG. 15); then judge whether it can describe the head contour and, if so, store it into the second feature point set (feature point set 2 in FIG. 15); if not, judge whether it can describe a facial feature and, if so, store it into the third feature point set (feature point set 3 in FIG. 15); otherwise reject it.
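The decision flow of FIG. 15 can be summarized in the following sketch; the four predicates are left as assumed callables because the patent defines them only descriptively.

```python
def screen_point(p, meets_requirements, on_missing_part,
                 describes_contour, describes_facial_feature):
    """Return the labels of the sets the point is stored into, or None if rejected."""
    if not meets_requirements(p) or on_missing_part(p):
        return None                   # fails measurement needs or lies on a missing part
    labels = ["set 1"]                # first feature point set
    if describes_contour(p):
        labels.append("set 2")        # head contour set
    elif describes_facial_feature(p):
        labels.append("set 3")        # facial feature sets (sets 3-7 by part)
    return labels
```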
In conclusion, the screened feature points are classified into different feature point sets. The first feature point set (C1-C23) describes details of the geometric features of the five sense organs, their localization positions, and the facial contour features. The second feature point set (L1-L11) describes the main head contour features. The third feature point set (E1-E4) describes the main eye features. The fourth feature point set (N1-N4) describes the main nose features. The fifth feature point set (M1-M3) describes the main mouth features. The sixth feature point set (∅) is empty, indicating that the eyebrow features are entirely simplified away; the seventh feature point set (∅) is likewise empty, indicating that the ear features are entirely simplified away.
Further, the method further comprises:
and according to a target requirement, performing a union operation on some of the plurality of feature point sets to obtain a target feature point set meeting the target requirement. Correspondingly, referring to the schematic flow chart of feature point extraction shown in FIG. 16, this specifically includes: inputting a requirement, selecting feature point sets, taking the union of the selected sets, and extracting the feature points.
That is, according to the measurement requirement, sets are united to obtain a feature point set that can describe different head and face features. For example, when measuring the head contour features and the main features of the nose, the union of the second feature point set and the fourth feature point set yields the screened head feature points, which can be used to measure feature dimensions such as head length, nose length, head width, nose width, total head height, nose height, and crown circumference, thereby obtaining the head contour features and the main nose features and verifying that the head features of the biomimetic dummy conform to the corresponding human features. Correspondingly, reference is made to the head contour feature and nose key feature screening diagram shown in FIG. 17.
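A sketch of this union step follows, using the set labels from Tables 3 and 5 above; the dictionary layout is an illustrative assumption.

```python
feature_sets = {
    "set2": {"L1", "L2", "L3", "L4", "L5", "L6",
             "L7", "L8", "L9", "L10", "L11"},      # head contour (Table 3)
    "set4": {"N1", "N2", "N3", "N4"},              # nose main features (Table 5)
}

def target_points(required_ids):
    """Union of the selected feature point sets (not their intersection)."""
    out = set()
    for sid in required_ids:
        out |= feature_sets[sid]
    return out

# Head contour + nose main features, as in the example above:
print(sorted(target_points(["set2", "set4"])))
```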
The embodiment has the following technical effects:
representative feature points can be quickly and simply screened from a plurality of feature points of the head of the collision dummy, excessive description is avoided, and the screened feature points can reflect main head and face information and describe face features. The method has the advantages of clear and easily understood principle, relatively simple operation method, no need of expensive equipment, short time period and low engineering cost, can quickly obtain the head characteristic points required by measurement, and provides guidance for designing the bionic dummy head model with the face and the size conforming to the Chinese characteristics.
In summary, referring to the schematic technical route of feature point screening shown in FIG. 18, the method specifically includes the following steps: simplifying the head model, image preprocessing, determining the head reference planes, drawing the head reference lines, dividing the head-face region, inputting the missing parts, screening the head feature points, extracting feature points as required, and obtaining the head feature points. Specifically, head feature points are screened according to the human body measurement method and the head and face characteristics of the dummy, classified into different feature point sets, and the sets are then united as required to obtain a feature point set that can describe the geometric features of the head, thereby describing the main features of the head model with the simplest possible feature point set. A complete head and face feature point screening method is finally obtained, providing a reference for the simplified design of the biomimetic head model.
Carrying out image preprocessing: the image is preprocessed by means of graying, edge detection and the like, high-quality and high-efficiency identification is guaranteed, time spent on screening of subsequent feature points is reduced, and accuracy of image feature extraction is improved.
Determining a head reference plane: the sagittal, coronal, median sagittal, horizontal and frankfurt planes are determined on the simplified head model.
Drawing the head reference lines: the reference lines are determined with the aid of a level instrument and drawn as follows. Axes: sagittal axis, coronal axis, vertical axis. Median line: glabellar point - subnasal point - submental point. Outer eye corner line: left outer eye corner point - right outer eye corner point. Subnasal point line: passes through the subnasal point, parallel to the outer eye corner line and perpendicular to the median line. Inner eye corner perpendiculars: parallel to the median line and perpendicular to the outer eye corner line. Mouth corner line (oral fissure line): connects the mouth corners on both sides, perpendicular to the median line and parallel to the outer eye corner line. Submental point line: passes through the submental point, perpendicular to the median line and parallel to the mouth corner line.
Dividing the face region: the positions of the five sense organs of different human faces have commonality and generally conform approximately to the "three stops, five parts" proportional relationship. "Three stops, five parts" is the general standard proportion of a person's face length to face width: it simply divides the head into three parts from top to bottom and five parts from left to right. The three stops describe the length relations of the face, dividing the distance from the chin to the hairline into three equal parts: from the chin to the base of the nose, from the base of the nose to the eyebrows, and from the eyebrows to the forehead hairline. The five parts describe the width relations of the face, dividing the distance between the left and right side hairlines into five equal parts: from each side hairline to the outer canthus on the same side, each eye, and the space between the two inner canthi. The "three stops, five parts" rule is of great reference value for the face region division of the simplified head model, and this embodiment divides the face region using a similar rule. Since the positions of the five sense organs of some head models deviate slightly, though never extremely, from "three stops, five parts", this embodiment also takes that into account and adjusts the position ranges relative to the "three stops, five parts" proportions when locating the five sense organs of the head model. The face is divided into regions, to better describe the facial features, as follows.
Three stops: forehead midpoint - glabellar point - subnasal point - submental point.
Five parts: head side point - eyebrow arch point - inner eye corner point - inner eye corner point - eyebrow arch point - head side point.
Screening the head feature points: at present, domestic head and face measurement point locations are determined by simplified analysis of the 42 feature points described in the human body measurement method and the feature points described in GB/T 38131-2019, "Acquisition method of human body measurement reference points for clothes", from which the feature points are extracted and the related measurement items determined. GB/T 38131-2019 involves too few feature points to describe the geometric structure and contour dimensions of the collision dummy head, while the human body measurement method involves too many, making operation complicated and calculation tedious; moreover, part of the head and face features of the collision dummy are missing, so not all feature points need to be covered. The feature points are therefore screened.
(I) Screening according to measurement requirements, where feature point screening follows these principles: key feature points, such as the five sense organs, are selected, and positions of larger curvature, such as the eye corners, are calibrated; key feature points are distributed as evenly as possible; the density of the feature point distribution must be appropriate, as too low a density affects model accuracy while too high a density affects measurement efficiency; corresponding feature points in different images are in one-to-one correspondence, i.e., corresponding feature points in different samples should mark the same image features; and the main measurement points specified in the human body measurement method, as well as the key measurement points of the main measurement items, are covered.
And (II) screening according to the missing parts, screening head measurement characteristic points according to a human body measurement method by combining a simplified head model, eliminating the characteristic points at the missing parts, and storing the screened characteristic points into a first characteristic point set.
And (III) screening according to the head contour, combining the simplified head model, continuing to screen, and storing the feature points capable of describing the head contour into a second feature point set.
And (IV) screening according to facial features, combining a simplified head model, continuing to screen, storing feature points capable of describing main features of eyes into a third feature point set, storing feature points capable of describing main features of a nose into a fourth feature point set, storing feature points capable of describing main features of an oral part into a fifth feature point set, storing feature points capable of describing main features of an eyebrow part into a sixth feature point set, and storing feature points capable of describing main features of an ear part into a seventh feature point set.
And finally, according to the measurement requirements, combining the sets to obtain a feature point set capable of describing different head and face features.
The feature point screening method provided by the embodiments of the present disclosure divides regions by "three stops, five parts" and takes the simplification characteristics, the head contour features, and the geometric features of the five sense organs as judgment bases. Addressing the new demand for screening feature points of a simplified head model, it extracts head and face information with a new method whose complexity and engineering cost are low compared with the prior art. The various feature points obtained by this screening can endow the biomimetic dummy head model with head and face features, providing guidance for designing a biomimetic dummy head model whose face and dimensions conform to Chinese characteristics.
Fig. 19 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 19, the electronic device 400 includes one or more processors 401 and memory 402.
The processor 401 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities and may control other components in the electronic device 400 to perform desired functions.
In one example, the electronic device 400 may further include: an input device 403 and an output device 404, which are interconnected by a bus system and/or other form of connection mechanism (not shown). The input device 403 may include, for example, a keyboard, a mouse, and the like. The output device 404 can output various information to the outside, including warning prompt information, braking force, etc. The output devices 404 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for the sake of simplicity, only some of the components of the electronic device 400 related to the present invention are shown in fig. 19, and components such as a bus, an input/output interface, and the like are omitted. In addition, electronic device 400 may include any other suitable components depending on the particular application.
In addition to the above-described methods and apparatus, embodiments of the invention may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the feature point screening method for a head model provided by any of the embodiments of the invention.
The computer program product may carry program code for performing operations of embodiments of the present invention in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present invention may also be a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, cause the processor to perform the steps of the feature point screening method for a head model provided by any of the embodiments of the present invention.
The computer readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Claims (9)
1. A feature point screening method for a head model, comprising:
preprocessing an original image of a preset head model to obtain a target image, wherein the preset head model is used for an automobile collision dummy;
determining a head reference plane and a reference line based on the target image;
combining the head reference planes and reference lines, and dividing the face region according to the "three stops, five parts" proportional relation to obtain a plurality of face sub-regions;
determining initial feature points from the plurality of face sub-regions according to a preset selection condition;
and screening the initial feature points at least in combination with the facial parts missing from the preset head model to obtain a plurality of feature point sets, wherein different feature point sets are used for describing features of different parts of the head and the face.
2. The method according to claim 1, wherein preprocessing the original image of the preset head model to obtain the target image comprises:
performing graying processing on the original image to obtain a grayscale image corresponding to the original image;
and carrying out edge detection on the gray level image to obtain the target image.
3. The method of claim 1, wherein determining a head reference plane and a reference line based on the target image comprises:
determining a first feature point of a preset head model setting position based on the target image according to the geometrical features and the prior knowledge of the preset head model;
and determining the head reference surface and the reference line according to the first characteristic point.
4. The method of claim 3, wherein the head reference planes include a sagittal plane, a coronal plane, a median sagittal plane, a horizontal plane, and a Frankfurt plane, and the reference lines include at least a sagittal axis, a coronal axis, and a vertical axis.
5. The method according to claim 1, wherein the screening the initial feature points at least in combination with the facial part missing from the preset head model to obtain a plurality of feature point sets comprises:
removing the characteristic points belonging to the facial part missing from the preset head model in the initial characteristic points, and counting the remaining initial characteristic points to a first characteristic point set;
according to the preset head model, continuing to screen the feature points in the first feature point set through the Speeded-Up Robust Features (SURF) algorithm, so as to count the feature points describing the head contour into a second feature point set, count the initial feature points describing eye features into a third feature point set, count the initial feature points describing nose features into a fourth feature point set, count the initial feature points describing mouth features into a fifth feature point set, count the initial feature points describing eyebrow features into a sixth feature point set, and count the initial feature points describing ear features into a seventh feature point set.
6. The method of claim 5, wherein assigning the feature points describing the head contour to the second feature point set comprises:
generating a scale space based on a Hessian matrix according to the pixel values of the feature points in the first feature point set, and determining local extreme points in the scale space;
determining candidate feature points from the local extreme points based on a preset strategy;
locating, by interpolation, the candidate feature points in the required continuous scale space;
removing a candidate feature point from the first feature point set if the distance between the candidate feature point and the interpolation center exceeds a threshold;
and assigning the feature points remaining in the first feature point set to the second feature point set.
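The steps above follow the standard determinant-of-Hessian recipe familiar from SURF/SIFT. A compact numpy/scipy sketch, where the Gaussian scales, the scale normalization, and the 0.5-sample rejection gate are assumptions drawn from that literature rather than values given in the claim:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_det_stack(img, sigmas=(1.2, 1.6, 2.0, 2.4)):
    """Determinant-of-Hessian response at several Gaussian scales,
    stacked into a (scale, y, x) array -- the scale space of claim 6."""
    stack = []
    for s in sigmas:
        L = gaussian_filter(img.astype(float), s)
        Ly, Lx = np.gradient(L)          # first derivatives (rows, cols)
        Lyy, Lyx = np.gradient(Ly)       # second derivatives
        Lxy, Lxx = np.gradient(Lx)
        stack.append(s ** 4 * (Lxx * Lyy - Lyx * Lxy))  # scale-normalized
    return np.stack(stack)

def interpolation_offset(D, s, i, j):
    """Offset of the interpolated extremum from sample (s, i, j), via a
    second-order Taylor expansion of the response stack D. A candidate
    is kept only if np.abs(offset).max() stays below a gate such as 0.5,
    mirroring the distance check in claim 6."""
    g = 0.5 * np.array([D[s + 1, i, j] - D[s - 1, i, j],
                        D[s, i + 1, j] - D[s, i - 1, j],
                        D[s, i, j + 1] - D[s, i, j - 1]])
    H = np.empty((3, 3))
    H[0, 0] = D[s + 1, i, j] - 2 * D[s, i, j] + D[s - 1, i, j]
    H[1, 1] = D[s, i + 1, j] - 2 * D[s, i, j] + D[s, i - 1, j]
    H[2, 2] = D[s, i, j + 1] - 2 * D[s, i, j] + D[s, i, j - 1]
    H[0, 1] = H[1, 0] = 0.25 * (D[s + 1, i + 1, j] - D[s + 1, i - 1, j]
                                - D[s - 1, i + 1, j] + D[s - 1, i - 1, j])
    H[0, 2] = H[2, 0] = 0.25 * (D[s + 1, i, j + 1] - D[s + 1, i, j - 1]
                                - D[s - 1, i, j + 1] + D[s - 1, i, j - 1])
    H[1, 2] = H[2, 1] = 0.25 * (D[s, i + 1, j + 1] - D[s, i + 1, j - 1]
                                - D[s, i - 1, j + 1] + D[s, i - 1, j - 1])
    return np.linalg.solve(H, -g)        # (d_scale, d_row, d_col)
```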
7. The method of any one of claims 1-6, further comprising:
and performing, according to a target requirement, a union operation on some of the feature point sets among the plurality of feature point sets to obtain a target feature point set meeting the target requirement.
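The union operation of claim 7 is literally a set union; a toy illustration with hypothetical 2-D point coordinates standing in for claim 5's per-part sets:

```python
# Hypothetical per-part feature point sets (toy coordinates)
eye_set     = {(52, 60), (75, 61)}
eyebrow_set = {(50, 48), (77, 49)}
nose_set    = {(63, 80)}

# Target requirement: e.g., all points describing the eye/brow region
target_set = eye_set | eyebrow_set                     # union of two sets
# Union across several sets at once:
target_set = set().union(eye_set, eyebrow_set, nose_set)
```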
8. An electronic device, characterized in that the electronic device comprises:
a processor and a memory;
the processor is adapted to perform the steps of the feature point screening method for a head model according to any one of claims 1 to 7 by calling a program or instructions stored in the memory.
9. A computer-readable storage medium characterized by storing a program or instructions for causing a computer to execute the steps of the feature point screening method for a head model according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210649441.9A CN114743252B (en) | 2022-06-10 | 2022-06-10 | Feature point screening method, device and storage medium for head model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114743252A true CN114743252A (en) | 2022-07-12 |
CN114743252B CN114743252B (en) | 2022-09-16 |
Family
ID=82287171
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210649441.9A Active CN114743252B (en) | 2022-06-10 | 2022-06-10 | Feature point screening method, device and storage medium for head model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114743252B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117740408A (en) * | 2024-02-19 | 2024-03-22 | 中国汽车技术研究中心有限公司 | Automobile collision dummy face pressure detection device and design method thereof |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101877055A (en) * | 2009-12-07 | 2010-11-03 | 北京中星微电子有限公司 | Method and device for positioning key feature point |
US20140110497A1 (en) * | 2012-10-23 | 2014-04-24 | American Covers, Inc. | Air Freshener with Decorative Insert |
US20150039552A1 (en) * | 2013-08-05 | 2015-02-05 | Applied Materials, Inc. | Method and apparatus for optimizing profit in predictive systems |
CN103984920A (en) * | 2014-04-25 | 2014-08-13 | 同济大学 | Three-dimensional face identification method based on sparse representation and multiple feature points |
CN110826372A (en) * | 2018-08-10 | 2020-02-21 | 浙江宇视科技有限公司 | Method and device for detecting human face characteristic points |
CN113111690A (en) * | 2020-01-13 | 2021-07-13 | 北京灵汐科技有限公司 | Facial expression analysis method and system and satisfaction analysis method and system |
CN111402391A (en) * | 2020-03-13 | 2020-07-10 | 深圳看到科技有限公司 | User face image display method, display device and corresponding storage medium |
CN112308043A (en) * | 2020-11-26 | 2021-02-02 | 腾讯科技(深圳)有限公司 | Image processing method, image processing apparatus, and computer-readable storage medium |
CN113807180A (en) * | 2021-08-16 | 2021-12-17 | 常州大学 | Face recognition method based on LBPH and feature points |
CN114550278A (en) * | 2022-04-28 | 2022-05-27 | 中汽研汽车检验中心(天津)有限公司 | Method, equipment and storage medium for determining head and face feature point positions of collision dummy |
Non-Patent Citations (1)
Title |
---|
LU ZONGJIE: "Research on Head Pose Detection and Motion Tracking Control Based on Facial Feature Points", China Excellent Master's and Doctoral Dissertations Full-text Database (Master's), Medicine and Health Sciences Series *
Also Published As
Publication number | Publication date |
---|---|
CN114743252B (en) | 2022-09-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11423556B2 (en) | Methods and systems to modify two dimensional facial images in a video to generate, in real-time, facial images that appear three dimensional | |
CN108229278B (en) | Face image processing method and device and electronic equipment | |
CN103914699B (en) | A kind of method of the image enhaucament of the automatic lip gloss based on color space | |
US20200034996A1 (en) | Image processing method, apparatus, terminal, and storage medium | |
KR101259662B1 (en) | Face classifying method, face classifying device, classification map, face classifying program, and recording medium where this program is recorded | |
CN105046246A (en) | Identification photo camera capable of performing human image posture photography prompting and human image posture detection method | |
CN106909875A (en) | Face shape of face sorting technique and system | |
WO2022257456A1 (en) | Hair information recognition method, apparatus and device, and storage medium | |
CN105593896B (en) | Image processing apparatus, image display device, image processing method | |
KR20190043925A (en) | Method, system and non-transitory computer-readable recording medium for providing hair styling simulation service | |
CN114743252B (en) | Feature point screening method, device and storage medium for head model | |
CN112734633A (en) | Virtual hair style replacing method, electronic equipment and storage medium | |
CN109685892A (en) | A kind of quick 3D face building system and construction method | |
CN113344837B (en) | Face image processing method and device, computer readable storage medium and terminal | |
CN112507766B (en) | Face image extraction method, storage medium and terminal equipment | |
KR102391087B1 (en) | Image processing methods, devices and electronic devices | |
JP2009251634A (en) | Image processor, image processing method, and program | |
KR20020085669A (en) | The Apparatus and Method for Abstracting Peculiarity of Two-Dimensional Image & The Apparatus and Method for Creating Three-Dimensional Image Using Them | |
CN108154088B (en) | Method and system for detecting side face of shopping guide machine | |
CN109241911A (en) | Human face similarity degree calculation method and device | |
JP2767814B2 (en) | Face image detection method and apparatus | |
US20240070885A1 (en) | Skeleton estimating method, device, non-transitory computer-readable recording medium storing program, system, trained model generating method, and trained model | |
WO2023276271A1 (en) | Information processing device, information processing method, and recording medium | |
CN116486054B (en) | AR virtual cosmetic mirror and working method thereof | |
CN116309010A (en) | Virtual image aesthetic feeling evaluation method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |