WO2021088867A1 - Human face data processing method, device, and computer-readable storage medium - Google Patents
Human face data processing method, device, and computer-readable storage medium Download PDF Info
- Publication number
- WO2021088867A1 WO2021088867A1 PCT/CN2020/126487 CN2020126487W WO2021088867A1 WO 2021088867 A1 WO2021088867 A1 WO 2021088867A1 CN 2020126487 W CN2020126487 W CN 2020126487W WO 2021088867 A1 WO2021088867 A1 WO 2021088867A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point
- mapping
- harmonic
- dimensional
- weight
- Prior art date
Links
- 238000003672 processing method Methods 0.000 title abstract description 3
- 238000013507 mapping Methods 0.000 claims abstract description 136
- 238000000034 method Methods 0.000 claims description 58
- 230000009977 dual effect Effects 0.000 claims description 45
- 238000010586 diagram Methods 0.000 claims description 40
- 238000012545 processing Methods 0.000 claims description 13
- 230000006870 function Effects 0.000 claims description 10
- 230000008439 repair process Effects 0.000 claims description 10
- 230000005484 gravity Effects 0.000 claims description 8
- 238000004590 computer program Methods 0.000 claims description 5
- 238000013500 data storage Methods 0.000 abstract 1
- 230000001815 facial effect Effects 0.000 abstract 1
- 230000003287 optical effect Effects 0.000 description 4
- 230000004044 response Effects 0.000 description 4
- 239000007787 solid Substances 0.000 description 4
- 238000004458 analytical method Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 2
- 230000008901 benefit Effects 0.000 description 2
- 238000004422 calculation algorithm Methods 0.000 description 2
- 230000007423 decrease Effects 0.000 description 2
- 238000001514 detection method Methods 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 230000000644 propagated effect Effects 0.000 description 2
- 230000009467 reduction Effects 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000013480 data collection Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000011478 gradient descent method Methods 0.000 description 1
- 238000003709 image segmentation Methods 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 239000013307 optical fiber Substances 0.000 description 1
- 238000004321 preservation Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/06—Topological mapping of higher dimensional structures onto lower dimensional surfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R33/00—Arrangements or instruments for measuring magnetic variables
- G01R33/20—Arrangements or instruments for measuring magnetic variables involving magnetic resonance
- G01R33/44—Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
- G01R33/48—NMR imaging systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/06—Topological mapping of higher dimensional structures onto lower dimensional surfaces
- G06T3/067—Reshaping or unfolding 3D tree structures onto 2D planes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/06—Topological mapping of higher dimensional structures onto lower dimensional surfaces
- G06T3/073—Transforming surfaces of revolution to planar images, e.g. cylindrical surfaces to planar images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/149—Segmentation; Edge detection involving deformable models, e.g. active contour models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/032—Transmission computed tomography [CT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/037—Emission tomography
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/50—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B6/5217—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R33/00—Arrangements or instruments for measuring magnetic variables
- G01R33/20—Arrangements or instruments for measuring magnetic variables involving magnetic resonance
- G01R33/44—Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
- G01R33/48—NMR imaging systems
- G01R33/54—Signal processing systems, e.g. using pulse sequences ; Generation or control of pulse sequences; Operator console
- G01R33/56—Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
- G01R33/5608—Data processing and visualization specially adapted for MR, e.g. for feature analysis and pattern recognition on the basis of measured MR data, segmentation of measured MR data, edge contour detection on the basis of measured MR data, for enhancing measured MR data in terms of signal-to-noise ratio by means of noise filtering or apodization, for enhancing measured MR data in terms of resolution by means for deblurring, windowing, zero filling, or generation of gray-scaled images, colour-coded images or images displaying vectors instead of pixels
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10108—Single photon emission computed tomography [SPECT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30028—Colon; Small intestine
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/41—Medical
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/021—Flattening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2008—Assembling, disassembling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2021—Shape modification
Definitions
- the present invention relates generally to the field of imaging, and more specifically to the field of three-dimensional data.
- For a three-dimensional object, the corresponding three-dimensional data usually includes three-dimensional coordinates (X, Y, Z) and may additionally include color (RGB) and/or brightness (Intensity) information. Storing three-dimensional data therefore usually requires a relatively complex storage format and a relatively large amount of storage space, which makes the data inconvenient to store.
- RGB color
- Intensity brightness
- Compared with three-dimensional data, two-dimensional data is simpler to analyze and requires less computation. Therefore, if three-dimensional data can be reduced to two dimensions, further research and analysis of the data becomes easier.
- When two-dimensional data is restored to three dimensions, as much of the original three-dimensional information as possible needs to be preserved. If too much information is lost during dimensionality reduction, the three-dimensional image reconstructed from the two-dimensional data will suffer from distortion and other problems, which hinders further use of the three-dimensional image.
- An object of the present disclosure is to provide a method and device capable of optimizing face data for subsequent further use.
- The present invention provides a method for processing face data, which includes: acquiring point cloud information of a face through a scanning device to obtain a three-dimensional model of the face; and mapping the three-dimensional model onto a circular plane in an area-preserving manner to form a two-dimensional face map.
- it further includes: performing topology repair on the obtained three-dimensional model.
- performing topological repair on the obtained three-dimensional model includes: determining the position of the genus in the three-dimensional model; eliminating the genus to reduce the number of genus in the three-dimensional model.
- Mapping the three-dimensional model onto a circular plane in an area-preserving manner includes: determining the boundary of the two-dimensional plane; harmonically mapping the three-dimensional data to the interior of the boundary to form harmonic mapping points; calculating a second weight for each harmonic mapping point and then calculating the weighted Voronoi diagram of the harmonic mapping points; and, according to the weighted Voronoi diagram, mapping the three-dimensional model to the two-dimensional plane in an area-preserving manner.
- Harmonically mapping the three-dimensional data to the interior of the boundary to form harmonic mapping points includes: initializing the three-dimensional data to form mapping points in the two-dimensional plane; calculating the harmonic energy between the mapping points in the two-dimensional plane; when the harmonic energy is greater than a preset energy gradient threshold, adjusting the coordinates of the mapping points and recomputing the harmonic energy from the adjusted coordinates; when the harmonic energy is less than the preset energy gradient threshold, stopping the adjustment; and taking the coordinates of the mapping points at the time the adjustment stops as the harmonic mapping points.
- Calculating the harmonic energy between the mapping points in the two-dimensional plane includes: calculating the squared difference between the positions of adjacent mapping points; calculating the first product of this squared value and the first weight of the edge formed by the adjacent mapping points; and summing the first products over all mapping points.
- The first weight of an edge formed by adjacent mapping points is calculated by determining the angles of the triangles containing the edge: if the edge is shared by two triangles, its first weight equals half of the sum of the cotangents of the angles opposite the edge in those two triangles; if the edge lies on the boundary, its first weight equals half of the cotangent of the angle opposite the edge in the triangle containing it.
- Calculating the second weight of the harmonic mapping points and then calculating the weighted Voronoi diagram of the harmonic mapping points includes: initializing the second weight of each harmonic mapping point, where at least three harmonic mapping points constitute an initial face; determining the weighted dual point of each initial face, the weighted distances from the weighted dual point to the vertices of that initial face being equal, where the weighted dual points are connected to form dual faces and the multiple dual faces determine the weighted Voronoi diagram; and updating the second weight of each harmonic mapping point and readjusting the weighted Voronoi diagram according to the updated second weights.
- According to the weighted Voronoi diagram, mapping the three-dimensional model to the two-dimensional plane in an area-preserving manner includes: determining the center of gravity of each dual face in the weighted Voronoi diagram; and mapping each harmonic mapping point to the center of gravity of its dual face, so that the three-dimensional model is mapped to the two-dimensional plane in an area-preserving manner.
- A processing device for face data includes: a processor; and a memory connected to the processor, the memory storing computer program code which, when executed, causes the processor to execute the method described above.
- a computer-readable storage medium having computer-readable instructions stored thereon, and when the computer-readable instructions are executed by one or more processors, the method described above is implemented.
- The beneficial effects of the technical solution of the present invention include, but are not limited to: converting three-dimensional data into two-dimensional data facilitates storage of the data; in addition, because the mapping is area-preserving, the two-dimensional data can be restored to three dimensions with better quality, which facilitates reuse of the three-dimensional image.
- Fig. 1 shows a flowchart of a method for processing face data according to an aspect of the present invention
- Figure 2 shows a flowchart of a method for processing face data according to another aspect of the present invention
- Fig. 3 shows a flow chart of a method for topological restoration of a formed three-dimensional model according to an aspect of the present invention
- Fig. 4 shows a flowchart of a method for mapping the three-dimensional model onto a circular plane in an area-preserving manner according to an embodiment of the present disclosure
- Figure 5 shows a flow chart of harmonically mapping three-dimensional data to the inside of the boundary to form a harmonic mapping point
- Figure 6 shows a schematic diagram of calculating the weight of each edge
- FIG. 7 shows a flowchart of calculating the second weight of the harmonic mapping point, and then calculating the weighted Voronoi diagram of the harmonic mapping point according to an embodiment of the present invention
- Figure 8 shows an example of a Voronoi diagram
- Fig. 9 shows a flowchart of mapping the three-dimensional model to a two-dimensional plane in an area-preserving manner according to an embodiment of the present invention.
- Fig. 10a, Fig. 10b and Fig. 10c respectively show the original image of the three-dimensional image, the two-dimensional image after harmonic mapping and the two-dimensional image after area-preserving mapping.
- Depending on the context, the term "if" can be interpreted as "when", "once", "in response to determining", or "in response to detecting".
- Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" can be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
- Fig. 1 shows a flowchart of a method for processing face data according to an aspect of the present invention.
- The method of the present invention includes: operation S110, acquiring a photo of a human face through a scanning device, obtaining point cloud information from structured light stripes, and thereby obtaining a three-dimensional model of the human face; and operation S130, mapping the three-dimensional model onto a circular plane in an area-preserving manner to form a two-dimensional face map.
- the point data collection of the appearance surface of an object obtained by measuring instruments is also called a point cloud.
- the number of points obtained by using a three-dimensional coordinate measuring machine is relatively small, and the distance between points is relatively large, which is called sparse point cloud;
- the point cloud obtained by using a three-dimensional laser scanner or a photographic scanner has a relatively large number of points and is relatively dense, which is called a dense point cloud.
- the point cloud can generally include three-dimensional coordinates (XYZ) and reflection intensity (Intensity).
- the point cloud obtained by the photogrammetric principle may include color information (RGB) in addition to three-dimensional coordinates (XYZ).
- the point cloud is obtained by combining the principles of laser measurement and photogrammetry, which may include three-dimensional coordinates (XYZ), laser reflection intensity (Intensity), and color information (RGB).
- the point cloud information can well represent more detailed face information.
- The two-dimensional plane onto which the three-dimensional face model is mapped can have various shapes, but preferably the plane is a circle, because the shape of a human face is closer to a circle; this helps reduce distortion during the mapping process and shortens the convergence time of the algorithm.
- The present invention maps in an area-preserving manner, thereby effectively retaining the area information of the face. This reduces the loss of information when restoring the three-dimensional face data and makes the restored three-dimensional face image more usable.
- the present invention may further include, in operation S120, performing topology repair on the three-dimensional model.
- Topology repair is geometric shape repair, so that the imported model is repaired into a closed surface, so that the model becomes a whole.
- the geometry that has not been topologically repaired may have missing faces or lines, or the connection of faces may be wrong.
- Performing topology repair on the obtained three-dimensional model includes: in operation S1201, determining the positions of genus (handles) in the three-dimensional model; and in operation S1203, eliminating the determined genus to reduce the number of genus in the three-dimensional model.
- the above-mentioned reduction of the number of genus in the three-dimensional model is preferably to reduce the number of genus to zero, that is, to achieve a three-dimensional model of zero genus, which will help improve the accuracy of the three-dimensional to two-dimensional plane mapping.
- The three-dimensional model can be mapped to a two-dimensional plane in a conformal manner, but this approach has certain defects: conformal mapping loses the area information of the three-dimensional object, so information is lost when the two-dimensional image is restored to three dimensions.
- the three-dimensional object is mapped to the two-dimensional plane through the area-preserving mapping method, so that the area of all parts of the three-dimensional object remains unchanged in the two-dimensional plane to facilitate subsequent further processing.
- FIG. 4 shows a flowchart of a method for mapping the three-dimensional model onto a circular plane in an area-preserving manner according to an embodiment of the present disclosure, including: in operation S410, determining the boundary of the two-dimensional plane; in operation S420, harmonically mapping the three-dimensional data to the inside of the boundary to form a harmonic mapping point; in operation S430, calculating a second weight of the harmonic mapping point, and then calculating a weighted Voronoi diagram of the harmonic mapping point; and In operation S440, according to the weighted Voronoi diagram, the three-dimensional model is mapped to a two-dimensional plane in an area-preserving manner.
- the present invention maps a three-dimensional human face onto a circular two-dimensional plane, so the two-dimensional boundary is a circle.
- the points in the non-boundary part of the three-dimensional data can be mapped to the inside of the two-dimensional plane defined by the boundary.
- These three-dimensional data can be mapped onto a two-dimensional plane by means of harmonic mapping.
- Fig. 5 shows a flow chart of the harmonic mapping of three-dimensional data to the inside of the boundary to form a harmonic mapping point.
- Harmonically mapping the three-dimensional data to the interior of the boundary to form harmonic mapping points (S420) includes: in operation S4201, initializing the three-dimensional data to form mapping points in the two-dimensional plane; in operation S4203, calculating the harmonic energy between the mapping points in the two-dimensional plane; in operation S4205, when the harmonic energy is greater than a preset energy gradient threshold, adjusting the coordinates of the mapping points and recomputing the harmonic energy from the adjusted coordinates, and stopping the adjustment when the harmonic energy is less than the preset energy gradient threshold; and, in operation S4207, taking the coordinates of the mapping points at the time the adjustment stops as the harmonic mapping points.
- The energy gradient threshold δE is preset.
- All three-dimensional data points can be mapped to the above two-dimensional plane. Initially, all three-dimensional points can be mapped to the position (0, 0); of course, this is only an example, and the three-dimensional points can also initially be distributed evenly over the two-dimensional plane, i.e. with equal spacing between the points.
- E(f) represents the harmonic energy of all the mapping points. It will be understood that the initial harmonic energy may be the largest; thereafter, the position of each mapping point is adjusted step by step so that the harmonic energy gradually decreases and finally falls below the preset energy gradient threshold. At that point, the harmonic state has been reached.
- the energy between all points belonging to the two-dimensional plane (excluding boundary points) and their adjacent points is calculated.
- Specifically, the squared difference between the positions of adjacent mapping points is first calculated; the first product of this squared value and the first weight of the edge formed by the adjacent mapping points is then calculated; and the sum of the first products over all mapping points gives the initial harmonic energy.
- If the initial harmonic energy is greater than the aforementioned energy gradient threshold δE, the positions of the corresponding points are adjusted and a new harmonic energy E is recalculated, with the harmonic energy calculated in the previous round denoted E_0.
- The harmonic energy is E(f) = Σ k_ij · ||f(v_i) − f(v_j)||², summed over the edges [v_i, v_j] of M, where v_i is the i-th point, v_j is the j-th point adjacent to v_i, f(v_i) represents the position of point v_i, M is the triangular mesh, and k_ij is the weight of the edge [v_i, v_j].
- The first weight of an edge formed by adjacent mapping points is calculated by determining the angles of the triangles containing the edge: if the edge is shared by two triangles, its first weight equals half of the sum of the cotangents of the angles opposite the edge in the two triangles; if the edge lies on the boundary, its first weight equals half of the cotangent of the angle opposite the edge in the triangle containing it.
- In Fig. 6, the edge determined by points i and j is shared by two triangles, while the edge determined by i and l is a boundary edge. The angles opposite edge e_ij in the two triangles are α and β, and the angle opposite edge e_il in its triangle is γ. The weights of the two edges are therefore k_ij = (cot α + cot β) / 2 and k_il = (cot γ) / 2.
- Fig. 7 shows a flowchart of calculating the second weight of the harmonic mapping point, and then calculating the weighted Voronoi diagram of the harmonic mapping point according to an embodiment of the present invention.
- calculating the second weight of the harmonic mapping point, and then calculating the weighted Voronoi diagram of the harmonic mapping point includes:
- In operation S4301, the second weight of each harmonic mapping point is initialized, where at least three harmonic mapping points constitute an initial face; in operation S4303, the weighted dual point of each initial face is determined, the weighted distances from the weighted dual point to the vertices of that initial face being equal, where the weighted dual points are connected to form dual faces and the multiple dual faces determine the weighted Voronoi diagram; and, in operation S4305, the second weight of each harmonic mapping point is updated, and the weighted Voronoi diagram is readjusted according to the updated second weights.
- a weighted Voronoi diagram is determined on the basis of the formed harmonic map points.
- Fig. 8 shows an example of a Voronoi diagram.
- The Voronoi diagram is a dual form of a mesh (not limited to a triangular mesh). Taking a triangular mesh as an example, each face in the mesh corresponds to a dual point in the Voronoi diagram (a vertex of the dotted lines in Fig. 8), and the distances from this dual point to the three vertices of the face (i.e. the harmonic mapping points mentioned above, the vertices of the solid lines in Fig. 8) are equal. Conversely, each point in the original mesh (a harmonic mapping point in the present invention) corresponds to a dual face in the Voronoi diagram, as shown in Fig. 8. The difference between a weighted Voronoi diagram and an ordinary Voronoi diagram is that each point in the original mesh carries a weight.
- The weighted distance is d(q, v) = ||q − v||² + ω_v, where d(q, v) is the weighted distance between q and v, and ω_v is the weight of point v.
- Fig. 9 shows a flowchart of mapping the three-dimensional model to a two-dimensional plane in an area-preserving manner according to an embodiment of the present invention.
- According to the weighted Voronoi diagram, mapping the three-dimensional model to the two-dimensional plane in an area-preserving manner includes: in operation S4401, determining the center of gravity of each dual face in the weighted Voronoi diagram; and, in operation S4403, mapping each harmonic mapping point to the center of gravity of its dual face, thereby mapping the three-dimensional model to the two-dimensional plane in an area-preserving manner.
- the three-dimensional data of the human face can be mapped to the two-dimensional plane in an area-preserving manner.
- Fig. 10a, Fig. 10b and Fig. 10c respectively show the original image of the three-dimensional image, the two-dimensional image after harmonic mapping and the two-dimensional image after area-preserving mapping.
- Figure 10a it is an image of an ordinary human face, from which the three-dimensional information of the human face can be extracted and a three-dimensional model can be constructed.
- Fig. 10b it is a two-dimensional image formed after the face of Fig. 10a is subjected to harmonic mapping. In this two-dimensional image, the angle is maintained, but the area of each part is different from the area of the three-dimensional image.
- FIG. 10c it is a two-dimensional image formed after the two-dimensional image of FIG. 10b is further subjected to area-preserving mapping.
- the points in Figure 10b have been further stretched and adjusted. It should be understood that in the image in Figure 10c, part of the angle information is retained, while part of the area and shape information is also saved.
- the present invention facilitates the preservation of three-dimensional data, and through the present invention, the area and shape information of the picture can be preserved, so that the area and shape information is not lost when the two-dimensional image is restored to the three-dimensional image.
- A method for processing face data, including: acquiring point cloud information of a human face through a scanning device to obtain a three-dimensional model of the human face; and mapping the three-dimensional model onto a circular plane in an area-preserving manner to form a two-dimensional face map.
- Clause A2 The method according to clause A1, further comprising: performing topological repair on the formed three-dimensional model.
- Mapping the three-dimensional model onto a circular plane in an area-preserving manner includes: determining the boundary of the two-dimensional plane; harmonically mapping the three-dimensional data to the interior of the boundary to form harmonic mapping points; calculating the second weight of the harmonic mapping points and then calculating the weighted Voronoi diagram of the harmonic mapping points; and, according to the weighted Voronoi diagram, mapping the three-dimensional model to the two-dimensional plane in an area-preserving manner.
- Clause A6. The method according to any one of clauses A1-A5, wherein harmonically mapping the three-dimensional data to the interior of the boundary to form harmonic mapping points includes: initializing the three-dimensional data to form mapping points in the two-dimensional plane; calculating the harmonic energy between the mapping points in the two-dimensional plane; when the harmonic energy is greater than a preset energy gradient threshold, adjusting the coordinates of the mapping points and recomputing the harmonic energy from the adjusted coordinates; when the harmonic energy is less than the preset energy gradient threshold, stopping the adjustment; and taking the coordinates of the mapping points at the time the adjustment stops as the harmonic mapping points.
- Clause A8. The method according to any one of clauses A1-A7, wherein the first weight of an edge formed by adjacent mapping points is calculated by determining the angles of the triangles containing the edge: if the edge is shared by two triangles, its first weight equals half of the sum of the cotangents of the angles opposite the edge in the two triangles; if the edge lies on the boundary, its first weight equals half of the cotangent of the angle opposite the edge in the triangle containing it.
- According to the weighted Voronoi diagram, mapping the three-dimensional model to the two-dimensional plane in an area-preserving manner includes: determining the center of gravity of each dual face in the weighted Voronoi diagram; and mapping each harmonic mapping point to the center of gravity of its dual face, so that the three-dimensional model is mapped to the two-dimensional plane in an area-preserving manner.
- A device for processing face data, comprising: a processor; and a memory connected to the processor, the memory storing computer program code which, when executed, causes the processor to execute the method described in any one of clauses A1-A11.
- Clause A13 A computer-readable storage medium with computer-readable instructions stored thereon, and when the computer-readable instructions are executed by one or more processors, the method as described in any one of clauses A1-A11 is implemented.
- the apparatus for testing an application program may include at least one processing unit and at least one storage unit.
- The storage unit stores program code which, when executed by the processing unit, causes the processing unit to execute the steps of the method for testing an application program according to the various exemplary embodiments of the present invention described in the "Exemplary Methods" section of this specification.
- Various aspects of the present invention can also be implemented in the form of a program product that includes program code; when the program product runs on a device, the program code causes the device to execute the steps of the method described above.
- the program product can adopt any combination of one or more readable media.
- the readable medium may be a readable signal medium or a readable storage medium.
- the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or a combination of any of the above. More specific examples (non-exhaustive list) of readable storage media include: electrical connections with one or more wires, portable disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable Type programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
- the readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, and readable program code is carried therein. This propagated data signal can take many forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- the readable signal medium may also be any readable medium other than a readable storage medium, and the readable medium may send, propagate, or transmit a program for use by or in combination with the instruction execution system, apparatus, or device.
- the program code contained on the readable medium can be transmitted by any suitable medium, including, but not limited to, wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
- the program code used to perform the operations of the present invention can be written in any combination of one or more programming languages.
- Programming languages include object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code can be executed entirely on the user's computing device, partly on the user's computing device and partly executed on the remote computing device, or entirely executed on the remote computing device or server.
- The remote computing device can be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computing device (for example, through the Internet using an Internet service provider).
- LAN local area network
- WAN wide area network
- Internet services for example, using Internet services
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Software Systems (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- Biomedical Technology (AREA)
- General Engineering & Computer Science (AREA)
- Computer Hardware Design (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Pathology (AREA)
- Architecture (AREA)
- Condensed Matter Physics & Semiconductors (AREA)
- High Energy & Nuclear Physics (AREA)
- Optics & Photonics (AREA)
- Quality & Reliability (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Processing (AREA)
- Processing Or Creating Images (AREA)
- Image Analysis (AREA)
Abstract
A face data processing method, including: acquiring a photo of a human face through a scanning device, obtaining point cloud information from structured light stripes, and thereby obtaining a three-dimensional model of the human face (S110); and mapping the three-dimensional model onto a circular plane in an area-preserving manner to form a two-dimensional face map (S130). Converting the three-dimensional data into two-dimensional data facilitates storage of the data; in addition, because the mapping is area-preserving, the two-dimensional data can be restored to three dimensions with better quality, which facilitates reuse of the three-dimensional image.
Description
Cross-Reference to Related Applications
This application claims priority to the following patent applications: the Chinese patent application filed on November 5, 2019 with application number 201911068702.2 and entitled "Method for detecting protrusions in a folded body"; and the Chinese patent application filed on December 26, 2019 with application number 201911361346.3 and entitled "Face data processing method, device, and computer-readable storage medium", which are hereby incorporated by reference in their entirety.
The present invention relates generally to the field of imaging, and more specifically to the field of three-dimensional data.
Generally, for a three-dimensional object, the corresponding three-dimensional data usually includes three-dimensional coordinates (X, Y, Z) and may additionally include color (RGB) and/or brightness (Intensity) information. Therefore, storing three-dimensional data usually requires a relatively complex storage format and a relatively large amount of storage space, which makes the data inconvenient to store.
In addition, compared with three-dimensional data, two-dimensional data is simpler to analyze and requires less computation. Therefore, if three-dimensional data can be reduced to two dimensions, further research and analysis of the data becomes easier.
On the other hand, when two-dimensional data is restored to three dimensions, as much of the original three-dimensional information as possible needs to be preserved. If too much information is lost during dimensionality reduction, the three-dimensional image reconstructed from the two-dimensional data will suffer from distortion and other problems, which hinders further use of the three-dimensional image.
Summary of the Invention
An object of the present disclosure is to provide a method and device capable of optimizing face data for subsequent further use.
The present invention provides a face data processing method, including: acquiring point cloud information of a human face through a scanning device to obtain a three-dimensional model of the face; and mapping the three-dimensional model onto a circular plane in an area-preserving manner to form a two-dimensional face map.
According to one embodiment of the present invention, the method further includes: performing topology repair on the obtained three-dimensional model.
According to one embodiment of the present invention, performing topology repair on the obtained three-dimensional model includes: determining the positions of genus (handles) in the three-dimensional model; and eliminating the genus to reduce the number of genus in the three-dimensional model.
According to one embodiment of the present invention, mapping the three-dimensional model onto a circular plane in an area-preserving manner includes: determining the boundary of the two-dimensional plane; harmonically mapping the three-dimensional data to the interior of the boundary to form harmonic mapping points; calculating a second weight for each harmonic mapping point and then calculating the weighted Voronoi diagram of the harmonic mapping points; and, according to the weighted Voronoi diagram, mapping the three-dimensional model to the two-dimensional plane in an area-preserving manner.
According to one embodiment of the present invention, determining the boundary of the two-dimensional plane includes: determining a closed curve L in the three-dimensional model; storing the points of L in a linked list vlist, where vlist = {v_0, v_1, ..., v_{n-1}} and v_0 and v_n are the same point; and calculating the length S of L.
According to one embodiment of the present invention, harmonically mapping the three-dimensional data to the interior of the boundary to form harmonic mapping points includes: initializing the three-dimensional data to form mapping points in the two-dimensional plane; calculating the harmonic energy between the mapping points in the two-dimensional plane; when the harmonic energy is greater than a preset energy gradient threshold, adjusting the coordinates of the mapping points and adjusting the harmonic energy according to the adjusted coordinates of the mapping points; when the harmonic energy is less than the preset energy gradient threshold, stopping the adjustment; and taking the coordinates of the mapping points at the time the adjustment stops as the harmonic mapping points.
According to one embodiment of the present invention, calculating the harmonic energy between the mapping points in the two-dimensional plane includes: calculating the squared difference between the positions of adjacent mapping points; calculating the first product of the squared value and the first weight of the edge formed by the adjacent mapping points; and summing the first products over all mapping points.
According to one embodiment of the present invention, the first weight of an edge formed by adjacent mapping points is calculated as follows: the angles of the triangles containing the edge are determined; if the edge is shared by two triangles, its first weight equals half of the sum of the cotangents of the angles opposite the edge in the two triangles; if the edge lies on the boundary, its first weight equals half of the cotangent of the angle opposite the edge in the triangle containing it.
According to one embodiment of the present invention, calculating the second weight of the harmonic mapping points and then calculating the weighted Voronoi diagram of the harmonic mapping points includes: initializing the second weight of each harmonic mapping point, where at least three harmonic mapping points constitute an initial face; determining the weighted dual point of each initial face, the weighted distances from the weighted dual point to the vertices of each initial face being equal, where the weighted dual points are connected to form dual faces and the multiple dual faces determine the weighted Voronoi diagram; and updating the second weight of each harmonic mapping point and readjusting the weighted Voronoi diagram according to the updated second weights.
According to one embodiment of the present invention, updating the second weight of each harmonic mapping point includes: determining the area A_i of the initial face of each harmonic mapping point; determining the area A_i' of the dual face of each harmonic mapping point; determining the area gradient g_i = A_i − A_i' of each harmonic mapping point; determining the sum of the squares of the area gradients of all harmonic mapping points; and, if the sum of squares is greater than a preset weight threshold, decreasing the second weight until the second weight is less than the weight threshold.
According to one embodiment of the present invention, mapping the three-dimensional model to a two-dimensional plane in an area-preserving manner according to the weighted Voronoi diagram includes: determining the center of gravity of each dual face in the weighted Voronoi diagram; and mapping each harmonic mapping point to the center of gravity of its dual face, thereby mapping the three-dimensional model to the two-dimensional plane in an area-preserving manner.
According to a second aspect of the present invention, a face data processing device is provided, including: a processor; and a memory connected to the processor, the memory storing computer program code which, when executed, causes the processor to execute the method described above.
According to a third aspect of the present invention, a computer-readable storage medium is provided, having computer-readable instructions stored thereon which, when executed by one or more processors, implement the method described above.
The beneficial effects of the technical solution of the present invention include, but are not limited to: converting three-dimensional data into two-dimensional data facilitates storage of the data; in addition, because the three-dimensional data is mapped in an area-preserving manner, the two-dimensional data can be restored to three dimensions with better quality, which facilitates reuse of the three-dimensional image.
The above features of the present invention, as well as its numerous objects, features, and advantages, will become apparent to those skilled in the art by reference to the accompanying drawings, in which the same reference numerals denote the same elements, and in which:
Fig. 1 shows a flowchart of a face data processing method according to one aspect of the present invention;
Fig. 2 shows a flowchart of a face data processing method according to another aspect of the present invention;
Fig. 3 shows a flowchart of a method for performing topology repair on a formed three-dimensional model according to one aspect of the present invention;
Fig. 4 shows a flowchart of a method for mapping the three-dimensional model onto a circular plane in an area-preserving manner according to one embodiment of the present disclosure;
Fig. 5 shows a flowchart of harmonically mapping three-dimensional data to the interior of the boundary to form harmonic mapping points;
Fig. 6 shows a schematic diagram of calculating the weight of each edge;
Fig. 7 shows a flowchart of calculating the second weight of the harmonic mapping points and then calculating the weighted Voronoi diagram of the harmonic mapping points according to one embodiment of the present invention;
Fig. 8 shows an example of a Voronoi diagram;
Fig. 9 shows a flowchart of mapping the three-dimensional model to a two-dimensional plane in an area-preserving manner according to one embodiment of the present invention; and
Fig. 10a, Fig. 10b, and Fig. 10c respectively show the original three-dimensional image, the two-dimensional image after harmonic mapping, and the two-dimensional image after area-preserving mapping.
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort fall within the scope of protection of the present invention.
It should be understood that the terms "first", "second", "third", "fourth", and the like in the claims, description, and drawings of the present disclosure are used to distinguish different objects, not to describe a particular order. The terms "include" and "comprise" used in the description and claims of the present disclosure indicate the presence of the described features, wholes, steps, operations, elements, and/or components, but do not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components, and/or collections thereof.
It should also be understood that the terms used in this description of the present disclosure are for the purpose of describing particular embodiments only and are not intended to limit the present disclosure. As used in the description and claims of the present disclosure, the singular forms "a", "an", and "the" are intended to include the plural forms unless the context clearly indicates otherwise. It should further be understood that the term "and/or" used in the description and claims of the present disclosure refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in this description and the claims, the term "if" can be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" can be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a face data processing method according to one aspect of the present invention.
As shown in Fig. 1, the method of the present invention includes: operation S110, acquiring a photo of a human face through a scanning device, obtaining point cloud information from structured light stripes, and thereby obtaining a three-dimensional model of the human face; and operation S130, mapping the three-dimensional model onto a circular plane in an area-preserving manner to form a two-dimensional face map.
The set of point data on the outer surface of an object obtained by a measuring instrument is also called a point cloud. The number of points obtained with a three-dimensional coordinate measuring machine is usually relatively small and the spacing between points relatively large; this is called a sparse point cloud. A point cloud obtained with a three-dimensional laser scanner or a photographic scanner contains a relatively large number of densely spaced points and is called a dense point cloud.
A point cloud can generally include three-dimensional coordinates (XYZ) and reflection intensity (Intensity). In another embodiment, a point cloud obtained by photogrammetry may include color information (RGB) in addition to the three-dimensional coordinates (XYZ). In another embodiment, a point cloud obtained by combining laser measurement and photogrammetry may include three-dimensional coordinates (XYZ), laser reflection intensity (Intensity), and color information (RGB).
Because of the above information, a point cloud can represent fairly detailed face information; the denser the point cloud, the more information it contains and the higher the accuracy of the mapping.
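The per-point record described above (coordinates plus optional intensity and color) can be held in a simple structured array. The sketch below is only an illustration of that layout; the field names and dtype choices are assumptions, not taken from the patent.

```python
import numpy as np

# Illustrative record layout for one scanned point: 3D coordinates plus the
# optional reflection intensity and RGB color fields mentioned above.
point_dtype = np.dtype([
    ("xyz", np.float32, (3,)),
    ("intensity", np.float32),
    ("rgb", np.uint8, (3,)),
])

cloud = np.zeros(100_000, dtype=point_dtype)  # e.g. a dense cloud of 100k points
```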
The two-dimensional plane onto which the three-dimensional face model is mapped can have various shapes, but preferably the plane is a circle, because the shape of a human face is closer to a circle; this helps reduce distortion during the mapping and shortens the convergence time of the algorithm.
In addition, the present invention maps in an area-preserving manner, thereby effectively retaining the area information of the face; this reduces the loss of information when restoring the three-dimensional face data and makes the restored three-dimensional face image more usable.
According to one embodiment of the present invention, as shown in Fig. 2, the method may further include, in operation S120, performing topology repair on the three-dimensional model. Topology repair is geometric shape repair that turns the imported model into a closed surface so that the model becomes a whole. Geometry that has not undergone topology repair may have missing faces or lines, or incorrectly connected faces.
It will be understood that, because of image segmentation errors, a three-dimensional model usually contains many spurious genus (handles). These spurious genus need to be detected and eliminated.
These handles are very small and cannot be detected directly by the naked eye. A practical approach is to find them by computational topology methods, which often rely on algorithms for the handle loops and tunnel loops of the surface. After the spurious handles are obtained, they are cut open along the handle loops, and the resulting holes are filled to remove the topological noise.
Therefore, according to one embodiment of the present invention, as shown in Fig. 3, performing topology repair on the obtained three-dimensional model includes: in operation S1201, determining the positions of genus in the three-dimensional model; and in operation S1203, eliminating the determined genus to reduce the number of genus in the three-dimensional model.
The reduction of the number of genus in the three-dimensional model described above preferably reduces the number of genus to zero, i.e. achieves a genus-zero three-dimensional model, which helps improve the accuracy of the three-dimensional to two-dimensional plane mapping.
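Whether the repair has actually driven the genus to zero can be checked from the mesh combinatorics alone. The sketch below is a minimal way to do that, assuming a triangle mesh given as an array of vertex indices; it uses the Euler characteristic χ = V − E + F and, for an orientable surface with b boundary loops, the relation χ = 2 − 2g − b. It only verifies the genus; detecting and cutting the handle loops themselves, as described above, is a separate computational-topology step not shown here.

```python
import numpy as np
from collections import defaultdict

def euler_genus(faces, n_vertices):
    """Return (euler_characteristic, n_boundary_loops, genus) of a triangle mesh."""
    edge_count = defaultdict(int)
    for a, b, c in np.asarray(faces):
        for u, v in ((a, b), (b, c), (c, a)):
            edge_count[(min(u, v), max(u, v))] += 1

    V, E, F = n_vertices, len(edge_count), len(faces)
    chi = V - E + F                                   # Euler characteristic

    # Boundary edges belong to exactly one face; each boundary loop is one
    # connected component of the boundary edge graph (union-find below).
    boundary_edges = [e for e, k in edge_count.items() if k == 1]
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in boundary_edges:
        parent[find(u)] = find(v)
    loops = len({find(u) for e in boundary_edges for u in e})

    genus = (2 - chi - loops) // 2                    # chi = 2 - 2g - b
    return chi, loops, genus
```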
The method for mapping the three-dimensional model to a two-dimensional plane in an area-preserving manner is described in detail below.
It should be understood that there are many ways to map a three-dimensional model to a two-dimensional plane. For example, the model can be mapped conformally, but this approach has certain defects: conformal mapping loses the area information of the three-dimensional object, so information is lost when the two-dimensional image is restored to three dimensions.
In the present invention, the three-dimensional object is mapped to the two-dimensional plane by area-preserving mapping, so that the areas of all parts of the three-dimensional object remain unchanged in the two-dimensional plane, which facilitates subsequent further processing.
Fig. 4 shows a flowchart of a method for mapping the three-dimensional model onto a circular plane in an area-preserving manner according to one embodiment of the present disclosure, including: in operation S410, determining the boundary of the two-dimensional plane; in operation S420, harmonically mapping the three-dimensional data to the interior of the boundary to form harmonic mapping points; in operation S430, calculating the second weight of the harmonic mapping points and then calculating the weighted Voronoi diagram of the harmonic mapping points; and, in operation S440, according to the weighted Voronoi diagram, mapping the three-dimensional model to the two-dimensional plane in an area-preserving manner.
As can be seen from the above, the present invention maps the three-dimensional face onto a circular two-dimensional plane, so the two-dimensional boundary is a circle.
According to one embodiment of the present invention, determining the boundary of the two-dimensional plane may include: determining a closed curve L in the three-dimensional model; storing the points of L in a linked list vlist, where vlist = {v_0, v_1, ..., v_{n-1}} and v_0 and v_n are the same point; and calculating the length S of L.
As can be seen from the above, the boundary of the circle is actually determined as a polygonal boundary; the more sampling points are taken, the closer the polygon is to a circle.
It can also be seen from the above that the coordinates of the points described above are actually polar coordinates. It should be understood that polar coordinates are only one option; any other type of coordinate system can also be used.
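The text available here stores the boundary loop L in vlist and computes its length S, but does not reproduce the formula that places the boundary points on the circle. A common choice, shown below purely as an assumption, is to distribute the points over the circle in proportion to their arc length along L, which yields the polar coordinates mentioned above.

```python
import numpy as np

def map_boundary_to_circle(vlist, positions, radius=1.0):
    """Place the closed boundary curve L onto a circle.

    vlist     : ordered vertex indices of L (v_0 ... v_{n-1}; v_n is v_0).
    positions : (N, 3) array of 3D vertex coordinates.
    Returns {vertex index: (x, y)} 2D coordinates on the circle.
    """
    pts = positions[np.asarray(vlist)]
    # Segment lengths along L, including the closing segment back to v_0.
    seg = np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1)
    S = seg.sum()                                     # total length S of L
    s = np.concatenate(([0.0], np.cumsum(seg[:-1])))  # arc length to each v_i
    theta = 2.0 * np.pi * s / S                       # polar angle of each v_i
    return {v: (radius * np.cos(t), radius * np.sin(t))
            for v, t in zip(vlist, theta)}
```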
After the boundary is determined, the points of the non-boundary part of the three-dimensional data can be mapped to the interior of the two-dimensional plane defined by the boundary. These three-dimensional data can be mapped onto the two-dimensional plane by harmonic mapping.
Expressed informally, when the three-dimensional model is mapped to the two-dimensional plane, the interior of the model is pulled by the deformation of the boundary and spreads toward the boundary; the direction in which each point spreads is the combined result of the forces of all the points around it. When no part changes any more, a "harmonic" state has been reached.
Fig. 5 shows a flowchart of harmonically mapping the three-dimensional data to the interior of the boundary to form harmonic mapping points.
As shown in Fig. 5, harmonically mapping the three-dimensional data to the interior of the boundary to form harmonic mapping points (S420) includes: in operation S4201, initializing the three-dimensional data to form mapping points in the two-dimensional plane; in operation S4203, calculating the harmonic energy between the mapping points in the two-dimensional plane; in operation S4205, when the harmonic energy is greater than a preset energy gradient threshold, adjusting the coordinates of the mapping points and adjusting the harmonic energy according to the adjusted coordinates, and stopping the adjustment when the harmonic energy is less than the preset energy gradient threshold; and, in operation S4207, taking the coordinates of the mapping points at the time the adjustment stops as the harmonic mapping points.
Each of the above operation steps is described in detail below.
For a mesh M, an energy gradient threshold δE is set in advance.
For a non-boundary point, its mapped position f(v_i), i.e. the position of the point in the two-dimensional figure, is initialized. According to one embodiment of the present invention, all three-dimensional data points can be mapped into the above two-dimensional plane; initially, all three-dimensional points can be mapped to the position (0, 0). Of course, this is only an example; all three-dimensional points can also initially be distributed evenly over the two-dimensional plane, i.e. so that the points are equally spaced in the two-dimensional plane.
Next, the initial harmonic energy E is calculated, i.e. the harmonic energy between the above mapping points in the two-dimensional plane. The harmonic energy is calculated as E(f) = Σ k_ij · ||f(v_i) − f(v_j)||², where the sum runs over the edges [v_i, v_j] of M.
In the above formula, E(f) denotes the harmonic energy of all the mapping points. It will be understood that the initial harmonic energy may be the largest; thereafter the position of each mapping point is adjusted step by step so that the harmonic energy gradually decreases and finally falls below the preset energy gradient threshold. At that point, the harmonic state can be reached.
In the above formula, the energy between every point belonging to the two-dimensional plane (excluding boundary points) and its adjacent points is calculated. According to one embodiment of the present invention, the squared difference between the positions of adjacent mapping points is first calculated; the first product of this squared value and the first weight of the edge formed by the adjacent mapping points is then calculated; and the sum of the first products over all mapping points gives the initial harmonic energy.
If the initial harmonic energy is greater than the above energy gradient threshold δE, the positions of the corresponding points are adjusted and a new harmonic energy E is recalculated, with the harmonic energy calculated in the previous round denoted E_0.
Next, the difference between the new harmonic energy E and the previous round's harmonic energy E_0 is calculated, i.e. it is checked whether |E − E_0| is greater than the preset harmonic energy gradient threshold δE. This loop continues until the difference between the new harmonic energy E and the previous round's harmonic energy E_0 is no longer greater than the preset harmonic energy gradient threshold δE. At that point, the energy gradient between all points is smallest, and the harmonic state is reached.
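A minimal sketch of the energy bookkeeping described above follows: it sums k_ij · ||f(v_i) − f(v_j)||² over the edges and repeats the point adjustment until |E − E_0| falls below the threshold δE. The edge_weights, interior, and update_point arguments are assumed containers and callbacks and are not named in the patent; the per-point update itself is discussed next.

```python
import numpy as np

def harmonic_energy(pos2d, edge_weights):
    """E(f) = sum of k_ij * ||f(v_i) - f(v_j)||^2 over the edges [v_i, v_j]."""
    E = 0.0
    for (i, j), k_ij in edge_weights.items():
        d = pos2d[i] - pos2d[j]
        E += k_ij * float(d @ d)
    return E

def relax_until_harmonic(pos2d, edge_weights, interior, update_point, dE=1e-6):
    """Adjust interior points until |E - E_0| is no longer above the threshold dE."""
    E0 = harmonic_energy(pos2d, edge_weights)
    while True:
        for i in interior:                      # boundary points stay fixed
            pos2d[i] = update_point(i, pos2d)   # per-point coordinate update
        E = harmonic_energy(pos2d, edge_weights)
        if abs(E - E0) <= dE:
            return pos2d
        E0 = E
```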
The coordinates of each mapping point are calculated from its neighbors by a formula in which v_i denotes the i-th point, v_j denotes the j-th point adjacent to i, f(v_i) denotes the position of point v_i, M denotes a triangular mesh surface, and k_ij is the weight of the edge [v_i, v_j].
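The update formula itself is not reproduced in the text available here. One standard way to lower the harmonic energy, offered only as an assumption-laden sketch, is to move each interior point to the k_ij-weighted average of its neighbors (a Gauss-Seidel step for the discrete Laplace equation); it uses the same v_i, v_j, f(v_i), and k_ij named above.

```python
import numpy as np

def weighted_neighbor_average(i, pos2d, neighbors, edge_weights):
    """Assumed per-point update: move v_i to the k_ij-weighted mean of its
    neighbors. The patent's own update formula is not reproduced here."""
    num, den = np.zeros(2), 0.0
    for j in neighbors[i]:
        k = edge_weights[(min(i, j), max(i, j))]
        num += k * pos2d[j]
        den += k
    return num / den
```

With this choice, relax_until_harmonic above would be called with update_point set to a small wrapper such as lambda i, pos: weighted_neighbor_average(i, pos, neighbors, edge_weights).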
According to one embodiment of the present disclosure, the first weight of an edge formed by adjacent mapping points is calculated as follows: the angles of the triangles containing the edge are determined; if the edge is shared by two triangles, its first weight equals half of the sum of the cotangents of the angles opposite the edge in the two triangles; if the edge lies on the boundary, its first weight equals half of the cotangent of the angle opposite the edge in the triangle containing it.
For a triangular mesh, an edge generally falls into one of two cases: an edge shared by two triangles, or a boundary edge, as shown in Fig. 6.
In Fig. 6, the edge determined by points i and j is shared by two triangles, while the edge determined by i and l is a boundary edge. The angles opposite edge e_ij in the two triangles are α and β respectively, and the angle opposite edge e_il in its triangle is γ. The weights of the two edges are therefore calculated as k_ij = (cot α + cot β) / 2 and k_il = (cot γ) / 2.
It can be seen that, as the positions of the points are continuously adjusted, the angles of each triangle also change continuously, so the edge weights also keep changing. Because this adjustment converges, however, the edge weights gradually become constant, and the mapping of the figure reaches the harmonic state.
In other words, as can be seen from the above description, as each mapping point is adjusted, the harmonic energy gradually decreases and finally becomes smaller than the specified harmonic energy gradient threshold, thereby achieving the harmonic mapping.
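A sketch of the first-weight computation described above follows. It accumulates, for every edge, half the cotangent of the angle opposite that edge in each incident triangle, so interior edges receive (cot α + cot β)/2 and boundary edges (cot γ)/2. The container and function names are assumptions; the positions passed in can be the current mapping points, consistent with the remark that the weights change as the points move.

```python
import numpy as np
from collections import defaultdict

def cotangent_edge_weights(points, faces):
    """First weights k_ij from the angles opposite each edge."""
    def cot_at(apex, p, q):
        u, v = p - apex, q - apex
        dot = float(u @ v)
        # |u x v| via |u|^2 |v|^2 - (u.v)^2, valid for 2D and 3D vectors
        cross = np.sqrt(max(float(u @ u) * float(v @ v) - dot * dot, 0.0))
        return dot / cross

    weights = defaultdict(float)
    for a, b, c in faces:
        pa, pb, pc = points[a], points[b], points[c]
        # The angle at c is opposite edge (a, b), and so on cyclically.
        weights[(min(a, b), max(a, b))] += 0.5 * cot_at(pc, pa, pb)
        weights[(min(b, c), max(b, c))] += 0.5 * cot_at(pa, pb, pc)
        weights[(min(c, a), max(c, a))] += 0.5 * cot_at(pb, pc, pa)
    return dict(weights)
```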
Fig. 7 shows a flowchart of calculating the second weight of the harmonic mapping points and then calculating the weighted Voronoi diagram of the harmonic mapping points according to one embodiment of the present invention.
As shown in Fig. 7, calculating the second weight of the harmonic mapping points and then calculating the weighted Voronoi diagram of the harmonic mapping points includes:
In operation S4301, initializing the second weight of each harmonic mapping point, where at least three harmonic mapping points constitute an initial face; in operation S4303, determining the weighted dual point of each initial face, the weighted distances from the weighted dual point to the vertices of the initial face being equal, where the weighted dual points are connected to form dual faces and the multiple dual faces determine the weighted Voronoi diagram; and, in operation S4305, updating the second weight of each harmonic mapping point and readjusting the weighted Voronoi diagram according to the updated second weights.
First, according to one embodiment of the present invention, a weighted Voronoi diagram is determined on the basis of the harmonic mapping points formed above. Fig. 8 shows an example of a Voronoi diagram.
As shown in Fig. 8, the Voronoi diagram is a dual form of a mesh (not limited to a triangular mesh). Taking a triangular mesh as an example, each face in the mesh corresponds to a dual point in the Voronoi diagram (a vertex of the dotted lines); the distances from this dual point to the three vertices of the face (i.e. the harmonic mapping points mentioned above, the vertices of the solid lines in Fig. 8) are equal, and each point in the original mesh (a harmonic mapping point in the present invention) corresponds to a dual face in the Voronoi diagram, as shown in Fig. 8. The difference between a weighted Voronoi diagram and an ordinary Voronoi diagram is that each point in the original mesh has a weight: when calculating distances, the ordinary distance d = ||v − q||² becomes, after weighting, d = ||v − q||² + ω. Adding the weight ω makes the size of each Voronoi cell change with the weight: the larger the weight of a vertex of a face, the farther the weighted circumcenter of that face is from that vertex in Euclidean distance, so the area of the dual face corresponding to that vertex becomes larger.
The method of Fig. 7 is explained in detail below.
First, the weight of each point is initialized to ω_i = 0, and a weight threshold ε is given, e.g. ε = 10^-3.
For each face f_i = [v_a, v_b, v_c] in M, its weighted dual point q_i is calculated using the following system of equations, where v_a, v_b, v_c denote the three vertices of each solid-line triangle: the weighted distances d from q_i to these three points are equal, with d(q, v) = |v − q|² + ω_v.
Here d(q, v) is the weighted distance between q and v, and ω_v is the weight of point v.
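Equating the three weighted distances d(q, v) = |q − v|² + ω_v gives two linear equations in the two unknown coordinates of q_i, because the |q|² terms cancel. A minimal sketch of that solve, with assumed argument names, is:

```python
import numpy as np

def weighted_dual_point(va, vb, vc, wa, wb, wc):
    """Point q with equal weighted distance |q - v|^2 + w_v to the three
    vertices of a face [v_a, v_b, v_c] (2D coordinates as numpy arrays).

    Setting d(q, va) = d(q, vb) and d(q, va) = d(q, vc) and expanding the
    squares yields a 2x2 linear system in q."""
    A = 2.0 * np.array([vb - va, vc - va])
    rhs = np.array([
        (vb @ vb + wb) - (va @ va + wa),
        (vc @ vc + wc) - (va @ va + wa),
    ])
    return np.linalg.solve(A, rhs)
```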
For each solid-line edge in M, the weighted dual points q on both sides of the edge are connected to form a new dotted-line edge, which serves as the dual edge of the solid-line edge.
The new graph formed by these dual edges is the weighted Voronoi diagram Ω. Each harmonic mapping point corresponds to a dual face in the weighted Voronoi diagram Ω, namely a cell Cell_i, and the current area of each point is A'_i = area(Cell_i).
The gradient of each point is calculated as g_i = A_i − A'_i, and G = {g_0, g_1, ..., g_n} is formed. If ||G||² < ε, the iterative update stops; otherwise ω_i = ω_i − λ·g_i, where λ is the step size of the gradient descent method, which needs to be tuned and is generally set to a value less than 1; Newton's method or the like can also be used to iteratively calculate the new weights. Here A_i is the target area of each harmonic mapping point, i.e. the area corresponding to the harmonic mapping point in the three-dimensional object.
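The weight update loop described above can be sketched as follows. The construction of the weighted Voronoi diagram and of its cell areas is left to a hypothetical callback (cell_areas_fn), since that step is not spelled out as code here; the loop itself follows the rule ω_i = ω_i − λ·g_i with g_i = A_i − A'_i and the stopping test ||G||² < ε.

```python
import numpy as np

def fit_weights(points2d, target_areas, cell_areas_fn,
                step=0.1, eps=1e-3, max_iter=1000):
    """Gradient-style update of the second weights ω_i.

    target_areas : A_i, the area associated with each harmonic mapping point
                   on the original 3D surface.
    cell_areas_fn: hypothetical callable(points2d, omega) -> A'_i, the areas
                   of the dual cells of the weighted Voronoi diagram.
    """
    omega = np.zeros(len(points2d))                # initialize ω_i = 0
    for _ in range(max_iter):
        current = cell_areas_fn(points2d, omega)   # A'_i = area(Cell_i)
        g = target_areas - current                 # g_i = A_i - A'_i
        if float(g @ g) < eps:                     # ||G||^2 < ε: stop
            break
        omega = omega - step * g                   # ω_i = ω_i - λ g_i
    return omega
```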
Fig. 9 shows a flowchart of mapping the three-dimensional model to a two-dimensional plane in an area-preserving manner according to one embodiment of the present invention.
As shown in Fig. 9, mapping the three-dimensional model to a two-dimensional plane in an area-preserving manner according to the weighted Voronoi diagram includes: in operation S4401, determining the center of gravity of each dual face in the weighted Voronoi diagram; and, in operation S4403, mapping each harmonic mapping point to the center of gravity of its dual face, thereby mapping the three-dimensional model to the two-dimensional plane in an area-preserving manner.
Thus, by the above method, the three-dimensional data of the human face can be mapped to the two-dimensional plane in an area-preserving manner.
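Operations S4401 and S4403 reduce to computing the center of gravity of each dual cell and moving the corresponding harmonic mapping point there. A sketch using the shoelace centroid of a polygonal cell is given below; the cells argument (one polygon per mapping point) is an assumed representation of the weighted Voronoi diagram.

```python
import numpy as np

def polygon_centroid(poly):
    """Center of gravity of a simple polygon given as an (n, 2) array of
    counter-clockwise vertices (shoelace formula)."""
    x, y = poly[:, 0], poly[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y
    area = 0.5 * cross.sum()
    cx = ((x + xn) * cross).sum() / (6.0 * area)
    cy = ((y + yn) * cross).sum() / (6.0 * area)
    return np.array([cx, cy])

def move_points_to_cell_centroids(points2d, cells):
    """Map each harmonic mapping point to the center of gravity of its dual
    face (cells[i] is the polygon of the dual cell of point i)."""
    return np.array([polygon_centroid(cells[i]) for i in range(len(points2d))])
```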
Fig. 10a, Fig. 10b, and Fig. 10c respectively show the original three-dimensional image, the two-dimensional image after harmonic mapping, and the two-dimensional image after area-preserving mapping.
Fig. 10a shows an image of an ordinary human face; the three-dimensional information of the face can be extracted from this image and a three-dimensional model can be constructed.
Fig. 10b shows the two-dimensional image formed after the face of Fig. 10a undergoes harmonic mapping. In this two-dimensional image, the angles are maintained, but the area of each part differs from the area in the three-dimensional image.
Fig. 10c shows the two-dimensional image formed after the two-dimensional image of Fig. 10b further undergoes area-preserving mapping. In this two-dimensional image, the points of Fig. 10b have been further stretched and adjusted. It should be understood that in the image of Fig. 10c, part of the angle information is retained, while part of the area and shape information is also preserved.
The present invention facilitates the storage of three-dimensional data, and through the present invention the area and shape information of the picture can be preserved, so that the area and shape information is not lost when the two-dimensional image is restored to a three-dimensional image.
The foregoing may be better understood in light of the following clauses:
Clause A1. A face data processing method, including: acquiring point cloud information of a human face through a scanning device to obtain a three-dimensional model of the face; and mapping the three-dimensional model onto a circular plane in an area-preserving manner to form a two-dimensional face map.
Clause A2. The method according to clause A1, further including: performing topology repair on the formed three-dimensional model.
Clause A3. The method according to clause A1 or A2, wherein performing topology repair on the formed three-dimensional model includes: determining the positions of genus in the three-dimensional model; and eliminating the genus to reduce the number of genus in the three-dimensional model.
Clause A4. The method according to any one of clauses A1-A3, wherein mapping the three-dimensional model onto a circular plane in an area-preserving manner includes: determining the boundary of the two-dimensional plane; harmonically mapping the three-dimensional data to the interior of the boundary to form harmonic mapping points; calculating the second weight of the harmonic mapping points and then calculating the weighted Voronoi diagram of the harmonic mapping points; and, according to the weighted Voronoi diagram, mapping the three-dimensional model to the two-dimensional plane in an area-preserving manner.
Clause A5. The method according to any one of clauses A1-A4, wherein determining the boundary of the two-dimensional plane includes: determining a closed curve L in the three-dimensional model; storing the points of L in a linked list vlist, where vlist = {v_0, v_1, ..., v_{n-1}} and v_0 and v_n are the same point; and calculating the length S of L.
Clause A6. The method according to any one of clauses A1-A5, wherein harmonically mapping the three-dimensional data to the interior of the boundary to form harmonic mapping points includes: initializing the three-dimensional data to form mapping points in the two-dimensional plane; calculating the harmonic energy between the mapping points in the two-dimensional plane; when the harmonic energy is greater than a preset energy gradient threshold, adjusting the coordinates of the mapping points and adjusting the harmonic energy according to the adjusted coordinates of the mapping points; when the harmonic energy is less than the preset energy gradient threshold, stopping the adjustment; and taking the coordinates of the mapping points at the time the adjustment stops as the harmonic mapping points.
Clause A7. The method according to any one of clauses A1-A6, wherein calculating the harmonic energy between the mapping points in the two-dimensional plane includes: calculating the squared difference between the positions of adjacent mapping points; calculating the first product of the squared value and the first weight of the edge formed by the adjacent mapping points; and summing the first products over all mapping points.
Clause A8. The method according to any one of clauses A1-A7, wherein the first weight of an edge formed by adjacent mapping points is calculated as follows: the angles of the triangles containing the edge are determined; if the edge is shared by two triangles, its first weight equals half of the sum of the cotangents of the angles opposite the edge in the two triangles; if the edge lies on the boundary, its first weight equals half of the cotangent of the angle opposite the edge in the triangle containing it.
Clause A9. The method according to any one of clauses A1-A8, wherein calculating the second weight of the harmonic mapping points and then calculating the weighted Voronoi diagram of the harmonic mapping points includes: initializing the second weight of each harmonic mapping point, where at least three harmonic mapping points constitute an initial face; determining the weighted dual point of each initial face, the weighted distances from the weighted dual point to the vertices of each initial face being equal, where the weighted dual points are connected to form dual faces and the multiple dual faces determine the weighted Voronoi diagram; and updating the second weight of each harmonic mapping point and readjusting the weighted Voronoi diagram according to the updated second weights.
Clause A10. The method according to any one of clauses A1-A9, wherein updating the second weight of each harmonic mapping point includes: determining the area A_i of the initial face of each harmonic mapping point; determining the area A_i' of the dual face of each harmonic mapping point; determining the area gradient g_i = A_i − A_i' of each harmonic mapping point; determining the sum of the squares of the area gradients of all harmonic mapping points; and, if the sum of squares is greater than a preset weight threshold, decreasing the second weight until the second weight is less than the weight threshold.
Clause A11. The method according to any one of clauses A1-A10, wherein mapping the three-dimensional model onto the two-dimensional plane in an area-preserving manner according to the weighted Voronoi diagram comprises: determining the center of gravity of each dual face in the weighted Voronoi diagram; and mapping each harmonic mapping point to the center of gravity of its dual face, thereby mapping the three-dimensional model onto the two-dimensional plane in an area-preserving manner.
Clause A12. A device for processing human face data, comprising: a processor; and a memory connected to the processor, the memory storing computer program code which, when executed, causes the processor to perform the method according to any one of clauses A1-A11.
Clause A13. A computer-readable storage medium having computer-readable instructions stored thereon which, when executed by one or more processors, implement the method according to any one of clauses A1-A11.
Exemplary Apparatus
Those skilled in the art will appreciate that the various aspects of the present invention may be implemented as a system, a method or a program product. Therefore, the various aspects of the present invention may be embodied in the following forms: an entirely hardware implementation, an entirely software implementation (including firmware, microcode, etc.), or an implementation combining hardware and software, which may be collectively referred to herein as a "circuit", "module" or "system".
In some possible implementations, an apparatus according to an embodiment of the present invention may comprise at least one processing unit and at least one storage unit. The storage unit stores program code which, when executed by the processing unit, causes the processing unit to perform the steps of the methods according to the various exemplary embodiments of the present invention described in the "Exemplary Method" section of this specification above.
Exemplary Program Product
In some possible implementations, the various aspects of the present invention may also be implemented in the form of a program product comprising program code; when the program product runs on a device, the program code causes the device to perform the steps of the methods according to the various exemplary embodiments of the present invention described in the "Exemplary Method" section of this specification above.
The program product may employ any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fibre, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which readable program code is carried. Such a propagated data signal may take a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium, which can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device.
The program code contained on a readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
Program code for carrying out the operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In scenarios involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computing device (for example, through the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the detailed description above, this division is merely illustrative and not mandatory. In fact, according to embodiments of the present invention, the features and functions of two or more units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided so as to be embodied by a plurality of units.
Furthermore, although the operations of the methods of the present invention are described in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, in order to achieve the desired results. Additionally or alternatively, certain steps may be omitted, a plurality of steps may be combined into one step for execution, and/or one step may be decomposed into a plurality of steps for execution.
Although the spirit and principles of the present invention have been described with reference to several specific embodiments, it should be understood that the present invention is not limited to the specific embodiments disclosed, and the division into aspects does not mean that features in these aspects cannot be combined to advantage; this division is merely for convenience of presentation. The present invention is intended to cover the various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
Claims (13)
- A method for processing human face data, comprising: acquiring a photograph of a human face by means of a scanning device, obtaining point cloud information by means of structured-light fringes, and thereby obtaining a three-dimensional model of the face; and mapping the three-dimensional model onto a circular plane in an area-preserving manner to form a two-dimensional face map.
- The method according to claim 1, further comprising: performing topological repair on the obtained three-dimensional model.
- The method according to claim 2, wherein performing topological repair on the obtained three-dimensional model comprises: determining the positions of genus in the three-dimensional model; and eliminating the genus to reduce the number of genus in the three-dimensional model.
- The method according to claim 1, wherein mapping the three-dimensional model onto the circular plane in an area-preserving manner comprises: determining a boundary of the two-dimensional plane; harmonically mapping the three-dimensional data into the interior of the boundary to form harmonic mapping points; computing second weights of the harmonic mapping points, and then computing a weighted Voronoi diagram of the harmonic mapping points; and mapping the three-dimensional model onto the two-dimensional plane in an area-preserving manner according to the weighted Voronoi diagram.
- The method according to claim 4, wherein harmonically mapping the three-dimensional data into the interior of the boundary to form the harmonic mapping points comprises: initializing the three-dimensional data to form mapping points in the two-dimensional plane; computing a harmonic energy between the mapping points in the two-dimensional plane; when the harmonic energy is greater than a preset energy gradient threshold, adjusting the coordinates of the mapping points and adjusting the harmonic energy according to the adjusted coordinates of the mapping points, and when the harmonic energy is less than the preset energy gradient threshold, stopping the adjustment; and taking the coordinates of the mapping points at the time the adjustment is stopped as the harmonic mapping points.
- The method according to claim 6, wherein computing the harmonic energy between the mapping points in the two-dimensional plane comprises: computing the squared value of the positional difference between adjacent mapping points; computing a first product of the squared value and the first weight of the edge formed by the adjacent mapping points; and computing the sum of the first products over all mapping points.
- The method according to claim 7, wherein the first weight of the edge formed by adjacent mapping points is computed as follows: determining the angles of the triangle(s) corresponding to the edge; if the edge is shared by two triangles, the first weight of the edge equals half the sum of the cotangents of the angles opposite the edge in the two triangles; if the edge is on the boundary, the first weight of the edge equals half the cotangent of the angle opposite the edge in the triangle that contains it.
- The method according to claim 4, wherein computing the second weights of the harmonic mapping points and then computing the weighted Voronoi diagram of the harmonic mapping points comprises: initializing the second weight of each harmonic mapping point, wherein at least three harmonic mapping points form an initial face; determining a weighted dual point of each initial face, the weighted dual point being at equal weighted distance from every vertex of the initial face, wherein the weighted dual points are connected to form dual faces, and the plurality of dual faces determines the weighted Voronoi diagram; and updating the second weight of each harmonic mapping point, and re-adjusting the weighted Voronoi diagram according to the updated second weights.
- The method according to claim 9, wherein updating the second weight of each harmonic mapping point comprises: determining the area A_i of the initial face of each harmonic mapping point; determining the area A'_i of the dual face of each harmonic mapping point; determining the area gradient g_i = A_i - A'_i of each harmonic mapping point; determining the sum of the squares of the area gradients of all harmonic mapping points; and, if the sum of squares is greater than a preset weight threshold, reducing the second weights until the sum of squares is less than the weight threshold.
- The method according to claim 4, wherein mapping the three-dimensional model onto the two-dimensional plane in an area-preserving manner according to the weighted Voronoi diagram comprises: determining the center of gravity of each dual face in the weighted Voronoi diagram; and mapping each harmonic mapping point to the center of gravity of its dual face, thereby mapping the three-dimensional model onto the two-dimensional plane in an area-preserving manner.
- A device for processing human face data, comprising: a processor; and a memory connected to the processor, the memory storing computer program code which, when executed, causes the processor to perform the method according to any one of claims 1-11.
- A computer-readable storage medium having computer-readable instructions stored thereon which, when executed by one or more processors, implement the method according to any one of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/773,918 US11922632B2 (en) | 2019-11-05 | 2020-11-04 | Human face data processing method and device, and computer-readable storage medium |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911068702.2 | 2019-11-05 | ||
CN201911068702 | 2019-11-05 | ||
CN201911361346.3A CN110766808B (zh) | 2019-11-05 | 2019-12-26 | 人脸数据的处理方法、设备和计算机可读存储介质 |
CN201911361346.3 | 2019-12-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021088867A1 true WO2021088867A1 (zh) | 2021-05-14 |
Family
ID=69341569
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/126489 WO2021088869A1 (zh) | 2019-11-05 | 2020-11-04 | 对大脑的三维数据进行平面化处理的方法、设备和计算机可读存储介质 |
PCT/CN2020/126487 WO2021088867A1 (zh) | 2019-11-05 | 2020-11-04 | 人脸数据的处理方法、设备和计算机可读存储介质 |
PCT/CN2020/126488 WO2021088868A1 (zh) | 2019-11-05 | 2020-11-04 | 检测肠道中突起物的方法、终端和计算机可读存储介质 |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/126489 WO2021088869A1 (zh) | 2019-11-05 | 2020-11-04 | 对大脑的三维数据进行平面化处理的方法、设备和计算机可读存储介质 |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/126488 WO2021088868A1 (zh) | 2019-11-05 | 2020-11-04 | 检测肠道中突起物的方法、终端和计算机可读存储介质 |
Country Status (3)
Country | Link |
---|---|
US (3) | US20220351388A1 (zh) |
CN (4) | CN110766692B (zh) |
WO (3) | WO2021088869A1 (zh) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110766692B (zh) * | 2019-11-05 | 2020-04-21 | 北京智拓视界科技有限责任公司 | 检测肠道中突起物的方法、终端和计算机可读存储介质 |
CN112287743B (zh) * | 2020-07-04 | 2021-06-22 | 滨州医学院附属医院 | 老年性痴呆预防娱乐一体系统 |
CN113240788A (zh) * | 2020-07-08 | 2021-08-10 | 北京智拓视界科技有限责任公司 | 三维数据的传输和接收方法、设备和计算机可读存储介质 |
CN112150612A (zh) * | 2020-09-23 | 2020-12-29 | 上海眼控科技股份有限公司 | 三维模型构建方法、装置、计算机设备及存储介质 |
CN113486825A (zh) * | 2021-07-12 | 2021-10-08 | 上海锐瞻智能科技有限公司 | 非接触式指纹采集装置及其方法、系统、介质 |
CN113538219A (zh) * | 2021-07-15 | 2021-10-22 | 北京智拓视界科技有限责任公司 | 三维人脸数据的传输和接收方法、设备和计算机可读存储介质 |
CN118644508B (zh) * | 2024-08-15 | 2024-10-15 | 顺通信息技术科技(大连)有限公司 | 基于机器视觉的肠镜图像智能识别方法 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017097439A (ja) * | 2015-11-18 | 2017-06-01 | 凸版印刷株式会社 | 導電性フィルム、タッチパネル、および、表示装置 |
CN108875813A (zh) * | 2018-06-04 | 2018-11-23 | 北京工商大学 | 一种基于几何图像的三维网格模型检索方法 |
CN110046543A (zh) * | 2019-02-27 | 2019-07-23 | 视缘(上海)智能科技有限公司 | 一种基于平面参数化的三维人脸识别方法 |
CN110288642A (zh) * | 2019-05-25 | 2019-09-27 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | 基于相机阵列的三维物体快速重建方法 |
CN110473459A (zh) * | 2018-05-11 | 2019-11-19 | 兰州交通大学 | 基于网络Voronoi图的点群选取 |
CN110766808A (zh) * | 2019-11-05 | 2020-02-07 | 北京智拓视界科技有限责任公司 | 人脸数据的处理方法、设备和计算机可读存储介质 |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2004044689A2 (en) | 2002-11-06 | 2004-05-27 | Geometric Informatics Inc. | Analysis of geometric surfaces by conformal structure |
CN101276484A (zh) * | 2008-03-31 | 2008-10-01 | 浙江大学 | 基于调和映射的网格生成方法 |
US8405659B2 (en) * | 2009-06-24 | 2013-03-26 | International Business Machines Corporation | System and method for establishing correspondence, matching and repairing three dimensional surfaces of arbitrary genus and arbitrary topology in two dimensions using global parameterization |
EP2714193B1 (en) * | 2011-06-03 | 2019-12-18 | Nexstim Oy | Method of overlaying nbs functional data on a live image of a brain |
US20130066219A1 (en) * | 2011-09-09 | 2013-03-14 | Jingfeng Jiang | Method for Assessing The Efficacy of a Flow-Diverting Medical Device in a Blood Vessel |
US9972128B2 (en) * | 2012-07-20 | 2018-05-15 | The University Of British Columbia | Methods and systems for generating polycubes and all-hexahedral meshes of an object |
US20140172377A1 (en) | 2012-09-20 | 2014-06-19 | Brown University | Method to reconstruct a surface from oriented 3-d points |
CN105261052B (zh) * | 2015-11-03 | 2018-09-18 | 沈阳东软医疗系统有限公司 | 管腔图像展开绘制方法及装置 |
CN105741270B (zh) * | 2016-05-16 | 2019-02-22 | 杭州职业技术学院 | 千兆电子计算机断层扫描的前列腺三维图像分割方法 |
CN107146287B (zh) * | 2017-03-22 | 2019-08-02 | 西北大学 | 二维投影图像至三维模型的映射方法 |
CN107679515A (zh) * | 2017-10-24 | 2018-02-09 | 西安交通大学 | 一种基于曲面调和形状图像深度表示的三维人脸识别方法 |
CN109416939B (zh) | 2017-12-05 | 2022-04-26 | 北京师范大学 | 面向群体应用的经颅脑图谱生成方法、预测方法及其装置 |
KR102015099B1 (ko) * | 2018-01-25 | 2019-10-21 | 전자부품연구원 | 거리정보를 이용한 랩어라운드뷰 영상제공장치 및 방법 |
US20190270118A1 (en) * | 2018-03-01 | 2019-09-05 | Jake Araujo-Simon | Cyber-physical system and vibratory medium for signal and sound field processing and design using dynamical surfaces |
CN108648231B (zh) * | 2018-05-14 | 2019-07-12 | 合肥融视信息科技有限公司 | 基于三维医学影像的管状结构长度测量系统及方法 |
CN109410195B (zh) * | 2018-10-19 | 2020-12-22 | 山东第一医科大学(山东省医学科学院) | 一种磁共振成像脑分区方法及系统 |
US10922884B2 (en) * | 2019-07-18 | 2021-02-16 | Sony Corporation | Shape-refinement of triangular three-dimensional mesh using a modified shape from shading (SFS) scheme |
-
2019
- 2019-12-26 CN CN201911361335.5A patent/CN110766692B/zh active Active
- 2019-12-26 CN CN201911361346.3A patent/CN110766808B/zh active Active
-
2020
- 2020-03-27 CN CN202010226115.8A patent/CN111127314A/zh active Pending
- 2020-10-20 CN CN202011126641.3A patent/CN112381706B/zh active Active
- 2020-11-04 WO PCT/CN2020/126489 patent/WO2021088869A1/zh active Application Filing
- 2020-11-04 US US17/774,204 patent/US20220351388A1/en active Pending
- 2020-11-04 US US17/773,918 patent/US11922632B2/en active Active
- 2020-11-04 WO PCT/CN2020/126487 patent/WO2021088867A1/zh active Application Filing
- 2020-11-04 US US17/773,932 patent/US11823390B2/en active Active
- 2020-11-04 WO PCT/CN2020/126488 patent/WO2021088868A1/zh active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017097439A (ja) * | 2015-11-18 | 2017-06-01 | 凸版印刷株式会社 | 導電性フィルム、タッチパネル、および、表示装置 |
CN110473459A (zh) * | 2018-05-11 | 2019-11-19 | 兰州交通大学 | 基于网络Voronoi图的点群选取 |
CN108875813A (zh) * | 2018-06-04 | 2018-11-23 | 北京工商大学 | 一种基于几何图像的三维网格模型检索方法 |
CN110046543A (zh) * | 2019-02-27 | 2019-07-23 | 视缘(上海)智能科技有限公司 | 一种基于平面参数化的三维人脸识别方法 |
CN110288642A (zh) * | 2019-05-25 | 2019-09-27 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | 基于相机阵列的三维物体快速重建方法 |
CN110766808A (zh) * | 2019-11-05 | 2020-02-07 | 北京智拓视界科技有限责任公司 | 人脸数据的处理方法、设备和计算机可读存储介质 |
Also Published As
Publication number | Publication date |
---|---|
US11823390B2 (en) | 2023-11-21 |
US20220351388A1 (en) | 2022-11-03 |
US20220392015A1 (en) | 2022-12-08 |
WO2021088869A1 (zh) | 2021-05-14 |
WO2021088868A1 (zh) | 2021-05-14 |
CN110766692B (zh) | 2020-04-21 |
CN110766808B (zh) | 2020-03-27 |
CN112381706B (zh) | 2024-02-02 |
US11922632B2 (en) | 2024-03-05 |
CN110766692A (zh) | 2020-02-07 |
US20220366528A1 (en) | 2022-11-17 |
CN110766808A (zh) | 2020-02-07 |
CN112381706A (zh) | 2021-02-19 |
CN111127314A (zh) | 2020-05-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021088867A1 (zh) | 人脸数据的处理方法、设备和计算机可读存储介质 | |
US10706567B2 (en) | Data processing method, apparatus, system and storage media | |
CN107909612B (zh) | 一种基于3d点云的视觉即时定位与建图的方法与系统 | |
US8711143B2 (en) | System and method for interactive image-based modeling of curved surfaces using single-view and multi-view feature curves | |
JP6807639B2 (ja) | 奥行きカメラを校正する方法 | |
WO2019075128A1 (en) | METHOD, APPARATUS, DEVICE AND COMPUTER STORAGE MEDIUM FOR SCORING POINT CLOUDING | |
Tagliasacchi et al. | Vase: Volume‐aware surface evolution for surface reconstruction from incomplete point clouds | |
JP5870011B2 (ja) | 点群解析装置、点群解析方法及び点群解析プログラム | |
Cenanovic et al. | Finite element procedures for computing normals and mean curvature on triangulated surfaces and their use for mesh refinement | |
EP4238063A1 (en) | Head position extrapolation based on a 3d model and image data | |
CN113393577B (zh) | 一种倾斜摄影地形重建方法 | |
WO2021222386A1 (en) | Photometric-based 3d object modeling | |
Hu et al. | Surface segmentation for polycube construction based on generalized centroidal Voronoi tessellation | |
CN112733641A (zh) | 物体尺寸测量方法、装置、设备及存储介质 | |
CN114998433A (zh) | 位姿计算方法、装置、存储介质以及电子设备 | |
US11783501B2 (en) | Method and apparatus for determining image depth information, electronic device, and media | |
Vančo et al. | Surface reconstruction from unorganized point data with quadrics | |
CN110570511A (zh) | 点云数据的处理方法、装置、系统和存储介质 | |
Wu et al. | Point cloud registration algorithm based on the volume constraint | |
WO2016045298A1 (zh) | 阴影体的建立方法及装置 | |
Li et al. | Multi-resolution representation of digital terrain models with terrain features preservation | |
CN114419250B (zh) | 点云数据矢量化方法及装置、矢量地图生成方法及装置 | |
Solem et al. | Variational surface interpolation from sparse point and normal data | |
CN116708995B (zh) | 摄影构图方法、装置及摄影设备 | |
Sun et al. | Filling holes in triangular meshes of plant organs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20884631 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11/10/2022) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20884631 Country of ref document: EP Kind code of ref document: A1 |