WO2020034743A1 - Three-dimensional model processing method, apparatus, electronic device, and readable storage medium - Google Patents

Three-dimensional model processing method, apparatus, electronic device, and readable storage medium

Info

Publication number
WO2020034743A1
WO2020034743A1 (application PCT/CN2019/091543, CN2019091543W)
Authority
WO
WIPO (PCT)
Prior art keywords
region
dimensional model
normal vector
density
angle
Prior art date
Application number
PCT/CN2019/091543
Other languages
English (en)
French (fr)
Inventor
张弓
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Priority to EP19850109.0A priority Critical patent/EP3839894A4/en
Publication of WO2020034743A1 publication Critical patent/WO2020034743A1/zh
Priority to US17/173,722 priority patent/US11403819B2/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205 Re-meshing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/60 Memory management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/56 Particle system, point based geometry or rendering

Definitions

  • the present disclosure relates to the technical field of mobile terminals, and in particular, to a method, an apparatus, an electronic device, and a readable storage medium for processing a three-dimensional model.
  • A three-dimensional model is a mathematical model suited to computer representation and processing. It is the basis for processing, manipulating, and analyzing an object's properties in a computer environment, and a key technology for establishing virtual reality that expresses the objective world in a computer. Usually, the key points in the three-dimensional model are processed to realize the reconstruction of the model.
  • In the related art, the same key point density is used throughout the three-dimensional model, and the setting of the key point density has a great influence on the rendering of the three-dimensional model.
  • the present disclosure aims to solve at least one of the technical problems in the related art.
  • In view of this, the present disclosure proposes a three-dimensional model processing method to adjust the density of key points in the three-dimensional model, so that different regions use different key point densities. This not only maintains the accuracy of the three-dimensional model, but also greatly reduces the number of key points of the entire model, thereby greatly reducing the memory footprint and improving the processing speed.
  • the present disclosure proposes a three-dimensional model processing device.
  • the present disclosure proposes an electronic device.
  • the present disclosure proposes a computer-readable storage medium.
  • An embodiment of one aspect of the present disclosure provides a three-dimensional model processing method, including:
  • the three-dimensional model includes a plurality of key points and a plurality of split planes obtained by connecting adjacent key points as vertices;
  • the keypoint density of the corresponding region in the three-dimensional model is adjusted.
  • the method for processing a three-dimensional model includes obtaining a three-dimensional model.
  • the three-dimensional model includes multiple key points, and includes multiple split planes obtained by connecting adjacent key points as vertices. For each area, the target keypoint density corresponding to each area is determined according to the angle information of the split plane in each area; the keypoint density of the corresponding area in the three-dimensional model is adjusted according to the target keypoint density corresponding to each area.
  • In this way, different regions use different key point densities, which not only maintains the accuracy of the details of the 3D model, but also greatly reduces the number of key points in the entire model, thereby greatly reducing memory consumption and improving processing speed.
  • Another embodiment of the present disclosure provides a three-dimensional model processing apparatus, including:
  • An acquisition module for acquiring a three-dimensional model; wherein the three-dimensional model includes a plurality of key points and a plurality of split planes obtained by connecting adjacent key points as vertices;
  • a determining module configured to determine the density of target keypoints corresponding to each region for each region in the three-dimensional model according to the angle information of the split plane in each region;
  • the adjusting module is configured to adjust the key point density of the corresponding region in the three-dimensional model according to the target key point density corresponding to each region.
  • the three-dimensional model processing device obtains a three-dimensional model.
  • the three-dimensional model includes a plurality of key points and a plurality of split planes obtained by connecting adjacent key points as vertices. For each area, the target keypoint density corresponding to each area is determined according to the angle information of the split plane in each area; the keypoint density of the corresponding area in the three-dimensional model is adjusted according to the target keypoint density corresponding to each area.
  • In this way, different regions use different key point densities, which not only maintains the accuracy of the details of the 3D model, but also greatly reduces the number of key points in the entire model, thereby greatly reducing memory consumption and improving processing speed.
  • An embodiment of another aspect of the present disclosure provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor.
  • When the processor executes the program, the three-dimensional model processing method according to the foregoing embodiment is implemented.
  • An embodiment of another aspect of the present disclosure provides a computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the three-dimensional model processing method according to the foregoing embodiment is implemented.
  • FIG. 1 is a schematic flowchart of a three-dimensional model processing method according to an embodiment of the present disclosure.
  • FIG. 2 is a schematic flowchart of determining the flatness of each region according to an embodiment of the present disclosure.
  • FIG. 3 is a schematic flowchart of another three-dimensional model processing method according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of a three-dimensional model processing apparatus according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of another three-dimensional model processing apparatus according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of an internal structure of an electronic device in an embodiment.
  • FIG. 7 is a schematic diagram of an image processing circuit as a possible implementation manner.
  • FIG. 8 is a schematic diagram of an image processing circuit as another possible implementation manner.
  • FIG. 1 is a schematic flowchart of a three-dimensional model processing method according to an embodiment of the present disclosure.
  • the electronic device may be a hardware device such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device, which has various operating systems, a touch screen, and / or a display screen.
  • the three-dimensional model processing method includes the following steps:
  • Step 101 Obtain a three-dimensional model.
  • the three-dimensional model includes a plurality of key points and a plurality of split planes obtained by connecting adjacent key points as vertices.
  • the three-dimensional model obtained in this embodiment includes a plurality of key points and a plurality of split planes obtained by connecting adjacent key points as vertices.
  • the key points and the cutting plane can be expressed in the form of three-dimensional coordinates.
  • the three-dimensional model obtained in this embodiment may be a three-dimensional model of a human face.
  • the three-dimensional model of the human face is obtained by performing three-dimensional reconstruction based on the depth information and the face image, rather than by simply acquiring RGB data and depth data.
  • specifically, the depth information may be fused with the color information corresponding to the two-dimensional face image to obtain a three-dimensional model of the face.
  • the key points of the images are registered and fused.
  • a three-dimensional model of the face is generated based on the fused key points.
  • the key point is a conspicuous point on a human face or a point at a key position, for example, the key point may be a corner of an eye, a tip of a nose, a corner of a mouth, and the like.
  • keypoint recognition can be performed on the face image to obtain the keypoints corresponding to the face image and the relative position of each keypoint in three-dimensional space, so that multiple split planes can be obtained by connecting adjacent keypoints as vertices.
  • Step 102 For each region in the three-dimensional model, determine the target keypoint density corresponding to each region according to the angle information of the split plane in each region.
  • the three-dimensional model may be divided into a plurality of regions according to a preset radius, and a plurality of split planes are included in each region, thereby obtaining angle information of each split plane.
  • the angle information of each split plane may be its angle with an adjacent split plane; after the split planes are obtained, the angle information of each split plane may be obtained from the angle between neighboring split planes.
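As an illustrative sketch of computing this angle information (assuming triangular split planes; the helper names are hypothetical, not from the disclosure), the angle between two adjacent split planes can be taken as the angle between their unit normal vectors:

```python
import math

def plane_normal(a, b, c):
    """Unit normal of the triangle (a, b, c) via the cross product of two edges."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(x * x for x in n)) or 1.0
    return [x / length for x in n]

def angle_between(n1, n2):
    """Angle in degrees between two unit vectors, with the dot product clamped."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))

# Two triangles sharing an edge: a coplanar pair gives an angle near 0 degrees.
n1 = plane_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
n2 = plane_normal((1, 0, 0), (1, 1, 0), (0, 1, 0))
print(angle_between(n1, n2))  # ~0 for coplanar triangles
```

A small angle indicates a locally flat surface; a large angle indicates a sharp feature such as the nose ridge in the face example below.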
  • when the angle information of the split planes in a region is larger, the flatness of the region is lower; the smaller the angles of the split planes, the flatter the region. If the difference between the flatness of two adjacent regions is lower than the difference threshold, the two adjacent regions are merged.
  • the difference threshold is preset according to the overall structure of the three-dimensional model.
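The merging of adjacent regions with similar flatness can be sketched as follows; this greedy one-dimensional scheme, the flatness values, and the threshold are illustrative assumptions, not the disclosure's connected-domain method:

```python
def merge_adjacent(flatness, diff_threshold):
    """Greedily merge a sequence of per-region flatness values; a merged
    region is represented by the mean flatness of its two members."""
    merged = [flatness[0]]
    for f in flatness[1:]:
        if abs(f - merged[-1]) < diff_threshold:
            merged[-1] = (merged[-1] + f) / 2.0  # fold into previous region
        else:
            merged.append(f)
    return merged

regions = [1.0, 1.2, 5.0, 5.1]  # hypothetical flatness scores
print(merge_adjacent(regions, diff_threshold=0.5))  # two merged regions remain
```

Merging similar neighbours before the later binarization step reduces the number of regions that must be processed.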
  • the degree of flatness of each region of the face can be determined by calculating the angle between two adjacent split planes in each region. For example, when two adjacent split planes in a certain region are both located in the facial area of a human face, the included angle between them may be 2 degrees, indicating that the facial area is relatively flat; when, of two adjacent split planes in a region, one is located in the facial area of the human face and the other is located on the nose, the included angle between them may be 60 degrees, indicating that the flatness is relatively low.
  • after the flatness of each region is determined, the corresponding target keypoint density in each region is further determined. Specifically, when a region is judged to be relatively flat, the target keypoint density of the region can be set relatively low; when the flatness of a region is judged to be relatively low, a higher target keypoint density can be set for the region.
  • this is because, in a relatively flat region, fewer target keypoints are enough to represent the face model of the region; therefore, for relatively flat regions such as the forehead, the target keypoint density is relatively low. For regions with rich details such as the eyes and lips, setting fewer target keypoints may not clearly represent the face model of the region, so the target keypoint density is set relatively high.
  • Step 103 Adjust the keypoint density of the corresponding region in the three-dimensional model according to the target keypoint density corresponding to each region.
  • the key point density of each region in the three-dimensional model is adjusted.
  • when the keypoint density of a region in the 3D model is higher than the corresponding target keypoint density, it is necessary to delete some keypoints in the region, so that after deletion the keypoint density of the region is less than or equal to the corresponding target keypoint density. Then, the adjacent keypoints among the keypoints retained in the region are reconnected as vertices.
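This density adjustment can be sketched as follows; the even-stride subsampling and the function name are illustrative assumptions (the disclosure does not specify which keypoints to delete), and re-triangulation of the survivors is omitted:

```python
import math

def thin_keypoints(keypoints, current_density, target_density):
    """Keep an evenly spaced subset so the density drops to <= the target."""
    if current_density <= target_density:
        return list(keypoints)  # already at or below the target density
    k = math.ceil(current_density / target_density)  # keep every k-th point
    return keypoints[::k]

pts = list(range(20))  # stand-in for 20 keypoints in one region
kept = thin_keypoints(pts, current_density=4.0, target_density=1.0)
print(len(kept))  # 5 of the original 20 remain
```

After thinning, the retained keypoints would be reconnected as vertices to rebuild the split planes of the region.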
  • the method for processing a three-dimensional model includes obtaining a three-dimensional model.
  • the three-dimensional model includes multiple key points, and includes multiple split planes obtained by connecting adjacent key points as vertices. For each area, the target keypoint density corresponding to each area is determined according to the angle information of the split plane in each area; the keypoint density of the corresponding area in the three-dimensional model is adjusted according to the target keypoint density corresponding to each area.
  • in this way, different key point densities are adopted in different regions, which not only maintains the detail accuracy of the three-dimensional model, but also greatly reduces the number of key points of the entire model, thereby greatly reducing memory usage and improving processing speed.
  • the flatness of each region is determined according to the angle information of the division plane in each region, and then the density of the corresponding key points is determined.
  • Step 201 For each region in the three-dimensional model, determine a normal vector of a division plane in each region.
  • the three-dimensional model may be divided into a plurality of regions according to a preset radius, and adjacent key points are connected as vertices in each region to obtain a plurality of split planes.
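The region division by preset radius can be sketched as follows; the seed-based assignment scheme and radius value are assumptions for illustration, as the disclosure only states that regions are delimited by a preset radius:

```python
import math

def divide_regions(points, radius):
    """Assign each point to the first region whose seed lies within the
    preset radius; otherwise the point seeds a new region."""
    seeds, regions = [], []
    for p in points:
        for i, s in enumerate(seeds):
            if math.dist(p, s) <= radius:
                regions[i].append(p)
                break
        else:
            seeds.append(p)
            regions.append([p])
    return regions

pts = [(0, 0, 0), (0.5, 0, 0), (5, 0, 0)]
print(len(divide_regions(pts, radius=1.0)))  # 2 regions
```

Each resulting region then has its split-plane normals and flatness evaluated independently.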
  • the normal vector of each division plane is further determined, wherein the normal vector of the plane is an important vector for determining the position of the plane and refers to a non-zero vector perpendicular to the plane.
  • Step 202 Determine the normal vector of the same vertex according to the normal vector of the split planes containing the same vertex.
  • specifically, the normal vectors of the multiple split planes containing the same vertex are summed, and the normal vector obtained by the summation is the normal vector of that vertex.
  • for example, for any vertex X in a three-dimensional model, suppose there are three split planes A, B, and C in the model that contain vertex X. After the normal vectors of split planes A, B, and C are determined, these normal vectors are summed, and the vector obtained by the summation is the normal vector of vertex X.
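Step 202 can be sketched as below. The final normalization to unit length is an added convention for convenience; the disclosure itself only specifies the summation:

```python
import math

def vertex_normal(face_normals):
    """Sum the normals of all split planes containing the vertex,
    then normalize the sum to a unit vector."""
    s = [sum(n[i] for n in face_normals) for i in range(3)]
    length = math.sqrt(sum(x * x for x in s)) or 1.0
    return [x / length for x in s]

# Vertex X shared by three split planes A, B, C with these unit normals:
nA, nB, nC = (0.0, 0.0, 1.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
nX = vertex_normal([nA, nB, nC])
print(nX)
```

The resulting per-vertex normals are then compared between neighbouring vertices to measure how sharply the surface bends.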
  • it should be noted that the reflection of light depends on the setting of the vertex normal vectors. If the vertex normal vectors are calculated correctly, the displayed three-dimensional model is smooth and shiny; otherwise, the displayed three-dimensional model will exhibit sharp edges and blurring.
  • Step 203 Determine the flatness of each region according to the angle between the normal vectors of adjacent vertices in each region.
  • in this embodiment, the normal vector of each vertex in the three-dimensional model is determined. For each vertex in each region of the three-dimensional model, the angle between its normal vector and the normal vectors of its adjacent vertices is determined. Further, the average of the angles determined within the same region is calculated. Finally, it is determined whether the average included angle of each region is greater than a preset angle threshold, and thus whether the region is flat.
  • the angle threshold is a value set in advance according to the overall structure of the three-dimensional model.
  • if the average included angle is greater than the angle threshold, the region is not flat.
  • if the average included angle is not greater than the angle threshold, the region is flat.
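The flatness test of steps 202 and 203 can be sketched as follows; the normals, adjacency pairs, and the 10-degree threshold are illustrative values, not from the disclosure:

```python
import math

def average_angle_deg(vertex_normals, neighbor_pairs):
    """Mean angle over the given (i, j) pairs of adjacent vertex normals."""
    total = 0.0
    for i, j in neighbor_pairs:
        dot = max(-1.0, min(1.0, sum(a * b for a, b in
                                     zip(vertex_normals[i], vertex_normals[j]))))
        total += math.degrees(math.acos(dot))
    return total / len(neighbor_pairs)

def is_flat(avg_angle_deg, threshold_deg=10.0):
    """Flat when the region's average included angle stays at or below the threshold."""
    return avg_angle_deg <= threshold_deg

normals = [(0, 0, 1), (0, 0, 1), (0, 1, 0)]  # one sharp bend between vertices 1 and 2
pairs = [(0, 1), (1, 2)]
avg = average_angle_deg(normals, pairs)  # (0 + 90) / 2 = 45 degrees
print(is_flat(avg))  # False: the region is not flat
```

In practice a single threshold can be reused for all regions for simplicity, or tuned per region for precision, as the following paragraphs note.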
  • the same threshold value can be set for each region in the 3D model to simplify the amount of calculation.
  • different thresholds can be set for each region in the three-dimensional model, thereby improving the precision of the model.
  • a normal vector of a division plane in each region is determined for each region in the three-dimensional model, and then a normal vector of the same vertex is determined according to a normal vector of a division plane including the same vertex.
  • the degree of flatness of each region is determined according to the angle between the normal vectors of adjacent vertices in each region. In this way, the flatness of each region of the three-dimensional model can be determined, thereby determining the number of vertices in each region, and further improving the processing efficiency of the three-dimensional model.
  • a three-dimensional model of a human face is used as an example to simplify the three-dimensional model of a human face, thereby obtaining a simplified three-dimensional model.
  • the three-dimensional model processing method includes:
  • Step 301 Obtain a three-dimensional face model with high density vertices.
  • the three-dimensional model includes a plurality of vertices and a plurality of split planes obtained by connecting adjacent vertices.
  • the method for acquiring a three-dimensional model of a human face is similar to the method for acquiring a three-dimensional model in step 101 in the foregoing embodiment, and details are not described herein again.
  • Step 302 Calculate the normal vector of each vertex in the three-dimensional face model.
  • the reflection of light depends on the settings of the vertex normals. If the vertex normals are calculated correctly, the displayed three-dimensional model is smooth and shiny; otherwise, the displayed three-dimensional model will exhibit sharp edges and blurring.
  • the three-dimensional model is first divided into a plurality of regions according to a preset radius, and adjacent key points are connected as vertices in each region, thereby obtaining a plurality of split planes; the normal vector of each split plane is then determined.
  • for each vertex in the three-dimensional face model, all the split planes containing the vertex are found, the normal vectors of these split planes are summed, and the vector obtained by the summation is the normal vector of the vertex.
  • Step 303 For each region of the three-dimensional model of the face, calculate the included angle between the normal vectors of adjacent vertices.
  • an angle between a normal vector of each vertex and a normal vector of an adjacent vertex is obtained through calculation. Further, for an included angle between a normal vector of each vertex determined in the same region and a normal vector of an adjacent vertex, an average value of the included angle is calculated.
  • Step 304 Low-pass filter the vertices in each region in the three-dimensional model.
  • for each vertex in each region of the three-dimensional face model, after the angle between its normal vector and the normal vectors of adjacent vertices is calculated, low-pass filtering is performed according to the included angle values to filter out vertices corresponding to unusually high angles. The larger the included angle, the greater the change in the angle of the split planes; by deleting the vertices with larger included angles, each region becomes smoother.
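Step 304 can be sketched as a simple cutoff filter; the cutoff value here is a hypothetical parameter (the disclosure does not fix one), and more elaborate smoothing filters are equally possible:

```python
def low_pass_filter(vertices, angles_deg, cutoff_deg):
    """Keep only vertices whose included angle does not exceed the cutoff,
    discarding the unusually high-angle outliers."""
    return [v for v, a in zip(vertices, angles_deg) if a <= cutoff_deg]

verts = ["v0", "v1", "v2", "v3"]
angles = [3.0, 85.0, 7.0, 60.0]  # per-vertex average included angles (degrees)
kept = low_pass_filter(verts, angles, cutoff_deg=30.0)
print(kept)  # ['v0', 'v2']
```

Removing the high-angle outliers before the density decision keeps isolated noisy vertices from distorting a region's average angle.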
  • Step 305 Determine the target keypoint density of each region according to the included angle between the normal vectors of adjacent vertices in each region.
  • the average value of the included angle in each region of the three-dimensional model is compared with a preset angle threshold to determine the target keypoint density of each region.
  • the target keypoint density here may be a preset vertex density.
  • the preset vertex density specification may include a high density vertex specification and a low density vertex specification.
  • the preset angle threshold can be flexibly adjusted according to the required effect, or multiple levels of threshold can be set.
  • it should be noted that the process of determining whether each region of the three-dimensional model is flat can be regarded as a binarization of each region of the three-dimensional model. Further, before the binarization, if the difference between the flatness of two adjacent regions is lower than the difference threshold, morphological processing and connected-domain methods can be used to merge the two adjacent regions, thereby reducing the amount of subsequent binarization computation and simplifying the calculation of the model's key points, while also making the regions of the 3D model more coherent.
  • for regions whose target keypoint density is the low-density vertex specification, keypoint simplification processing is performed to obtain a simplified three-dimensional model.
  • in a relatively flat region, the keypoints are simplified, for example, one keypoint is retained out of every four. Simplifying keypoints in a relatively flat region hardly affects the imaging of the 3D model, yet greatly reduces the number of vertices in the entire 3D model. Some keypoints in the corresponding region are deleted so that, after deletion, the keypoint density is less than or equal to the target keypoint density, and the adjacent keypoints among the keypoints retained in the corresponding region are reconnected as vertices, thereby obtaining a simplified 3D model.
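The "one in every four" example above can be sketched as a stride-based subsampling; the function name is hypothetical and the re-triangulation of the retained keypoints is omitted:

```python
def simplify_flat_region(keypoints, step=4):
    """Retain one keypoint out of every `step` in a flat region."""
    return keypoints[::step]

region = [f"kp{i}" for i in range(12)]  # stand-in keypoints of one flat region
print(simplify_flat_region(region))  # ['kp0', 'kp4', 'kp8']
```

Detail-rich regions (eyes, lips) would skip this step and keep their high-density vertex specification.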
  • the method for processing a three-dimensional model obtains a three-dimensional model of a human face with a high density of vertices.
  • the three-dimensional model includes multiple vertices and multiple split planes obtained by connecting adjacent vertices. The normal vector of each vertex in the three-dimensional face model is calculated, as well as, for each region, the included angles between the normal vectors of adjacent vertices; the vertices in each region of the three-dimensional model are then low-pass filtered, the target keypoint density of each region is determined according to the angles between the normal vectors of adjacent vertices in the region, and finally keypoint simplification is performed on regions whose target keypoint density is the low-density vertex specification, yielding a simplified 3D model of the face.
  • the present disclosure also proposes a three-dimensional model processing device.
  • FIG. 4 is a schematic structural diagram of a three-dimensional model processing apparatus according to an embodiment of the present disclosure.
  • the three-dimensional model processing apparatus 100 includes: an acquisition module 110, a determination module 120, and an adjustment module 130.
  • the obtaining module 110 is configured to obtain a three-dimensional model.
  • the three-dimensional model includes a plurality of key points and a plurality of division planes obtained by connecting adjacent key points as vertices.
  • a determining module 120 is configured to determine the target keypoint density corresponding to each region according to the angle information of the split plane in each region for each region in the three-dimensional model.
  • the adjusting module 130 is configured to adjust the key point density of the corresponding region in the three-dimensional model according to the target key point density corresponding to each region.
  • in a possible implementation, the determining module 120 is further configured to, for each region in the three-dimensional model, determine the flatness of each region according to the angle information of the split planes in the region, and to determine the corresponding target keypoint density according to the flatness.
  • in a possible implementation, the determination module 120 is further configured to: determine the normal vector of each split plane in each region of the three-dimensional model; determine the normal vector of the same vertex according to the normal vectors of the split planes containing that vertex; and determine the flatness of each region based on the angles between the normal vectors of adjacent vertices in the region.
  • the determining module 120 further includes:
  • a determining unit is configured to determine an included angle between a normal vector of each vertex and a normal vector of an adjacent vertex in each region.
  • a calculation unit configured to calculate an average value of the included angle according to the included angle between the normal vector of each vertex and the normal vector of the adjacent vertex;
  • a judging unit is configured to determine whether a region is flat according to whether the average value of the included angle is greater than a preset angle threshold.
  • the determining unit is further specifically configured to sum the normal vectors of the division planes containing the same vertex; and determine the normal vectors of the same vertex according to the normal vectors obtained by the summation.
  • the three-dimensional model processing apparatus 100 further includes:
  • the delimiting module 140 is configured to delimit various regions in the three-dimensional model according to a preset radius.
  • the merging module 150 is configured to merge the two adjacent regions if the difference between the flatness of the two adjacent regions is lower than the difference threshold.
  • in a possible implementation, the adjustment module 130 is further configured to, for a region whose current keypoint density is higher than the target keypoint density, delete a part of the keypoints in the corresponding region so that the keypoint density after deletion is less than or equal to the target keypoint density, and to reconnect the adjacent keypoints among the keypoints retained in the corresponding region as vertices.
  • the three-dimensional model processing device of the embodiment of the present disclosure obtains a three-dimensional model.
  • the three-dimensional model includes a plurality of key points and a plurality of split planes obtained by connecting adjacent key points as vertices. For each area, the target keypoint density corresponding to each area is determined according to the angle information of the split plane in each area; the keypoint density of the corresponding area in the three-dimensional model is adjusted according to the target keypoint density corresponding to each area.
  • in this way, different key point densities are adopted in different regions, which not only maintains the detail accuracy of the three-dimensional model, but also greatly reduces the number of key points of the entire model, thereby greatly reducing memory usage and improving processing speed.
  • the present disclosure also proposes an electronic device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor.
  • When the processor executes the program, the three-dimensional model processing method according to the foregoing embodiment is implemented.
  • FIG. 6 is a schematic diagram of the internal structure of the electronic device 200 in one embodiment.
  • the electronic device 200 includes a processor 220, a memory 230, a display 240, and an input device 250 connected through a system bus 210.
  • the memory 230 of the electronic device 200 stores an operating system and computer-readable instructions.
  • the computer-readable instructions may be executed by the processor 220 to implement a face recognition method according to an embodiment of the present disclosure.
  • the processor 220 is used to provide computing and control capabilities to support the operation of the entire electronic device 200.
  • the display 240 of the electronic device 200 may be a liquid crystal display or an electronic ink display, and the input device 250 may be a touch layer covered on the display 240, or a button, a trackball, or a touchpad provided on the housing of the electronic device 200. It can also be an external keyboard, trackpad, or mouse.
  • the electronic device 200 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (for example, a smart bracelet, a smart watch, a smart helmet, or smart glasses).
  • FIG. 6 is only a schematic diagram of a part of the structure related to the solution of the present disclosure, and does not constitute a limitation on the electronic device 200 to which the solution of the present disclosure is applied.
  • the specific electronic device 200 may include more or fewer components than shown in the figure, or some components may be combined, or have different component arrangements.
  • an image processing circuit according to an embodiment of the present disclosure is provided.
  • the image processing circuit may be implemented by using hardware and / or software components.
  • the image processing circuit specifically includes an image unit 310, a depth information unit 320, and a processing unit 330, wherein:
  • the image unit 310 is configured to output a two-dimensional image.
  • the depth information unit 320 is configured to output depth information.
  • a two-dimensional image may be acquired through the image unit 310, and depth information corresponding to the image may be acquired through the depth information unit 320.
  • the processing unit 330 is electrically connected to the image unit 310 and the depth information unit 320, respectively, and is configured to identify the target three-dimensional template matching the image according to the two-dimensional image obtained by the image unit and the corresponding depth information obtained by the depth information unit, and to output information associated with the target three-dimensional template.
  • the two-dimensional image obtained by the image unit 310 may be sent to the processing unit 330, and the depth information corresponding to the image obtained by the depth information unit 320 may be sent to the processing unit 330.
  • the processing unit 330 may identify the matching target 3D template according to the image and the depth information, and output information associated with the target 3D template.
  • the image processing circuit may further include:
  • the image unit 310 may specifically include: an electrically connected image sensor 311 and an image signal processing (ISP) processor 312, wherein:
  • the image sensor 311 is configured to output original image data.
  • the ISP processor 312 is configured to output an image according to the original image data.
  • the original image data captured by the image sensor 311 is first processed by the ISP processor 312.
  • the ISP processor 312 analyzes the original image data to capture image statistics that can be used to determine one or more control parameters of the image sensor 311, including images in YUV or RGB format.
  • the image sensor 311 may include a color filter array (such as a Bayer filter), and a corresponding photosensitive unit.
  • the image sensor 311 may obtain the light intensity and wavelength information captured by each photosensitive unit, and provide a set of raw image data that can be processed by the ISP processor 312.
  • after the ISP processor 312 processes the raw image data, an image in YUV or RGB format is obtained and sent to the processing unit 330.
  • when the ISP processor 312 processes the original image data, it can process the original image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 may perform one or more image processing operations on the original image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision.
  • the depth information unit 320 includes an electrically connected structured light sensor 321 and a depth map generation chip 322, where:
  • the structured light sensor 321 is configured to generate an infrared speckle pattern.
  • the depth map generation chip 322 is configured to output depth information according to the infrared speckle map; the depth information includes a depth map.
  • the structured light sensor 321 projects speckle structured light onto a subject, obtains the structured light reflected by the subject, and images the structured light reflected by the subject to obtain an infrared speckle pattern.
  • the structured light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, so that the depth map generation chip 322 determines the morphological change of the structured light according to the infrared speckle pattern and then determines the depth of the subject, obtaining a depth map (Depth Map) that indicates the depth of each pixel in the infrared speckle pattern.
  • the depth map generation chip 322 sends the depth map to the processing unit 330.
  • the processing unit 330 includes an electrically connected CPU 331 and GPU (Graphics Processing Unit) 332, where:
  • the CPU 331 is configured to align the image and the depth map according to the calibration data, and output a three-dimensional model according to the aligned image and the depth map.
  • the GPU 332 is configured to determine a matching target 3D template according to the 3D model, and output information related to the target 3D template.
  • the CPU 331 obtains the image from the ISP processor 312 and the depth map from the depth map generation chip 322. Combined with calibration data obtained in advance, the two-dimensional image can be aligned with the depth map, thereby determining the depth information corresponding to each pixel in the image. The CPU 331 then performs three-dimensional reconstruction based on the depth information and the image to obtain a three-dimensional model.
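  • the alignment step above can be sketched with a pinhole camera model: back-project a depth pixel into 3D with the depth camera's intrinsics, apply the calibrated rotation and translation, and re-project with the color camera's intrinsics. This is only an illustrative sketch, not the disclosure's actual implementation; the function name and all intrinsic and extrinsic values below are hypothetical:

```python
def align_depth_pixel(u, v, z, K_depth, K_color, R, t):
    """Map a depth pixel (u, v) with depth z onto the color image plane.

    K_* are pinhole intrinsics (fx, fy, cx, cy); (R, t) is a hypothetical
    depth-to-color calibration (3x3 rotation matrix and translation vector).
    """
    fx_d, fy_d, cx_d, cy_d = K_depth
    fx_c, fy_c, cx_c, cy_c = K_color
    # back-project: pixel -> 3D point in the depth camera frame
    x = (u - cx_d) * z / fx_d
    y = (v - cy_d) * z / fy_d
    p = (x, y, z)
    # rigid transform into the color camera frame: p' = R @ p + t
    px = sum(R[0][i] * p[i] for i in range(3)) + t[0]
    py = sum(R[1][i] * p[i] for i in range(3)) + t[1]
    pz = sum(R[2][i] * p[i] for i in range(3)) + t[2]
    # project with the color intrinsics
    return (fx_c * px / pz + cx_c, fy_c * py / pz + cy_c)

# with identity extrinsics and equal intrinsics, a pixel maps onto itself
K = (500.0, 500.0, 320.0, 240.0)
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(align_depth_pixel(100.0, 80.0, 1.5, K, K, I3, (0.0, 0.0, 0.0)))
```

  Once every depth pixel is mapped this way, each color pixel can be assigned the depth of its mapped point, which is the per-pixel depth information the CPU 331 needs for reconstruction.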
  • the CPU 331 sends the three-dimensional model to the GPU 332, so that the GPU 332 executes the three-dimensional model processing method described in the foregoing embodiments on the three-dimensional model, simplifying its key points and obtaining a simplified three-dimensional model.
  • the GPU 332 may determine a matching target three-dimensional template according to the three-dimensional model, then annotate the image according to the information associated with the target three-dimensional template, and output the annotated image.
  • the image processing circuit may further include a display unit 340.
  • the display unit 340 is electrically connected to the GPU 332 and is configured to display an image with labeled information.
  • the beautified image processed by the GPU 332 may be displayed by the display unit 340.
  • the image processing circuit may further include: an encoder 350 and a memory 360.
  • the beautified image processed by the GPU 332 may also be encoded by the encoder 350 and stored in the memory 360, where the encoder 350 may be implemented by a coprocessor.
  • the memory 360 may comprise multiple memories or be divided into multiple storage spaces. The image data processed by the GPU 332 may be stored in a dedicated memory or a dedicated storage space, which may include a DMA (Direct Memory Access) feature.
  • the memory 360 may be configured to implement one or more frame buffers.
  • the original image data captured by the image sensor 311 is first processed by the ISP processor 312.
  • the ISP processor 312 analyzes the original image data to capture image statistics that can be used to determine one or more control parameters of the image sensor 311.
  • the image statistics, including images in YUV or RGB format, are sent to the CPU 331.
  • the structured light sensor 321 projects speckle structured light onto a subject, acquires the structured light reflected by the subject, and forms an infrared speckle pattern by imaging the reflected structured light.
  • the structured light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, so that the depth map generation chip 322 determines the morphological change of the structured light according to the infrared speckle pattern and then determines the depth of the subject, obtaining a depth map (Depth Map).
  • the depth map generation chip 322 sends the depth map to the CPU 331.
  • the CPU 331 obtains the two-dimensional image from the ISP processor 312 and the depth map from the depth map generation chip 322. Combined with calibration data obtained in advance, the face image can be aligned with the depth map, thereby determining the depth information corresponding to each pixel in the image. The CPU 331 then performs three-dimensional reconstruction based on the depth information and the two-dimensional image to obtain a three-dimensional model.
  • the CPU 331 sends the three-dimensional model to the GPU 332, so that the GPU 332 executes the three-dimensional model processing method as described in the foregoing embodiment according to the three-dimensional model, thereby simplifying the three-dimensional model and obtaining a simplified three-dimensional model.
  • the simplified three-dimensional model processed by the GPU 332 may be displayed on the display unit 340, and/or encoded by the encoder 350 and stored in the memory 360.
  • the present disclosure also proposes a computer-readable storage medium on which a computer program is stored, which is characterized in that when the program is executed by a processor, the three-dimensional model processing method proposed in the foregoing embodiment of the present disclosure is implemented.
  • the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means at least two, for example two or three, unless specifically defined otherwise.
  • any process or method description in a flowchart or otherwise described herein can be understood as representing a module, fragment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process. The scope of the preferred embodiments of the present disclosure includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending on the functions involved; this should be understood by those skilled in the art to which the embodiments of the present disclosure belong.
  • the logic and/or steps represented in a flowchart or otherwise described herein, for example, an ordered list of executable instructions that can be considered to implement a logical function, may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device).
  • a "computer-readable medium” may be any device that can contain, store, communicate, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • more specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CD-ROM).
  • the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example, by optically scanning the paper or other medium and then editing, interpreting, or otherwise suitably processing it if necessary, and then stored in a computer memory.
  • portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.
  • in the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of, or a combination of, the following techniques known in the art: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
  • a person of ordinary skill in the art can understand that all or part of the steps carried by the methods in the foregoing embodiments can be implemented by a program instructing related hardware. The program can be stored in a computer-readable storage medium, and when executed, includes one or a combination of the steps of the method embodiments.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing module, or each unit may exist separately physically, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. If an integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • the aforementioned storage medium may be a read-only memory, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)
  • Image Processing (AREA)

Abstract

本公开提出一种三维模型处理方法、装置、电子设备以及可读存储介质,其中,方法包括:通过获取三维模型;其中,所述三维模型包括多个关键点,以及包括将相邻关键点作为顶点进行连线得到的多个剖分平面;对三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域对应的目标关键点密度;根据各区域对应的目标关键点密度,对三维模型中对应区域的关键点密度进行调整。该方法通过对三维模型中关键点密度的调整,使得不同区域采用不同的关键点密度,不仅保持了三维模型的细节精确度,同时也大大降低了整个模型的关键点数量,从而大大减少内存占用,提高了处理速度。

Description

三维模型处理方法、装置、电子设备及可读存储介质
相关申请的交叉引用
本公开要求OPPO广东移动通信有限公司于2018年8月16日提交的、发明名称为“三维模型处理方法、装置、电子设备以及可读存储介质”的、中国专利申请号“201810934014.9”的优先权。
技术领域
本公开涉及移动终端技术领域,尤其涉及一种三维模型处理方法、装置、电子设备以及可读存储介质。
背景技术
三维模型重建是建立适合计算机表示和处理的数学模型,是在计算机环境下对其进行处理、操作和分析其性质的基础,也是在计算机中建立表达客观世界的虚拟现实的关键技术。通常通过对三维模型中关键点进行处理,实现模型的重建。
在实际操作中,对三维模型中各处均采用的是相同的关键点密度进行处理,关键点密度的设置对三维模型的呈现具有较大影响。
发明内容
本公开旨在至少在一定程度上解决相关技术中的技术问题之一。
为此,本公开提出一种三维模型处理方法,以实现通过对三维模型中关键点密度的调整,使得不同区域采用不同的关键点密度,不仅保持了三维模型的细节精确度,同时也大大降低了整个模型的关键点数量,从而大大减少内存占用,提高了处理速度。
本公开提出一种三维模型处理装置。
本公开提出一种电子设备。
本公开提出一种计算机可读存储介质。
本公开一方面实施例提出了一种三维模型处理方法,包括:
获取三维模型;其中,所述三维模型包括多个关键点,以及包括将相邻关键点作为顶点进行连线得到的多个剖分平面;
对所述三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域对应的目标关键点密度;
根据各区域对应的目标关键点密度,对所述三维模型中对应区域的关键点密度进行调整。
本公开实施例的三维模型处理方法,通过获取三维模型;其中,所述三维模型包括多个关键点,以及包括将相邻关键点作为顶点进行连线得到的多个剖分平面;对三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域对应的目标关键点密度;根据各区域对应的目标关键点密度,对三维模型中对应区域的关键点密度进行调整。该方法通过对三维模型中关键点密度的调整,使得不同区域采用不同的关键点密度,不仅保持了三维模型的细节精确度,同时也大大降低了整个模型的关键点数量,从而大大减少内存占用,提高了处理速度。
本公开另一方面实施例提出了一种三维模型处理装置,包括:
获取模块,用于获取三维模型;其中,所述三维模型包括多个关键点,以及包括将相邻关键点作为顶点进行连线得到的多个剖分平面;
确定模块,用于对所述三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域对应的目标关键点密度;
调整模块,用于根据各区域对应的目标关键点密度,对所述三维模型中对应区域的关键点密度进行调整。
本公开实施例的三维模型处理装置,通过获取三维模型;其中,所述三维模型包括多个关键点,以及包括将相邻关键点作为顶点进行连线得到的多个剖分平面;对三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域对应的目标关键点密度;根据各区域对应的目标关键点密度,对三维模型中对应区域的关键点密度进行调整。该方法通过对三维模型中关键点密度的调整,使得不同区域采用不同的关键点密度,不仅保持了三维模型的细节精确度,同时也大大降低了整个模型的关键点数量,从而大大减少内存占用,提高了处理速度。
本公开又一方面实施例提出了一种电子设备,包括:存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时,实现如本公开前述实施例所述的三维模型处理方法。
本公开又一方面实施例提出了一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现如前述实施例所述的三维模型处理方法。
本公开附加的方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本公开的实践了解到。
附图说明
本公开上述的和/或附加的方面和优点从下面结合附图对实施例的描述中将变得明显和容易理解,其中:
图1为本公开实施例提供的一种三维模型处理方法的流程示意图;
图2为本公开实施例提供的确定各区域平坦程度的流程示意图;
图3为本公开实施例提供的另一种三维模型处理方法流程示意图;
图4为本公开实施例提供的一种三维模型处理装置的结构示意图;
图5为本公开实施例提供的另一种三维模型处理装置的结构示意图;
图6为一个实施例中电子设备的内部结构示意图;
图7为作为一种可能的实现方式的图像处理电路的示意图;
图8为作为另一种可能的实现方式的图像处理电路的示意图。
具体实施方式
下面详细描述本公开的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施例是示例性的,旨在用于解释本公开,而不能理解为对本公开的限制。
下面参考附图描述本公开实施例的三维模型处理方法、装置、电子设备以及可读存储介质。
图1为本公开实施例所提供的一种三维模型处理方法的流程示意图。
本公开实施例中,电子设备可以为手机、平板电脑、个人数字助理、穿戴式设备等具有各种操作系统、触摸屏和/或显示屏的硬件设备。
如图1所示,该三维模型处理方法包括以下步骤:
步骤101,获取三维模型;其中,三维模型包括多个关键点,以及包括将相邻关键点作为顶点进行连线得到的多个剖分平面。
本实施例中获取的三维模型,包括多个关键点以及将相邻关键点作为顶点进行连线得到的多个剖分平面。其中,关键点以及剖分平面可以采用三维坐标的形式表示出来。
作为一种示例,本实施例中获取到的三维模型可以是人脸的三维模型,人脸的三维模型的获取,是根据深度信息和人脸图像,进行三维重构得到的,而不是简单的获取RGB数据和深度数据。
作为一种可能的实现方式,可以将深度信息与二维人脸图像对应的色彩信息进行融合,得到人脸三维模型。具体地,可以基于人脸关键点检测技术,从深度信息提取人脸的关键点,以及从色彩信息中提取人脸的关键点,而后将从深度信息中提取的关键点和从色彩信息中提取的关键点,进行配准和关键点融合处理,最终根据融合后的关键点,生成人脸三维模型。其中,关键点为人脸上显眼的点,或者为关键位置上的点,例如关键点可以为眼角、鼻尖、嘴角等。
进一步的,可以基于人脸关键点检测技术,对人脸图像进行关键点识别,得到人脸图像对应的关键点,从而可以根据各关键点在三维空间中的相对位置,将相邻关键点作为顶点进行连线得到的多个剖分平面。
步骤102,对三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域对应的目标关键点密度。
本公开实施例中,可以根据预设的半径,将三维模型划分为多个区域,在各个区域内包含多个剖分平面,进而得到各剖分平面的角度信息。
作为一种可能的实现方式,各剖分平面的角度信息可以是与相邻剖分平面之间的夹角,获得各剖分平面后,通过相邻剖分平面之间的夹角,即可得到各剖分平面的角度信息。
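上述由相邻剖分平面夹角判断角度信息的思路,可以用如下 Python 草图示意(仅为假设性示例,并非本公开的权威实现;三角形顶点坐标均为虚构数据):

```python
import math

def face_normal(a, b, c):
    """三角形剖分平面的法向量:两条边向量的叉积。"""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def angle_deg(n1, n2):
    """两个法向量之间的夹角(单位:度)。"""
    dot = sum(x * y for x, y in zip(n1, n2))
    norm = math.sqrt(sum(x * x for x in n1)) * math.sqrt(sum(x * x for x in n2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# 共面的两个相邻三角形:夹角应为 0 度,对应平坦区域
n1 = face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
n2 = face_normal((1, 0, 0), (1, 1, 0), (0, 1, 0))
print(round(angle_deg(n1, n2), 1))  # 0.0
```

夹角越接近 0 度,相邻剖分平面越接近共面,区域越平坦;夹角越大,区域细节越丰富。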
进一步的说明,各区域内剖分平面的角度信息与各区域的平坦程度存在一定的对应关系,当各区域内剖分平面的角度越大时,说明该区域平坦程度越低;当各区域内剖分平面的角度越小时,说明该区域越平坦。若相邻两个区域的平坦程度之间的差异低于差异阈值,则将该相邻两个区域进行合并,其中,差异阈值,是根据三维模型整体的结构预先设定的。
作为一种示例,在人脸三维模型中,通过计算各区域内两个相邻剖分平面之间的夹角,可判断脸部各区域的平坦程度。例如,当某一区域内两个相邻剖分平面位于人脸的面部区域时,该相邻剖分平面之间的夹角可能为2度,说明人脸的面部区域比较平坦;当某一区域内的两个相邻剖分平面,一个位于人脸的面部区域时,另一个位于鼻子上时,该相邻剖分平面之间的夹角可能为60度,此时说明该区域内的平坦程度比较低。
根据确定的三维模型中各区域的平坦程度,进一步的确定各区域内对应的目标关键点密度,具体地,当判断区域内比较平坦时,该区域内对应的目标关键点可以设定相对少一些;当判断区域内平坦程度比较低时,该区域内对应的目标关键点可以设定较多的关键点。
作为一种示例,对于人脸三维模型中目标关键点密度的确定,在比较平坦的区域,较少的目标关键点即可识别出该区域的人脸模型,因此,对于相对比较平坦的面部、额头区域,设置的目标关键点密度相对比较低。然而,对于眼睛、嘴唇等细节比较丰富的区域,设置较少的目标关键点,可能不会清楚的识别出该区域的人脸模型,因此设置的目标关键点密度相对比较高。
步骤103,根据各区域对应的目标关键点密度,对三维模型中对应区域的关键点密度进行调整。
具体地,通过比对三维模型各区域中当前的关键点密度,与对应区域内的目标关键点密度的高低,进而对三维模型中各区域的关键点密度进行调整。
作为一种可能的实现方式,三维模型中某区域的关键点密度高于对应的目标关键点密度,需要删除该区域内的部分关键点,使得删除一些关键点后该区域内的关键点密度小于或等于对应的目标关键点密度。进而将该区域内保留的关键点中的相邻关键点作为顶点重新进行连线。
本公开实施例的三维模型处理方法,通过获取三维模型;其中,所述三维模型包括多个关键点,以及包括将相邻关键点作为顶点进行连线得到的多个剖分平面;对三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域对应的目标关键点密度;根据各区域对应的目标关键点密度,对三维模型中对应区域的关键点密度进行调整。本公开中,通过对三维模型中关键点密度的调整,使得不同区域采用不同的关键点密度,不仅保持了三维模型的细节精确度,同时也大大降低了整个模型的关键点数量,从而大大减少内存占用,提高了处理速度。
作为一种可能的实现方式,对三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域的平坦程度,进而确定对应的关键点密度。为了能够准确的确定三维模型中各区域的平坦程度,本公开实施例中,根据三维模型各区域内相邻顶点的法向量之间的夹角,确定各区域的平坦程度,参见图2,步骤102具体可以包括以下子步骤:
步骤201,对三维模型中各区域,确定各区域内剖分平面的法向量。
本公开实施例中,可以根据预设的半径,将三维模型划分为多个区域,在各个区域内均将相邻关键点作为顶点进行连接,进而得到多个剖分平面。
进一步的,得到各区域的剖分平面后,进一步的确定各剖分平面的法向量,其中,平面的法向量是确定平面位置的重要向量,是指与平面垂直的非零向量。
步骤202,根据包含同一顶点的剖分平面的法向量,确定同一顶点的法向量。
具体地,当三维模型中多个剖分平面包含同一顶点时,对包含同一顶点的多个剖分平面的法向量进行求和,进而求和得到的法向量,即为该顶点的法向量。
例如,对于三维模型中的任意顶点X,在该模型中有三个剖分平面A、B、C同时包含顶点X,则确定剖分平面A、B、C的法向量后,对三个平面的法向量进行求和,求和得到的向量即为顶点X的向量。
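上述对包含同一顶点的剖分平面法向量求和的过程,可以用如下草图示意(仅为示意性实现,各法向量数值为假设数据):

```python
def vertex_normal(face_normals):
    """顶点法向量:对包含该顶点的各剖分平面的法向量逐分量求和。"""
    n = [0.0, 0.0, 0.0]
    for fn in face_normals:
        for i in range(3):
            n[i] += fn[i]
    return n

# 顶点 X 同时被三个剖分平面 A、B、C 包含(法向量为假设数据)
nA, nB, nC = [0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [0.0, 1.0, 1.0]
print(vertex_normal([nA, nB, nC]))  # [0.0, 2.0, 2.0]
```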
需要说明的是,在三维模型中,对光照的反射取决于顶点法向量的设置,如果各顶点法向量计算正确,则显示出的三维模型比较光滑,而且有光泽,否则,显示的三维模型会出现棱角分明,而且模糊不清的情况。
步骤203,根据各区域内相邻顶点的法向量之间的夹角,确定各区域的平坦程度。
具体地,通过步骤202中确定顶点法向量的方法,确定三维模型中各顶点的法向量。对于三维模型中每一区域内的各顶点,确定各顶点的法向量与相邻顶点的法向量之间的夹角,进一步的,对于在同一区域内确定的各顶点的法向量与相邻顶点的法向量之间的夹角,计算夹角的平均值。最后判断得到的每一区域的夹角平均值是否大于预设的角度阈值,进而判断该区域是否平坦。其中,角度阈值是根据三维模型的整体结构提前设定的值。
当得到三维模型中某一区域内各顶点的法向量与相邻顶点的法向量的夹角平均值大于预设的角度阈值时,则说明该区域不平坦。当得到三维模型中某一区域内各顶点的法向量与相邻顶点的法向量的夹角平均值小于预设的角度阈值时,则说明该区域平坦。
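上述“夹角平均值与预设角度阈值比较”的平坦判断可以用如下草图示意(阈值 20 度及各夹角数值均为假设示例,并非本公开规定的取值):

```python
def region_is_flat(pairwise_angles_deg, threshold_deg):
    """区域内相邻顶点法向量夹角的平均值小于阈值时,判定该区域平坦。"""
    avg = sum(pairwise_angles_deg) / len(pairwise_angles_deg)
    return avg < threshold_deg

# 假设的两个区域:面部区域夹角较小,鼻子附近夹角较大;阈值取 20 度
print(region_is_flat([2.0, 3.5, 1.0], 20.0))     # True:平坦,可用较低关键点密度
print(region_is_flat([60.0, 45.0, 30.0], 20.0))  # False:不平坦,保留较高关键点密度
```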
作为一种可能的实现方式,三维模型中各区域可以设定相同的阈值,以简化运算量。
作为另一种可能的实现方式,三维模型中各区域还可以设定不同的阈值,从而提高模型精细度。
本公开实施例的三维模型处理方法,通过对三维模型中各区域,确定各区域内剖分平面的法向量,进而根据包含同一顶点的剖分平面的法向量,确定同一顶点的法向量,最终根据各区域内相邻顶点的法向量之间的夹角,确定各区域的平坦程度。由此,可以确定三维模型各区域的平坦程度,从而确定各区域内顶点的数量,进一步的提高三维模型的处理效率。
作为一种示例,本公开实施例中以人脸三维模型为例,对人脸三维模型进行简化,进而得到简化后的三维模型,图3为本公开实施例提供的另一种三维模型处理方法流程示意图。
如图3所示,该三维模型处理方法包括:
步骤301,获取高密度顶点的人脸三维模型。其中,三维模型包括多个顶点,以及包括将相邻顶点进行连线得到的多个剖分平面。
本公开实施例中,获取人脸三维模型的方法,与前述实施例中步骤101中获取三维模型的方法相似,此处不再赘述。
步骤302,计算人脸三维模型中各顶点的法向量。
需要说明的是,在人脸三维模型中,对光照的反射取决于各顶点法向量的设置,如果各顶点法向量计算正确,则显示出的三维模型比较光滑,而且有光泽,否则,显示的三维模型会出现棱角分明,而且模糊不清的情况。
本公开实施例中,首先根据预设的半径,将三维模型划分为多个区域,在各个区域内均将相邻关键点作为顶点进行连接,从而得到多个剖分平面,进一步的确定各剖分平面的法向量。
具体地,对于人脸三维模型中的各顶点,找出所有包含该顶点的剖分平面,对各剖分平面的法向量进行求和,进而求和得到的法向量,即为该顶点的法向量。
步骤303,对人脸三维模型各区域,计算相邻顶点法向量之间的夹角。
具体地,对于人脸三维模型中每一区域内的各顶点,通过计算获得各顶点的法向量与相邻顶点的法向量之间的夹角。进一步的,对于在同一区域内确定的各顶点的法向量与相邻顶点的法向量之间的夹角,计算夹角的平均值。
作为一种可能的情况,计算得到某一区域的夹角平均值越大,则说明该区域越为精细,如眼睛、嘴唇等细节较为丰富的区域。
作为另一种可能的情况,计算得到某一区域的夹角平均值越小,则说明该区域较为平坦,如人脸的面部、额头等较为平坦的区域。
步骤304,对三维模型中各区域中的顶点进行低通滤波。
具体地,对于人脸三维模型中每一区域内的各顶点,通过计算获得各顶点的法向量与相邻顶点的法向量之间的夹角之后,根据夹角取值进行低通滤波处理,过滤掉异常高的夹角对应的顶点。由于夹角越大,说明剖分平面角度变化越大,通过删除掉夹角较大的顶点,使得各区域更加平滑。
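上述低通滤波步骤可以用如下草图示意(夹角上限 75 度为假设值,顶点与夹角数据均为虚构示例):

```python
def lowpass_filter_vertices(vertices, angles_deg, max_angle_deg=75.0):
    """过滤掉夹角异常高的顶点,使区域更平滑;夹角上限为假设参数。"""
    return [v for v, a in zip(vertices, angles_deg) if a <= max_angle_deg]

verts = ["v0", "v1", "v2", "v3"]
angles = [10.0, 88.0, 5.0, 76.0]  # 假设的各顶点法向量夹角
print(lowpass_filter_vertices(verts, angles))  # ['v0', 'v2']
```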
步骤305,根据各区域内相邻顶点的法向量之间的夹角,确定各区域的目标关键点密度。
本公开实施例中,将三维模型中各区域内夹角的平均值与预先设定的角度阈值进行比较,确定各区域的目标关键点密度,这里的目标关键点密度可以是预设的顶点密度规格中的一个,预设的顶点密度规格可以包括高密度顶点规格和低密度顶点规格。其中,预设的角度阈值可以根据所需的效果进行灵活调节阈值的大小,或者设定多个等级的阈值。
在一种场景下,当得到三维模型中某一区域内各顶点的法向量与相邻顶点的法向量的夹角平均值大于预设的角度阈值时,说明该区域不够平坦,因此可采用高密度顶点规格对该区域内的顶点进行处理。
在另一种场景下,当得到三维模型中某一区域内各顶点的法向量与相邻顶点的法向量的夹角平均值小于预设的角度阈值时,说明该区域平坦,因此可采用低密度顶点规格对该区域内的顶点进行处理。
上述确定三维模型各区域是否平坦的过程,可以称为对三维模型的各区域进行二值化处理过程。进一步的,在进行二值化处理过程之前,若相邻两个区域的平坦程度之间的差异低于差异阈值,可以采用形态学处理和连通域求解方法,对所述相邻两个区域进行合并,从而减少后续二值化以及模型关键点简化的运算量,同时,也能够使得三维模型各区域更加连贯。
步骤306,目标关键点密度为低密度顶点规格的区域,进行关键点简化处理,得到简化后的三维模型。
具体地,对该区域内的关键点进行简化,例如,隔4个关键点取1个。由于相对平坦的区域内,对关键点的简化,不仅不会影响三维模型的成像,还会大大降低整个三维模型的顶点数量。删除对应区域内的部分关键点,以使删除关键点后关键点密度小于或等于所述目标关键点密度之后,将对应区域内保留的关键点中的相邻关键点作为顶点重新进行连线,从而得到简化后的三维模型。
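“隔4个关键点取1个”的抽稀策略可以用如下草图示意(step=5 表示每 5 个关键点保留 1 个,即隔 4 取 1;关键点数据为假设示例):

```python
def simplify_keypoints(keypoints, is_flat, step=5):
    """平坦区域内每隔 step 个关键点保留 1 个;非平坦区域全部保留。"""
    if not is_flat:
        return list(keypoints)
    return keypoints[::step]

pts = list(range(20))  # 假设某平坦区域内有 20 个关键点
kept = simplify_keypoints(pts, is_flat=True)
print(len(pts), "->", len(kept))  # 20 -> 4
```

抽稀之后,再将保留关键点中的相邻关键点作为顶点重新连线,即得到该区域简化后的剖分平面。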
本公开实施例的三维模型处理方法,通过获取高密度顶点的人脸三维模型。其中,三维模型包括多个顶点,以及包括将相邻顶点进行连线得到的多个剖分平面;计算人脸三维模型中各顶点的法向量;进而对人脸三维模型各区域,计算相邻顶点法向量之间的夹角;然后对三维模型中各区域中的顶点进行低通滤波,根据各区域内相邻顶点的法向量之间的夹角,确定各区域的目标关键点密度,最后目标关键点密度为低密度顶点规格的区域进行关键点简化处理,得到简化后的人脸三维模型。由此,通过对人脸不同区域采用不同的顶点规格,保持人脸三维模型细节精确的同时,不仅降低了整个三维模型的顶点数量,还减小了内存,提高了人脸三维模型的处理速度。
为了实现上述实施例,本公开还提出一种三维模型处理装置。
图4为本公开实施例提供的一种三维模型处理装置的结构示意图。
如图4所示,该三维模型处理装置100包括:获取模块110、确定模块120,以及调整模块130。
获取模块110,用于获取三维模型;其中,三维模型包括多个关键点,以及包括将相邻关键点作为顶点进行连线得到的多个剖分平面。
确定模块120,用于对三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域对应的目标关键点密度。
调整模块130,用于根据各区域对应的目标关键点密度,对三维模型中对应区域的关键点密度进行调整。
作为一种可能的实现方式,确定模块120,还用于对三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域的平坦程度;
根据各区域的平坦程度,确定对应的目标关键点密度。
作为一种可能的实现方式,确定模块120,还用于对三维模型中各区域,确定各区域内剖分平面的法向量;根据包含同一顶点的剖分平面的法向量,确定同一顶点的法向量;根据各区域内相邻顶点的法向量之间的夹角,确定各区域的平坦程度。
作为一种可能的实现方式,确定模块120,还包括:
确定单元,用于在每一区域内,确定各顶点的法向量与相邻顶点的法向量之间的夹角。
计算单元,用于根据各顶点的法向量与相邻顶点的法向量之间的夹角,计算夹角平均值;
判断单元,用于根据夹角平均值是否大于预设角度阈值,判断是否平坦。
作为一种可能的实现方式,确定单元,具体还用于对包含同一顶点的剖分平面的法向量进行求和;根据求和得到的法向量,确定同一顶点的法向量。
作为一种可能的实现方式,参见图5,该三维模型处理装置100,还包括:
划定模块140,用于根据预设半径,在三维模型中划定各区域。
合并模块150,用于若相邻两个区域的平坦程度之间的差异低于差异阈值,对相邻两个区域进行合并。
作为一种可能的实现方式,调整模块130,还用于对于当前关键点密度高于目标关键点密度的区域,删除对应区域内的部分关键点,以使删除关键点后关键点密度小于或等于目标关键点密度;将对应区域内保留的关键点中的相邻关键点作为顶点重新进行连线。
本公开实施例的三维模型处理装置,通过获取三维模型;其中,所述三维模型包括多个关键点,以及包括将相邻关键点作为顶点进行连线得到的多个剖分平面;对三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域对应的目标关键点密度;根据各区域对应的目标关键点密度,对三维模型中对应区域的关键点密度进行调整。本公开中,通过对三维模型中关键点密度的调整,使得不同区域采用不同的关键点密度,不仅保持了三维模型的细节精确度,同时也大大降低了整个模型的关键点数量,从而大大减少内存占用,提高了处理速度。
需要说明的是,前述对三维模型处理方法实施例的解释说明也适用于该实施例的三维模型处理装置,此处不再赘述。
为了实现上述实施例,本公开还提出电子设备,其特征在于,包括:存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时,实现如前述实施例所述的三维模型处理方法。
图6为一个实施例中电子设备200的内部结构示意图。该电子设备200包括通过系统总线210连接的处理器220、存储器230、显示器240和输入装置250。其中,电子设备200的存储器230存储有操作系统和计算机可读指令。该计算机可读指令可被处理器220执行,以实现本公开实施方式的人脸识别方法。该处理器220用于提供计算和控制能力,支撑整个电子设备200的运行。电子设备200的显示器240可以是液晶显示屏或者电子墨水显示屏等,输入装置250可以是显示器240上覆盖的触摸层,也可以是电子设备200外壳上设置的按键、轨迹球或触控板,也可以是外接的键盘、触控板或鼠标等。该电子设备200可以是手机、平板电脑、笔记本电脑、个人数字助理或穿戴式设备(例如智能手环、智能手表、智能头盔、智能眼镜)等。
本领域技术人员可以理解,图6中示出的结构,仅仅是与本公开方案相关的部分结构的示意图,并不构成对本公开方案所应用于其上的电子设备200的限定,具体的电子设备200可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
作为一种可能的实现方式,请参阅图7,提供了本公开实施例的图像处理电路,图像处理电路可利用硬件和/或软件组件实现。
如图7,该图像处理电路具体包括:图像单元310、深度信息单元320和处理单元330。其中,
图像单元310,用于输出二维的图像。
深度信息单元320,用于输出深度信息。
本公开实施例中,可以通过图像单元310,获取二维的图像,以及通过深度信息单元320,获取图像对应的深度信息。
处理单元330,分别与图像单元310和深度信息单元320电性连接,用于根据图像单元获取的二维的图像,以及深度信息单元获取的对应的深度信息,识别与图像中匹配的目标三维模板,输出目标三维模块关联的信息。
本公开实施例中,图像单元310获取的二维图像可以发送至处理单元330,以及深度信息单元320获取的图像对应的深度信息可以发送至处理单元330,处理单元330可以根据图像以及深度信息,识别与图像中匹配的目标三维模板,输出目标三维模块关联的信息。具体的实现过程,可以参见上述图1至图3实施例中对三维模型处理的方法的解释说明,此处不做赘述。
进一步地,作为本公开一种可能的实现方式,参见图8,在图7所示实施例的基础上,该图像处理电路还可以包括:
作为一种可能的实现方式,图像单元310具体可以包括:电性连接的图像传感器311和图像信号处理(Image Signal Processing,简称ISP)处理器312。其中,
图像传感器311,用于输出原始图像数据。
ISP处理器312,用于根据原始图像数据,输出图像。
本公开实施例中,图像传感器311捕捉的原始图像数据首先由ISP处理器312处理,ISP处理器312对原始图像数据进行分析以捕捉可用于确定图像传感器311的一个或多个控制参数的图像统计信息,包括YUV格式或者RGB格式的图像。其中,图像传感器311可包括色彩滤镜阵列(如Bayer滤镜),以及对应的感光单元,图像传感器311可获取每个感光单元捕捉的光强度和波长信息,并提供可由ISP处理器312处理的一组原始图像数据。ISP处理器312对原始图像数据进行处理后,得到YUV格式或者RGB格式的图像,并发送至处理单元330。
其中,ISP处理器312在对原始图像数据进行处理时,可以按多种格式逐个像素地处理原始图像数据。例如,每个图像像素可具有8、10、12或14比特的位深度,ISP处理器312可对原始图像数据进行一个或多个图像处理操作、收集关于图像数据的统计信息。其中,图像处理操作可按相同或不同的位深度精度进行。
作为一种可能的实现方式,深度信息单元320,包括电性连接的结构光传感器321和深度图生成芯片322。其中,
结构光传感器321,用于生成红外散斑图。
深度图生成芯片322,用于根据红外散斑图,输出深度信息;深度信息包括深度图。
本公开实施例中,结构光传感器321向被摄物投射散斑结构光,并获取被摄物反射的结构光,根据反射的结构光成像,得到红外散斑图。结构光传感器321将该红外散斑图发送至深度图生成芯片322,以便深度图生成芯片322根据红外散斑图确定结构光的形态变化情况,进而据此确定被摄物的深度,得到深度图(Depth Map),该深度图指示了红外散斑图中各像素点的深度。深度图生成芯片322将深度图发送至处理单元330。
作为一种可能的实现方式,处理单元330,包括:电性连接的CPU331和GPU(Graphics Processing Unit,图形处理器)332。其中,
CPU331,用于根据标定数据,对齐图像与深度图,根据对齐后的图像与深度图,输出三维模型。
GPU332,用于根据三维模型,确定匹配的目标三维模板,输出目标三维模板关联的信息。
本公开实施例中,CPU331从ISP处理器312获取到图像,从深度图生成芯片322获取到深度图,结合预先得到的标定数据,可以将二维图像与深度图对齐,从而确定出图像中各像素点对应的深度信息。进而,CPU331根据深度信息和图像,进行三维重构,得到三维模型。
CPU331将三维模型发送至GPU332,以便GPU332根据三维模型执行如前述实施例中描述的三维模型处理方法,实现关键点简化,得到简化后的三维模型。
具体地,GPU332可以根据三维模型,确定匹配的目标三维模板,而后根据目标三维模板关联的信息,在图像中进行标注,输出标注信息的图像。
进一步地,图像处理电路还可以包括:显示单元340。
显示单元340,与GPU332电性连接,用于对标注信息的图像进行显示。
具体地,GPU332处理得到的美化后的图像,可以由显示器340显示。
可选地,图像处理电路还可以包括:编码器350和存储器360。
本公开实施例中,GPU332处理得到的美化后的图像,还可以由编码器350编码后存储至存储器360,其中,编码器350可由协处理器实现。
在一个实施例中,存储器360可以为多个,或者划分为多个存储空间,存储GPU312处理后的图像数据可存储至专用存储器,或者专用存储空间,并可包括DMA(Direct Memory Access,直接直接存储器存取)特征。存储器360可被配置为实现一个或多个帧缓冲器。
下面结合图8,对上述过程进行详细说明。
如图8所示,图像传感器311捕捉的原始图像数据首先由ISP处理器312处理,ISP处理器312对原始图像数据进行分析以捕捉可用于确定图像传感器311的一个或多个控制参数的图像统计信息,包括YUV格式或者RGB格式的图像,并发送至CPU331。
如图8所示,结构光传感器321向被摄物投射散斑结构光,并获取被摄物反射的结构光,根据反射的结构光成像,得到红外散斑图。结构光传感器321将该红外散斑图发送至深度图生成芯片322,以便深度图生成芯片322根据红外散斑图确定结构光的形态变化情况,进而据此确定被摄物的深度,得到深度图(Depth Map)。深度图生成芯片322将深度图发送至CPU331。
CPU331从ISP处理器312获取到二维图像,从深度图生成芯片322获取到深度图,结合预先得到的标定数据,可以将人脸图像与深度图对齐,从而确定出图像中各像素点对应的深度信息。进而,CPU331根据深度信息和二维图像,进行三维重构,得到三维模型。
CPU331将三维模型发送至GPU332,以便GPU332根据三维模型执行如前述实施例中描述的三维模型处理方法,实现三维模型的简化,得到简化后的三维模型。GPU332处理得到的简化后的三维模型,可以由显示器340显示,和/或,由编码器350编码后存储至存储器360。
为了实现上述实施例,本公开还提出一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现如本公开前述实施例提出的三维模型处理方法。
在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本公开的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不必须针对的是相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。在本公开的描述中,“多个”的含义是至少两个,例如两个,三个等,除非另有明确具体的限定。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现定制逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本公开的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本公开的实施例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于实现逻辑功能的可执行指令的定序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序,然后将其存储在计算机存储器中。
应当理解,本公开的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。如,如果用硬件来实现和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或他们的组合来实现:具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解实现上述实施例方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本公开各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。尽管上面已经示出和描述了本公开的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本公开的限制,本领域的普通技术人员在本公开的范围内可以对上述实施例进行变化、修改、替换和变型。

Claims (20)

  1. 一种三维模型处理方法,其特征在于,所述方法包括以下步骤:
    获取三维模型;其中,所述三维模型包括多个关键点,以及包括将相邻关键点作为顶点进行连线得到的多个剖分平面;
    对所述三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域对应的目标关键点密度;
    根据各区域对应的目标关键点密度,对所述三维模型中对应区域的关键点密度进行调整。
  2. 根据权利要求1所述的三维模型处理方法,其特征在于,所述对所述三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域对应的目标关键点密度,包括:
    对所述三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域的平坦程度;
    根据各区域的平坦程度,确定对应的目标关键点密度。
  3. 根据权利要求2所述的三维模型处理方法,其特征在于,所述角度信息包括法向量,所述对所述三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域的平坦程度,包括:
    对所述三维模型中各区域,确定各区域内剖分平面的法向量;
    根据包含同一顶点的剖分平面的法向量,确定所述同一顶点的法向量;
    根据各区域内相邻顶点的法向量之间的夹角,确定各区域的平坦程度。
  4. 根据权利要求3所述的三维模型处理方法,其特征在于,所述根据各区域内相邻顶点的法向量之间的夹角,确定各区域的平坦程度之前,还包括:
    对所述三维模型中各区域中的顶点进行低通滤波。
  5. 根据权利要求3或4所述的三维模型处理方法,其特征在于,所述根据各区域内相邻顶点的法向量之间的夹角,确定各区域的平坦程度,包括:
    在每一区域内,确定各顶点的法向量与相邻顶点的法向量之间的夹角;
    根据各顶点的法向量与相邻顶点的法向量之间的夹角,计算夹角平均值;
    根据所述夹角平均值是否大于预设角度阈值,判断是否平坦。
  6. 根据权利要求3-5任一项所述的三维模型处理方法,其特征在于,所述根据包含同一顶点的剖分平面的法向量,确定所述同一顶点的法向量,包括:
    对包含同一顶点的剖分平面的法向量进行求和;
    根据求和得到的法向量,确定所述同一顶点的法向量。
  7. 根据权利要求2-6任一项所述的三维模型处理方法,其特征在于,所述对所述三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域的平坦程度之前,还包括:
    根据预设半径,在所述三维模型中划定各区域。
  8. 根据权利要求7所述的三维模型处理方法,其特征在于,所述对所述三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域的平坦程度之后,还包括:
    若相邻两个区域的平坦程度之间的差异低于差异阈值,对所述相邻两个区域进行合并。
  9. 根据权利要求1-8任一项所述的三维模型处理方法,其特征在于,所述根据各区域对应的目标关键点密度,对所述三维模型中对应区域的关键点密度进行调整,包括:
    对于当前关键点密度高于所述目标关键点密度的区域,删除对应区域内的部分关键点,以使删除关键点后关键点密度小于或等于所述目标关键点密度;
    将对应区域内保留的关键点中的相邻关键点作为顶点重新进行连线。
  10. 一种三维模型处理装置,其特征在于,所述装置包括:
    获取模块,用于获取三维模型;其中,所述三维模型包括多个关键点,以及包括将相邻关键点作为顶点进行连线得到的多个剖分平面;
    确定模块,用于对所述三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域对应的目标关键点密度;
    调整模块,用于根据各区域对应的目标关键点密度,对所述三维模型中对应区域的关键点密度进行调整。
  11. 根据权利要求10所述的三维模型处理装置,其特征在于,所述确定模块,具体用于:
    对所述三维模型中各区域,根据各区域内剖分平面的角度信息,确定各区域的平坦程度;
    根据各区域的平坦程度,确定对应的目标关键点密度。
  12. 根据权利要求11所述的三维模型处理装置,其特征在于,所述角度信息包括法向量,所述确定模块,还具体用于:
    对所述三维模型中各区域,确定各区域内剖分平面的法向量;
    根据包含同一顶点的剖分平面的法向量,确定所述同一顶点的法向量;
    根据各区域内相邻顶点的法向量之间的夹角,确定各区域的平坦程度。
  13. 根据权利要求12所述的三维模型处理装置,其特征在于,所述确定模块,还具体用于:
    对所述三维模型中各区域中的顶点进行低通滤波。
  14. 根据权利要求12或13所述的三维模型处理装置,其特征在于,所述确定模块,包括:
    确定单元,用于在每一区域内,确定各顶点的法向量与相邻顶点的法向量之间的夹角;
    计算单元,用于根据各顶点的法向量与相邻顶点的法向量之间的夹角,计算夹角平均值;
    判断单元,用于根据所述夹角平均值是否大于预设角度阈值,判断是否平坦。
  15. 根据权利要求12-14任一项所述的三维模型处理装置,其特征在于,所述确定单元,还用于:对包含同一顶点的剖分平面的法向量进行求和;根据求和得到的法向量,确定所述同一顶点的法向量。
  16. 根据权利要求11-15任一项所述的三维模型处理装置,其特征在于,所述装置,还包括:
    划定模块,用于根据预设半径,在所述三维模型中划定各区域。
  17. 根据权利要求16所述的三维模型处理装置,其特征在于,所述装置,还包括:
    合并模块,用于若相邻两个区域的平坦程度之间的差异低于差异阈值,对所述相邻两个区域进行合并。
  18. 根据权利要求11-17任一项所述的三维模型处理装置,其特征在于,所述调整模块,具体用于:
    对于当前关键点密度高于所述目标关键点密度的区域,删除对应区域内的部分关键点,以使删除关键点后关键点密度小于或等于所述目标关键点密度;
    将对应区域内保留的关键点中的相邻关键点作为顶点重新进行连线。
  19. 一种电子设备,其特征在于,包括:存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时,实现如权利要求1-9中任一所述的三维模型处理方法。
  20. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现如权利要求1-9中任一所述的三维模型处理方法。
PCT/CN2019/091543 2018-08-16 2019-06-17 三维模型处理方法、装置、电子设备及可读存储介质 WO2020034743A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19850109.0A EP3839894A4 (en) 2018-08-16 2019-06-17 METHOD AND DEVICE FOR THREE-DIMENSIONAL MODEL PROCESSING, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM
US17/173,722 US11403819B2 (en) 2018-08-16 2021-02-11 Three-dimensional model processing method, electronic device, and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810934014.9 2018-08-16
CN201810934014.9A CN109191584B (zh) 2018-08-16 2018-08-16 三维模型处理方法、装置、电子设备及可读存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/173,722 Continuation US11403819B2 (en) 2018-08-16 2021-02-11 Three-dimensional model processing method, electronic device, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2020034743A1 true WO2020034743A1 (zh) 2020-02-20

Family

ID=64918320

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/091543 WO2020034743A1 (zh) 2018-08-16 2019-06-17 三维模型处理方法、装置、电子设备及可读存储介质

Country Status (4)

Country Link
US (1) US11403819B2 (zh)
EP (1) EP3839894A4 (zh)
CN (1) CN109191584B (zh)
WO (1) WO2020034743A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111882666A (zh) * 2020-07-20 2020-11-03 浙江商汤科技开发有限公司 三维网格模型的重建方法及其装置、设备、存储介质

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191584B (zh) * 2018-08-16 2020-09-18 Oppo广东移动通信有限公司 三维模型处理方法、装置、电子设备及可读存储介质
CN109102559B (zh) * 2018-08-16 2021-03-23 Oppo广东移动通信有限公司 三维模型处理方法和装置
CN111028343B (zh) 2019-12-16 2020-12-11 腾讯科技(深圳)有限公司 三维人脸模型的生成方法、装置、设备及介质
CN111429568B (zh) * 2020-03-27 2023-06-06 如你所视(北京)科技有限公司 点云处理方法和装置、电子设备和存储介质
CN113470095B (zh) * 2021-09-03 2021-11-16 贝壳技术有限公司 室内场景重建模型的处理方法和装置
CN114358795B (zh) * 2022-03-18 2022-06-14 武汉乐享技术有限公司 一种基于人脸的支付方法和装置
CN117115392B (zh) * 2023-10-24 2024-01-16 中科云谷科技有限公司 模型图像压缩方法、装置、计算机设备及可读存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6285372B1 (en) * 1998-05-08 2001-09-04 Lawrence C. Cowsar Multiresolution adaptive parameterization of surfaces
CN102306396A (zh) * 2011-09-15 2012-01-04 山东大学 一种三维实体模型表面有限元网格自动生成方法
CN105469446A (zh) * 2014-09-05 2016-04-06 富泰华工业(深圳)有限公司 点云网格简化系统及方法
CN106408665A (zh) * 2016-10-25 2017-02-15 合肥东上多媒体科技有限公司 一种新的渐进网格生成方法
CN109102559A (zh) * 2018-08-16 2018-12-28 Oppo广东移动通信有限公司 三维模型处理方法和装置
CN109191584A (zh) * 2018-08-16 2019-01-11 Oppo广东移动通信有限公司 三维模型处理方法、装置、电子设备及可读存储介质

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7215810B2 (en) * 2003-07-23 2007-05-08 Orametrix, Inc. Method for creating single 3D surface model from a point cloud
CN101751689B (zh) * 2009-09-28 2012-02-22 中国科学院自动化研究所 一种三维人脸重建方法
JP5462093B2 (ja) * 2010-07-05 2014-04-02 株式会社トプコン 点群データ処理装置、点群データ処理システム、点群データ処理方法、および点群データ処理プログラム
JP6008323B2 (ja) * 2013-02-01 2016-10-19 パナソニックIpマネジメント株式会社 メイクアップ支援装置、メイクアップ支援方法、およびメイクアップ支援プログラム
CN103236064B (zh) * 2013-05-06 2016-01-13 东南大学 一种基于法向量的点云自动配准方法
CN103729872B (zh) * 2013-12-30 2016-05-18 浙江大学 一种基于分段重采样和表面三角化的点云增强方法
CN106157373A (zh) * 2016-07-27 2016-11-23 中测高科(北京)测绘工程技术有限责任公司 一种建筑物三维模型构建方法及系统
CN107958481A (zh) 2016-10-17 2018-04-24 杭州海康威视数字技术股份有限公司 一种三维重建方法及装置
CN107122705B (zh) 2017-03-17 2020-05-19 中国科学院自动化研究所 基于三维人脸模型的人脸关键点检测方法
CN107729806A (zh) * 2017-09-05 2018-02-23 西安理工大学 基于三维人脸重建的单视图多姿态人脸识别方法
CN107784674B (zh) 2017-10-26 2021-05-14 浙江科澜信息技术有限公司 一种三维模型简化的方法及系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3839894A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111882666A (zh) * 2020-07-20 2020-11-03 浙江商汤科技开发有限公司 Three-dimensional mesh model reconstruction method and apparatus, device, and storage medium
CN111882666B (zh) * 2020-07-20 2022-06-21 浙江商汤科技开发有限公司 Three-dimensional mesh model reconstruction method and apparatus, device, and storage medium

Also Published As

Publication number Publication date
US11403819B2 (en) 2022-08-02
CN109191584B (zh) 2020-09-18
EP3839894A4 (en) 2021-12-15
EP3839894A1 (en) 2021-06-23
US20210174587A1 (en) 2021-06-10
CN109191584A (zh) 2019-01-11

Similar Documents

Publication Publication Date Title
WO2020034743A1 (zh) Three-dimensional model processing method and apparatus, electronic device, and readable storage medium
WO2020034785A1 (zh) Three-dimensional model processing method and apparatus
US11010967B2 (en) Three dimensional content generating apparatus and three dimensional content generating method thereof
EP3614340B1 (en) Methods and devices for acquiring 3d face, and computer readable storage media
WO2019228473A1 (zh) Method and apparatus for beautifying a face image
WO2018188535A1 (zh) Face image processing method, apparatus, and electronic device
US8126268B2 (en) Edge-guided morphological closing in segmentation of video sequences
US8873835B2 (en) Methods and apparatus for correcting disparity maps using statistical analysis on local neighborhoods
US11069151B2 (en) Methods and devices for replacing expression, and computer readable storage media
WO2022012085A1 (zh) Face image processing method, apparatus, storage medium, and electronic device
CN111368717B (zh) Gaze determination method, apparatus, electronic device, and computer-readable storage medium
WO2020034698A1 (zh) Special effect processing method and apparatus based on a three-dimensional model, and electronic device
CN109937434B (zh) Image processing method, apparatus, terminal, and storage medium
WO2020034786A1 (zh) Three-dimensional model processing method, apparatus, electronic device, and storage medium
WO2019041967A1 (zh) Hand and image detection method and system, hand segmentation method, storage medium, and device
WO2020034738A1 (zh) Three-dimensional model processing method, apparatus, electronic device, and readable storage medium
Hernandez et al. Near laser-scan quality 3-D face reconstruction from a low-quality depth stream
CN109242760B (zh) Face image processing method, apparatus, and electronic device
US9959672B2 (en) Color-based dynamic sub-division to generate 3D mesh
US20180137623A1 (en) Image segmentation using user input speed
CN112767278A (zh) Image dehazing method based on a non-uniform atmospheric light prior, and related device
CN110047126B (zh) Method, apparatus, electronic device, and computer-readable storage medium for rendering images
CN111667553A (zh) Face color filling method, apparatus, and electronic device for avatar pixelation
US11551368B2 (en) Electronic devices, methods, and computer program products for controlling 3D modeling operations based on pose metrics
CN107742316B (zh) Image stitching point acquisition method and acquisition device

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 19850109

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019850109

Country of ref document: EP

Effective date: 20210316