WO2020034738A1 - Three-dimensional model processing method and apparatus, electronic device and readable storage medium - Google Patents

Three-dimensional model processing method and apparatus, electronic device and readable storage medium

Info

Publication number
WO2020034738A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional model
information
acceleration
key point
human body
Prior art date
Application number
PCT/CN2019/090557
Other languages
French (fr)
Chinese (zh)
Inventor
张弓
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2020034738A1 publication Critical patent/WO2020034738A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Definitions

  • the present disclosure relates to the technical field of electronic devices, and in particular, to a method, an apparatus, an electronic device, and a readable storage medium for processing a three-dimensional model.
  • 3D model reconstruction is the establishment of a mathematical model suitable for computer representation and processing. It is the basis for processing and manipulating an object and analyzing its properties in a computer environment, and it is also a key technology for building, in a computer, a virtual reality that represents the objective world. Reconstruction of the model is usually realized by processing the key points in the three-dimensional model.
  • In actual operation, when the electronic device associated with the three-dimensional model shakes or the ambient light changes, the three-dimensional model also changes accordingly. For example, when the electronic device moves or is exposed to different light, the three-dimensional model of the face changes as well: bangs and facial muscles swing as the electronic device shakes, and the light on the face changes with the ambient light.
  • the present disclosure aims to solve at least one of the technical problems in the related art.
  • To this end, the present disclosure proposes a method for processing a three-dimensional model, so as to solve the technical problem in the related art that, when the electronic device associated with the three-dimensional model shakes or the ambient light changes, the three-dimensional model changes accordingly, so that the three-dimensional model of the human body cannot realistically reflect the effect of the environment.
  • the present disclosure proposes a processing device for a three-dimensional model.
  • the present disclosure proposes an electronic device.
  • the present disclosure proposes a computer-readable storage medium.
  • An embodiment of one aspect of the present disclosure provides a method for processing a three-dimensional model, including:
  • acquiring a three-dimensional model of a human body, wherein the three-dimensional model includes a plurality of key points, a model frame formed by connecting the plurality of key points, and texture information covering the model frame;
  • acquiring environmental information of the environment in which the three-dimensional model is constructed; and
  • adjusting the positions of some key points in the three-dimensional model according to the environmental information, and/or rendering the texture information of the three-dimensional model.
  • The method for processing a three-dimensional model in the embodiments of the present disclosure obtains a three-dimensional model of a human body, wherein the three-dimensional model includes a plurality of key points, a model frame formed by connecting the plurality of key points, and texture information covering the model frame; obtains environmental information of the environment in which the three-dimensional model is constructed; and, according to the environmental information, adjusts the positions of some key points in the three-dimensional model and/or renders the texture information of the three-dimensional model. Therefore, by adjusting the positions of some key points of the three-dimensional human body model and rendering its texture information, the model reflects the effect of the environment more realistically, and the fidelity of the special effects displayed by the three-dimensional model is improved.
  • An embodiment of another aspect of the present disclosure provides a three-dimensional model processing device, including:
  • a first acquisition module for acquiring a three-dimensional model of a human body; wherein the three-dimensional model includes a plurality of key points, a model frame formed by a plurality of key points connected, and texture information covering the model frame;
  • a second acquisition module configured to acquire environmental information of an environment in which the three-dimensional model is constructed
  • a processing module, configured to adjust the positions of some key points in the three-dimensional model and/or render the texture information of the three-dimensional model according to the environmental information.
  • The apparatus for processing a three-dimensional model in the embodiments of the present disclosure obtains a three-dimensional model of a human body, wherein the three-dimensional model includes multiple key points, a model frame formed by connecting the multiple key points, and texture information covering the model frame; obtains environmental information of the environment in which the three-dimensional model is constructed; and, according to the environmental information, adjusts the positions of some key points in the three-dimensional model and/or renders the texture information of the three-dimensional model. Therefore, by adjusting the positions of some key points of the three-dimensional human body model and rendering its texture information, the model reflects the effect of the environment more realistically, and the fidelity of the special effects displayed by the three-dimensional model is improved.
  • An embodiment of another aspect of the present disclosure provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor. When the processor executes the program, the method for processing a three-dimensional model according to the foregoing embodiments is implemented.
  • An embodiment of another aspect of the present disclosure provides a computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the method for processing a three-dimensional model according to the foregoing embodiment is implemented.
  • FIG. 1 is a schematic flowchart of a three-dimensional model processing method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of adjusting a position of a target key point according to an embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of a three-dimensional model processing apparatus according to an embodiment of the present disclosure
  • FIG. 4 is a schematic structural diagram of another three-dimensional model processing apparatus according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of an internal structure of an electronic device in an embodiment
  • FIG. 6 is a schematic diagram of an image processing circuit as a possible implementation manner
  • FIG. 7 is a schematic diagram of an image processing circuit as another possible implementation manner.
  • FIG. 1 is a schematic flowchart of a three-dimensional model processing method according to an embodiment of the present disclosure.
  • the processing method of the three-dimensional model includes the following steps:
  • the electronic device may be a hardware device such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device, which has various operating systems, a touch screen, and / or a display screen.
  • Step 101 Obtain a three-dimensional model of the human body.
  • the three-dimensional model includes a plurality of key points, a model frame formed by connecting the plurality of key points, and texture information covering the model frame.
  • the three-dimensional model of the human body obtained in this embodiment includes multiple key points and a model frame formed by the connection of multiple key points, and texture information covering the model frame.
  • the key points and the model frame formed by the connection of multiple key points can be expressed in the form of three-dimensional coordinates.
  • Because different regions of the human body have different skin textures, different regions of the three-dimensional human body model correspond to different texture information. For example, the skin textures of the hair, the eyes, the face, and the hands are all different.
  • The three-dimensional model of the human body in this embodiment is obtained by performing three-dimensional reconstruction from the depth information and the human body image, rather than by simply acquiring RGB data and depth data.
  • the depth information and the color information corresponding to the two-dimensional image of the human body can be fused to obtain a three-dimensional model of the human body.
  • the key point is a conspicuous point on the human body or a point on a key position.
  • the key points may be hair, eyes, nose, mouth, hands, and the like.
  • Further, based on a human body key point detection technique, key point recognition can be performed on the human body image to obtain the key points corresponding to the image, so that the multiple key points can be connected, according to the relative position of each key point in three-dimensional space, to form the frame of the three-dimensional model, and the texture information covering the model frame is then obtained.
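  • As a rough illustration of this step (not the patented reconstruction itself), the sketch below lifts detected 2D key points into 3D coordinates using the depth map and a pinhole camera model, then connects them into a model frame through a predefined edge list. The intrinsics fx, fy, cx, cy, the key point names, and the edge list are illustrative assumptions.

```python
import numpy as np

def lift_keypoints(keypoints_2d, depth_map, fx, fy, cx, cy):
    """Lift 2D key points (pixel coordinates) to 3D camera-space coordinates.

    keypoints_2d: dict name -> (u, v) pixel position in the color image,
                  assumed to be already aligned with the depth map.
    depth_map:    H x W array of depth values in metres.
    """
    keypoints_3d = {}
    for name, (u, v) in keypoints_2d.items():
        z = float(depth_map[v, u])        # depth at the key point
        x = (u - cx) * z / fx             # pinhole back-projection
        y = (v - cy) * z / fy
        keypoints_3d[name] = np.array([x, y, z])
    return keypoints_3d

# Hypothetical key points and edges (a real model uses many more of both).
keypoints_2d = {"nose": (320, 240), "left_eye": (300, 220), "right_eye": (340, 220)}
edges = [("left_eye", "nose"), ("right_eye", "nose")]

depth_map = np.full((480, 640), 0.6)      # dummy depth: 0.6 m everywhere
kp3d = lift_keypoints(keypoints_2d, depth_map, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
model_frame = [(kp3d[a], kp3d[b]) for a, b in edges]   # frame = connected key points
```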
  • Step 102 Obtain environmental information of an environment in which a three-dimensional model is constructed.
  • the environment information includes acceleration information and lighting information of an environment in which a three-dimensional model is constructed.
  • It should be noted that, when the three-dimensional model is constructed, acceleration information may be generated by movements of the electronic device caused by jitter, shaking, and the like. At the same time, the three-dimensional model should change accordingly; for example, as the electronic device shakes, the hair or the facial muscles in the three-dimensional model of the human body will also swing.
  • When the intensity of the light in the environment of the electronic device changes, the three-dimensional model of the human body also changes. For example, when the electronic device is illuminated with strong light, the three-dimensional model will also appear to be illuminated by strong light.
  • In the embodiment of the present disclosure, a first acceleration vector is measured by an acceleration sensor of the electronic device, a second acceleration vector is measured by a gravity sensor of the electronic device, the measured first acceleration vector and second acceleration vector are then combined into a synthesized acceleration vector, and the acceleration information in the environmental information is finally determined according to the synthesized acceleration vector.
  • the acceleration vector includes the magnitude and direction of the acceleration.
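  • A minimal sketch of this synthesis follows, assuming each sensor reports a 3-axis reading in m/s²; the function and variable names are illustrative, not an actual sensor API.

```python
import numpy as np

def synthesize_acceleration(accel_reading, gravity_reading):
    """Combine the accelerometer vector and the gravity-sensor vector into one
    acceleration vector and return its magnitude and unit direction."""
    first = np.asarray(accel_reading, dtype=float)     # first acceleration vector
    second = np.asarray(gravity_reading, dtype=float)  # second acceleration vector
    combined = first + second                          # vector synthesis
    magnitude = float(np.linalg.norm(combined))
    direction = combined / magnitude if magnitude > 0 else np.zeros(3)
    return magnitude, direction

# Example: the device is shaken sideways while gravity points along -z.
magnitude, direction = synthesize_acceleration([1.5, 0.0, 0.2], [0.0, 0.0, -9.8])
```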
  • the ambient light sensor in the electronic device associated with the three-dimensional model acquires illumination information in the environment where the electronic device is located.
  • the lighting information includes the intensity of light in the environment and / or the angle information of the light illuminating the screen of the electronic device.
  • Step 103 Adjust the positions of some key points in the 3D model according to the environmental information, and/or render the texture information of the 3D model.
  • a target key point to be adjusted is determined from a plurality of key points of the obtained three-dimensional model to adjust the position of the target key point.
  • When acceleration information is generated because the electronic device associated with the three-dimensional model vibrates or shakes, the three-dimensional model should change accordingly. Therefore, the positions of the target key points to be adjusted are adjusted according to the acceleration information obtained for the electronic device, and the texture information is then rendered onto the three-dimensional model after the position adjustment, so that the three-dimensional model can show more realistic special effects.
  • the light intensity and the light angle irradiated to the electronic device are determined according to the lighting information of the electronic device associated with the three-dimensional model, and the size of the highlight value at different positions of the three-dimensional model is adjusted according to the light intensity and the light angle. Furthermore, the 3D model after the position adjustment process is subjected to light effect rendering of the skin texture, so that the light intensity on the surface of the 3D model changes with the change of the lighting information.
  • As another possible implementation, the intensity of the light striking the electronic device is determined according to the illumination information of the electronic device associated with the three-dimensional model, and the highlight values at different positions of the three-dimensional model are adjusted according to the light intensity. For example, when the electronic device is illuminated with strong light, the three-dimensional model should look as if it were illuminated by strong light; therefore, the highlight values at different positions of the model are adjusted and the light effect of the skin texture is rendered, so that the light intensity on the surface of the three-dimensional model changes with the intensity of the ambient light.
  • As another possible implementation, when the electronic device flips or shakes, the angle at which the ambient light strikes the electronic device changes; ambient light of the same intensity illuminating the electronic device from different angles produces different light intensities at different positions of the three-dimensional model. The angle of the light striking the electronic device is therefore determined from the illumination information, the highlight values at different positions of the three-dimensional model are adjusted accordingly, and the light effect of the skin texture is rendered, so that the light intensity on the model surface changes with the angle of illumination.
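  • The patent does not specify a shading formula, so the sketch below uses a simple Lambert-style cosine term as one plausible way to scale a per-vertex highlight value by the measured light intensity and incidence angle; all names and values are illustrative assumptions.

```python
import numpy as np

def adjust_highlights(vertex_normals, light_intensity, light_direction, base_highlight=1.0):
    """Scale the highlight value at each position of the model by the ambient light.

    vertex_normals:  (N, 3) unit normals of the 3D model surface.
    light_intensity: scalar intensity from the ambient light sensor.
    light_direction: unit vector of the light hitting the device screen.
    """
    normals = np.asarray(vertex_normals, dtype=float)
    light = np.asarray(light_direction, dtype=float)
    # Stronger light and more frontal incidence give a larger highlight value.
    cos_angle = np.clip(normals @ light, 0.0, None)
    return base_highlight * light_intensity * cos_angle

normals = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
highlights = adjust_highlights(normals, light_intensity=0.8,
                               light_direction=np.array([0.0, 0.0, 1.0]))
```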
  • The method for processing a three-dimensional model obtains a three-dimensional model of a human body, wherein the three-dimensional model includes a plurality of key points, a model frame formed by connecting the plurality of key points, and texture information covering the model frame; obtains environmental information of the environment in which the three-dimensional model is constructed; and, according to the environmental information, adjusts the positions of some key points in the three-dimensional model and/or renders the texture information of the three-dimensional model. In this way, the three-dimensional human body model reflects the effect of the environment more realistically, and the fidelity of the special effects displayed by the three-dimensional model is improved.
  • In a possible implementation, as shown in FIG. 2, the position of the target key point is adjusted according to the acceleration information. The specific steps are as follows:
  • Step 201 Read a correspondence between a pre-stored target keypoint and a conversion coefficient.
  • the conversion coefficient refers to a conversion coefficient between acceleration and displacement corresponding to a target key point.
  • the value of the conversion coefficient corresponding to each target keypoint is determined according to the material of the three-dimensional model corresponding to each target keypoint and / or the relative position in the three-dimensional model. Therefore, the target keypoints at different positions correspond to different conversion coefficients.
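  • One way to realize such a pre-stored correspondence is a lookup table keyed by target key point, with the coefficient chosen from the material and relative position of the key point; the names and values below are hypothetical.

```python
# Hypothetical pre-stored correspondence: target key point -> conversion
# coefficient between acceleration and displacement. The coefficient grows
# from the hair root towards the hair tip and differs per material.
CONVERSION_COEFFICIENTS = {
    "hair_root": 0.002,   # firmly attached, barely moves
    "hair_mid":  0.010,
    "hair_tip":  0.025,   # free end, swings the most
    "cheek":     0.004,
}

def conversion_coefficient(keypoint_name):
    """Read the pre-stored conversion coefficient for a target key point."""
    return CONVERSION_COEFFICIENTS[keypoint_name]
```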
  • For example, when the target key points belong to the hair part of the three-dimensional model, the target key points of the hair part and the displacement weight w of each target key point are set: the weight of the hair root is small, and the weight of the hair tip is large.
  • In this way, the value of the conversion coefficient corresponding to each target key point is determined according to the material of the three-dimensional model at that key point (for example, a black material) and/or the relative position of the key point in the three-dimensional model.
  • Step 202 Determine the displacement value corresponding to the target key point according to the acceleration value indicated by the acceleration information and the conversion coefficient between acceleration and displacement corresponding to the target key point.
  • Specifically, the magnitude of the acceleration is obtained, and the value of the conversion coefficient corresponding to the target key point is determined according to the pre-stored correspondence between target key points and conversion coefficients. Then, according to the acceleration at the target key point and the corresponding conversion coefficient between acceleration and displacement, the displacement value corresponding to the target key point is determined.
  • For example, when the electronic device moves with an acceleration V along a direction D, the movement direction of the hair part in the three-dimensional model is -V·D. The gravity acceleration G of the electronic device is obtained from the measurement of the gravity sensor, and combining the gravity acceleration G with the acceleration V gives the movement of the hair in the three-dimensional model, caused by the movement of the electronic device and the external force, as G - V·D. Based on the displacement weight w of each target key point, the displacement value S = w·(G - V·D) corresponding to each target key point of the hair part in the three-dimensional model is determined.
  • Step 203 Determine the adjustment direction of the target key point according to the acceleration direction indicated by the acceleration information.
  • the direction of acceleration is obtained, and then the adjustment direction of the target key point is determined.
  • Step 204 Move the target key point along the adjustment direction by a distance equal to the displacement value.
  • the position of the target key point is moved along the adjustment direction of the target key point determined by the acceleration direction, and the moving distance is a corresponding displacement value.
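  • Putting steps 201 to 204 together, the sketch below displaces each target key point along the direction of the synthesized acceleration (gravity G minus the device acceleration V·D) by a distance scaled by its conversion coefficient w, that is, S = w·(G - V·D); the variable names and sample values are illustrative assumptions, not the patent's exact computation.

```python
import numpy as np

def adjust_target_keypoints(keypoints_3d, weights, device_accel, gravity):
    """Steps 201-204: move each target key point according to the acceleration info.

    keypoints_3d: dict name -> np.array([x, y, z]) model coordinates.
    weights:      dict name -> conversion coefficient w between acceleration and displacement.
    device_accel: acceleration vector V*D measured for the electronic device.
    gravity:      gravity acceleration vector G from the gravity sensor.
    """
    net = np.asarray(gravity, dtype=float) - np.asarray(device_accel, dtype=float)  # G - V*D
    norm = float(np.linalg.norm(net))
    direction = net / norm if norm > 0 else np.zeros(3)   # adjustment direction (step 203)
    adjusted = {}
    for name, pos in keypoints_3d.items():
        displacement = weights[name] * norm                # S = w * |G - V*D|  (step 202)
        adjusted[name] = pos + displacement * direction    # move along the direction (step 204)
    return adjusted

keypoints = {"hair_tip": np.array([0.0, 0.18, 0.05]), "hair_root": np.array([0.0, 0.10, 0.02])}
weights = {"hair_tip": 0.025, "hair_root": 0.002}
new_keypoints = adjust_target_keypoints(keypoints, weights,
                                        device_accel=[1.5, 0.0, 0.0],
                                        gravity=[0.0, -9.8, 0.0])
```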
  • the present disclosure also proposes a processing device for a three-dimensional model.
  • FIG. 3 is a schematic structural diagram of a three-dimensional model processing apparatus according to an embodiment of the present disclosure.
  • the three-dimensional model processing device 100 includes a first acquisition module 110, a second acquisition module 120, and a processing module 130.
  • the first obtaining module 110 is configured to obtain a three-dimensional model of the human body.
  • the three-dimensional model includes multiple key points, a model frame formed by connecting the multiple key points, and texture information covering the model frame.
  • the second acquisition module 120 is configured to acquire environmental information of an environment in which a three-dimensional model is constructed.
  • the processing module 130 is configured to adjust the positions of some key points in the three-dimensional model and/or render the texture information of the three-dimensional model according to the environmental information.
  • the processing module 130 further includes:
  • a determining unit 131 is configured to determine a target key point to be adjusted from a plurality of key points of the three-dimensional model.
  • the adjusting unit 132 is configured to adjust the position of the target key point according to the acceleration information.
  • the rendering unit 133 is configured to perform skin texture rendering on the three-dimensional model after the position adjustment process.
  • the adjusting unit 132 is further configured to determine the displacement value corresponding to the target key point according to the acceleration value indicated by the acceleration information and the conversion coefficient between acceleration and displacement corresponding to the target key point, to determine the adjustment direction of the target key point according to the acceleration direction indicated by the acceleration information, and to move the target key point along the adjustment direction by the displacement value.
  • the second obtaining module 120 further includes:
  • the first measurement unit 121 is configured to measure a first acceleration vector through an acceleration sensor.
  • the second measurement unit 122 is configured to measure a second acceleration vector through a gravity sensor.
  • a combining unit 123 is configured to combine the first acceleration vector and the second acceleration vector.
  • the second determining unit 124 is configured to determine acceleration information in the environment information according to the synthesized acceleration vector.
  • the apparatus for processing a three-dimensional model further includes:
  • the reading module 140 is configured to read a correspondence between a pre-stored target key point and a conversion coefficient
  • the value of the conversion coefficient corresponding to each target keypoint is determined according to the material of the three-dimensional model corresponding to each target keypoint and / or the relative position in the three-dimensional model.
  • the rendering unit 133 is further configured to determine a light intensity and / or a light angle according to the lighting information; and further, perform a light effect rendering on the skin texture of the three-dimensional model according to the light intensity and / or a light angle.
  • The three-dimensional model processing apparatus obtains a three-dimensional model of a human body, wherein the three-dimensional model includes multiple key points, a model frame formed by connecting the multiple key points, and texture information covering the model frame; obtains environmental information of the environment in which the three-dimensional model is constructed; and, according to the environmental information, adjusts the positions of some key points in the three-dimensional model and/or renders the texture information of the three-dimensional model. In this way, the three-dimensional human body model reflects the effect of the environment more realistically, and the fidelity of the special effects displayed by the three-dimensional model is improved.
  • The present disclosure also provides an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the program, the method for processing a three-dimensional model as described in the foregoing embodiments is implemented.
  • FIG. 5 is a schematic diagram of the internal structure of the electronic device 200 in an embodiment.
  • the electronic device 200 includes a processor 220, a memory 230, a display 240, and an input device 250 connected through a system bus 210.
  • the memory 230 of the electronic device 200 stores an operating system and computer-readable instructions.
  • the computer-readable instructions may be executed by the processor 220 to implement a face recognition method according to an embodiment of the present disclosure.
  • the processor 220 is used to provide computing and control capabilities to support the operation of the entire electronic device 200.
  • The display 240 of the electronic device 200 may be a liquid crystal display or an electronic ink display, and the input device 250 may be a touch layer covering the display 240, a button, a trackball, or a touchpad provided on the housing of the electronic device 200, or an external keyboard, trackpad, or mouse.
  • the electronic device 200 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (e.g., a smart bracelet, a smart watch, a smart helmet, or smart glasses).
  • FIG. 5 is only a schematic diagram of a part of the structure related to the solution of the present disclosure, and does not constitute a limitation on the electronic device 200 to which the solution of the present disclosure is applied.
  • the specific electronic device 200 may include more or fewer components than shown in the figure, or some components may be combined, or have different component arrangements.
  • an image processing circuit according to an embodiment of the present disclosure is provided.
  • the image processing circuit may be implemented by using hardware and / or software components.
  • the image processing circuit specifically includes an image unit 310, a depth information unit 320, and a processing unit 330, wherein:
  • the image unit 310 is configured to output a two-dimensional human body image.
  • the depth information unit 320 is configured to output depth information.
  • a two-dimensional image may be acquired through the image unit 310, and depth information corresponding to the image may be acquired through the depth information unit 320.
  • the processing unit 330 is electrically connected to the image unit 310 and the depth information unit 320, respectively, and is configured to identify a target three-dimensional template matching the image according to the two-dimensional image obtained by the image unit and the corresponding depth information obtained by the depth information unit, and to output information associated with the target three-dimensional template.
  • Specifically, the two-dimensional image obtained by the image unit 310 may be sent to the processing unit 330, and the depth information corresponding to the image obtained by the depth information unit 320 may be sent to the processing unit 330; the processing unit 330 may then, based on the image and the depth information, identify the matching target three-dimensional template and output the information associated with it.
  • the image processing circuit may further include:
  • the image unit 310 may specifically include an image sensor 311 and an image signal processing (ISP) processor 312 that are electrically connected, wherein:
  • the image sensor 311 is configured to output original image data.
  • the ISP processor 312 is configured to output an image according to the original image data.
  • the original image data captured by the image sensor 311 is first processed by the ISP processor 312.
  • the ISP processor 312 analyzes the original image data to capture image statistics that can be used to determine one or more control parameters of the image sensor 311; the resulting information includes an image in YUV or RGB format.
  • the image sensor 311 may include a color filter array (such as a Bayer filter), and a corresponding photosensitive unit.
  • the image sensor 311 may obtain the light intensity and wavelength information captured by each photosensitive unit and provide a set of raw image data that can be processed by the ISP processor 312. After the ISP processor 312 processes the raw image data, an image in YUV or RGB format is obtained and sent to the processing unit 330.
  • When the ISP processor 312 processes the original image data, it can process the data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 may perform one or more image processing operations on the original image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit-depth accuracy.
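  • As a generic illustration of pixel-wise processing at different bit depths (not OPPO's actual ISP pipeline), the sketch below normalizes raw pixel values of 8-, 10-, 12-, or 14-bit depth to a common [0, 1] range and collects a simple statistic.

```python
import numpy as np

def normalize_raw(raw, bit_depth):
    """Normalize raw sensor data of a given bit depth to [0, 1], pixel by pixel."""
    if bit_depth not in (8, 10, 12, 14):
        raise ValueError("unsupported bit depth")
    max_value = (1 << bit_depth) - 1
    return np.asarray(raw, dtype=np.float32) / max_value

raw_10bit = np.array([[0, 512, 1023]], dtype=np.uint16)   # dummy 10-bit raw data
normalized = normalize_raw(raw_10bit, bit_depth=10)
mean_brightness = float(normalized.mean())                 # a simple image statistic
```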
  • the depth information unit 320 includes a structured light sensor 321 and a depth map generation chip 322 that are electrically connected, wherein:
  • the structured light sensor 321 is configured to generate an infrared speckle pattern.
  • the depth map generation chip 322 is configured to output depth information according to the infrared speckle map; the depth information includes a depth map.
  • the structured light sensor 321 projects speckle structured light onto a subject, obtains the structured light reflected by the subject, and images the structured light reflected by the subject to obtain an infrared speckle pattern.
  • the structured light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, so that the depth map generation chip 322 determines the morphological change of the structured light according to the infrared speckle pattern, and then determines the depth of the object to obtain a depth map; the depth map indicates the depth of each pixel in the infrared speckle pattern.
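  • A much-simplified way to picture how the deformation of the speckle pattern becomes depth is classical structured-light triangulation, depth = focal length * baseline / disparity. The sketch below assumes a per-pixel disparity between the observed and reference speckle patterns has already been measured; it is illustrative only and not the depth map generation chip's actual algorithm.

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline_m, eps=1e-6):
    """Classical structured-light triangulation: depth = focal_length * baseline / disparity.

    disparity:       per-pixel shift (in pixels) of the observed speckle pattern
                     relative to the reference pattern.
    focal_length_px: focal length of the infrared camera in pixels.
    baseline_m:      projector-to-camera distance in metres.
    """
    disparity = np.asarray(disparity, dtype=float)
    depth = focal_length_px * baseline_m / np.maximum(disparity, eps)
    depth[disparity <= 0] = 0.0        # mark pixels without a valid measurement
    return depth

disparity = np.array([[4.0, 5.0], [0.0, 8.0]])   # dummy disparities in pixels
depth_map = disparity_to_depth(disparity, focal_length_px=580.0, baseline_m=0.05)
```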
  • the depth map generation chip 322 sends the depth map to the processing unit 330.
  • the processing unit 330 includes a CPU 331 and a GPU (Graphics Processing Unit) 332 that are electrically connected, wherein:
  • the CPU 331 is configured to align the image and the depth map according to the calibration data, and output a three-dimensional model according to the aligned image and the depth map.
  • the GPU 332 is configured to determine a matching target 3D template according to the 3D model, and output information related to the target 3D template.
  • Specifically, the CPU 331 obtains the human body image from the ISP processor 312 and obtains the depth map from the depth map generation chip 322; in combination with calibration data obtained in advance, the two-dimensional image can be aligned with the depth map, thereby determining the depth information corresponding to each pixel in the image. Furthermore, the CPU 331 performs three-dimensional reconstruction based on the depth information and the image to obtain a three-dimensional model.
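  • A minimal sketch of this alignment and reconstruction step, with the calibration data reduced to camera intrinsics and the depth map assumed to be already registered to the color image: every pixel is back-projected to a 3D point with its color attached. Names and values are assumptions for illustration.

```python
import numpy as np

def reconstruct_point_cloud(color_image, depth_map, fx, fy, cx, cy):
    """Back-project an aligned (color, depth) pair into a colored 3D point cloud."""
    h, w = depth_map.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_map.astype(float)
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = color_image.reshape(-1, 3)
    valid = points[:, 2] > 0               # drop pixels with no depth measurement
    return points[valid], colors[valid]

color = np.zeros((480, 640, 3), dtype=np.uint8)   # dummy color image aligned to the depth map
depth = np.full((480, 640), 0.6)                  # dummy depth map in metres
points, colors = reconstruct_point_cloud(color, depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```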
  • the CPU 331 sends the three-dimensional model to the GPU 332, so that the GPU 332 executes the three-dimensional model processing method described in the foregoing embodiments according to the three-dimensional model, adjusting the positions of some key points in the three-dimensional model and/or rendering the skin texture of the three-dimensional model.
  • the GPU 332 may determine a matching target three-dimensional template according to the three-dimensional model, and then perform annotation in the image according to the information associated with the target three-dimensional template, and output an image with the labeled information.
  • the image processing circuit may further include a display unit 340.
  • the display unit 340 is electrically connected to the GPU 332 and is configured to display an image with labeled information.
  • the beautified image processed by the GPU 332 may be displayed by the display 340.
  • the image processing circuit may further include: an encoder 350 and a memory 360.
  • the beautified image processed by the GPU 332 may also be encoded by the encoder 350 and stored in the memory 360, where the encoder 350 may be implemented by a coprocessor.
  • the memory 360 may be multiple or divided into multiple storage spaces.
  • the memory storing the image data processed by the GPU 332 may be a dedicated memory or a dedicated storage space, and may include a DMA (Direct Memory Access) feature.
  • the memory 360 may be configured to implement one or more frame buffers.
  • the original image data captured by the image sensor 311 is first processed by the ISP processor 312.
  • the ISP processor 312 analyzes the original image data to capture image statistics that can be used to determine one or more control parameters of the image sensor 311.
  • the resulting information, including an image in YUV or RGB format, is sent to the CPU 331.
  • the structured light sensor 321 projects speckle structured light onto a subject, acquires the structured light reflected by the subject, and forms an infrared speckle pattern based on the reflected structured light.
  • the structured light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, so that the depth map generation chip 322 determines the morphological change of the structured light according to the infrared speckle pattern, and then determines the depth of the object to obtain a depth map (Depth Map).
  • the depth map generation chip 322 sends the depth map to the CPU 331.
  • the CPU 331 obtains a two-dimensional human body image from the ISP processor 312 and obtains a depth map from the depth map generation chip 322; in combination with calibration data obtained in advance, the two-dimensional image can be aligned with the depth map, thereby determining the depth information corresponding to each pixel in the image. Furthermore, the CPU 331 performs three-dimensional reconstruction based on the depth information and the two-dimensional image to obtain a reconstructed three-dimensional model.
  • the CPU 331 sends the three-dimensional model of the human body to the GPU 332, so that the GPU 332 performs the three-dimensional model processing method described in the foregoing embodiment according to the three-dimensional model of the human body, realizes position adjustment of some key points in the three-dimensional model, and / or renders skin texture of the three-dimensional model.
  • the processed three-dimensional model obtained by the processing by the GPU 332 may be displayed on the display 340, and / or stored in the memory 360 after being encoded by the encoder 350.
  • the following are the steps for implementing the control method using the processor 220 in FIG. 5 or the image processing circuit (specifically, the CPU 331 and the GPU 332) in FIG.
  • the CPU 331 obtains a two-dimensional human body image and the depth information corresponding to the human body image; the CPU 331 performs three-dimensional reconstruction according to the depth information and the human body image to obtain a three-dimensional model of the human body, wherein the three-dimensional model includes multiple key points, a model frame formed by connecting the multiple key points, and the skin texture covering the model frame; the CPU 331 further obtains the environmental information of the environment in which the electronic device associated with the three-dimensional model is located; and the GPU 332 adjusts the positions of some key points in the three-dimensional model according to the environmental information, and/or renders the skin texture of the three-dimensional model.
  • The present disclosure also proposes a computer-readable storage medium on which a computer program is stored; when the instructions in the storage medium are executed by a processor, the method for processing a three-dimensional model as described in the foregoing embodiments is implemented.
  • The terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means at least two, for example, two or three, unless specifically defined otherwise.
  • any process or method description in a flowchart or otherwise described herein can be understood as representing a module, fragment, or portion of code that includes one or more executable instructions for implementing steps of a custom logic function or process
  • the scope of the preferred embodiments of the present disclosure includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present disclosure belong.
  • The logic and/or steps represented in a flowchart or otherwise described herein, for example a sequenced list of executable instructions that may be considered to implement a logical function, may be embodied in any computer-readable medium, for use by, or in combination with, an instruction execution system, apparatus, or device (such as a computer-based system, a system that includes a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device).
  • a "computer-readable medium” may be any device that can contain, store, communicate, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Examples of computer-readable media include the following: an electrical connection (an electronic device) with one or more wirings, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM).
  • the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise suitably processing it, and can then be stored in a computer memory.
  • portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.
  • multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, the steps may be implemented by any one of, or a combination of, the following techniques known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits (ASICs) having suitable combinational logic gates, programmable gate arrays (PGA), field programmable gate arrays (FPGA), and the like.
  • a person of ordinary skill in the art can understand that all or part of the steps carried by the methods in the foregoing embodiments can be implemented by a program instructing related hardware.
  • the program can be stored in a computer-readable storage medium.
  • When the program is executed, one of or a combination of the steps of the method embodiments is performed.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing module, or each unit may exist separately physically, or two or more units may be integrated into one module.
  • the above integrated modules may be implemented in the form of hardware or software functional modules. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • the aforementioned storage medium may be a read-only memory, a magnetic disk, or an optical disk.

Abstract

A three-dimensional model processing method and apparatus, an electronic device, and a readable storage medium. The method comprises: acquiring a three-dimensional model of a human body, the three-dimensional model comprising a plurality of key points, a model frame formed by connecting the plurality of key points, and texture information covering the model frame (101); acquiring environmental information concerning the environment in which the three-dimensional model is constructed (102); and adjusting the positions of some of the key points in the three-dimensional model according to the environmental information, and/or rendering the texture information of the three-dimensional model (103). The method enables a three-dimensional model of a human body to more realistically reflect the effect of the environment, improving the fidelity of the display effects of the three-dimensional model.

Description

Three-dimensional model processing method and apparatus, electronic device and readable storage medium
Cross-reference to related applications
This disclosure claims priority to Chinese Patent Application No. 201810934594.1, filed on August 16, 2018 by OPPO Guangdong Mobile Communication Co., Ltd. and entitled "Three-dimensional model processing method, apparatus, electronic device, and readable storage medium".
Technical field
The present disclosure relates to the technical field of electronic devices, and in particular, to a method, an apparatus, an electronic device, and a readable storage medium for processing a three-dimensional model.
Background art
3D model reconstruction is the establishment of a mathematical model suitable for computer representation and processing. It is the basis for processing and manipulating an object and analyzing its properties in a computer environment, and it is also a key technology for building, in a computer, a virtual reality that represents the objective world. Reconstruction of the model is usually realized by processing the key points in the three-dimensional model.
In actual operation, when the electronic device associated with the three-dimensional model shakes or the ambient light changes, the three-dimensional model also changes accordingly. For example, when the electronic device moves or is exposed to different light, the three-dimensional model of the face changes as well: bangs and facial muscles swing as the electronic device shakes, and the light on the face changes with the ambient light.
Summary of the Invention
The present disclosure aims to solve at least one of the technical problems in the related art.
To this end, the present disclosure proposes a method for processing a three-dimensional model, so as to solve the technical problem in the related art that, when the electronic device associated with the three-dimensional model shakes or the ambient light changes, the three-dimensional model changes accordingly, so that the three-dimensional model of the human body cannot realistically reflect the effect of the environment.
The present disclosure proposes an apparatus for processing a three-dimensional model.
The present disclosure proposes an electronic device.
The present disclosure proposes a computer-readable storage medium.
An embodiment of one aspect of the present disclosure provides a method for processing a three-dimensional model, including:
acquiring a three-dimensional model of a human body, wherein the three-dimensional model includes a plurality of key points, a model frame formed by connecting the plurality of key points, and texture information covering the model frame;
acquiring environmental information of the environment in which the three-dimensional model is constructed;
adjusting the positions of some key points in the three-dimensional model according to the environmental information, and/or rendering the texture information of the three-dimensional model.
The method for processing a three-dimensional model in the embodiments of the present disclosure obtains a three-dimensional model of a human body, wherein the three-dimensional model includes a plurality of key points, a model frame formed by connecting the plurality of key points, and texture information covering the model frame; obtains environmental information of the environment in which the three-dimensional model is constructed; and, according to the environmental information, adjusts the positions of some key points in the three-dimensional model and/or renders the texture information of the three-dimensional model. Therefore, by adjusting the positions of some key points of the three-dimensional human body model and rendering its texture information, the model reflects the effect of the environment more realistically, and the fidelity of the special effects displayed by the three-dimensional model is improved.
An embodiment of another aspect of the present disclosure provides an apparatus for processing a three-dimensional model, including:
a first acquisition module, configured to acquire a three-dimensional model of a human body, wherein the three-dimensional model includes a plurality of key points, a model frame formed by connecting the plurality of key points, and texture information covering the model frame;
a second acquisition module, configured to acquire environmental information of the environment in which the three-dimensional model is constructed;
a processing module, configured to adjust the positions of some key points in the three-dimensional model and/or render the texture information of the three-dimensional model according to the environmental information.
The apparatus for processing a three-dimensional model in the embodiments of the present disclosure obtains a three-dimensional model of a human body, wherein the three-dimensional model includes multiple key points, a model frame formed by connecting the multiple key points, and texture information covering the model frame; obtains environmental information of the environment in which the three-dimensional model is constructed; and, according to the environmental information, adjusts the positions of some key points in the three-dimensional model and/or renders the texture information of the three-dimensional model. Therefore, by adjusting the positions of some key points of the three-dimensional human body model and rendering its texture information, the model reflects the effect of the environment more realistically, and the fidelity of the special effects displayed by the three-dimensional model is improved.
An embodiment of another aspect of the present disclosure provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor. When the processor executes the program, the method for processing a three-dimensional model described in the foregoing embodiments is implemented.
An embodiment of another aspect of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the method for processing a three-dimensional model according to the foregoing embodiments is implemented.
Additional aspects and advantages of the present disclosure will be given in part in the following description, and in part will become apparent from the following description or be learned through practice of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and/or additional aspects and advantages of the present disclosure will become apparent and easily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a three-dimensional model processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of adjusting the position of a target key point according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a three-dimensional model processing apparatus according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of another three-dimensional model processing apparatus according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of the internal structure of an electronic device in an embodiment;
FIG. 6 is a schematic diagram of an image processing circuit as one possible implementation;
FIG. 7 is a schematic diagram of an image processing circuit as another possible implementation.
具体实施方式detailed description
下面详细描述本公开的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施例是示例性的,旨在用于解释本公开,而不能理解为对本公开的限制。Hereinafter, embodiments of the present disclosure will be described in detail. Examples of the embodiments are shown in the accompanying drawings, wherein the same or similar reference numerals represent the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, and are intended to explain the present disclosure, and should not be construed as limiting the present disclosure.
下面参考附图描述本公开实施例的三维模型处理方法和装置。The three-dimensional model processing method and device according to the embodiments of the present disclosure are described below with reference to the drawings.
图1为本公开实施例所提供的一种三维模型处理方法的流程示意图。FIG. 1 is a schematic flowchart of a three-dimensional model processing method according to an embodiment of the present disclosure.
如图1所示,该三维模型的处理方法包括以下步骤:As shown in FIG. 1, the processing method of the three-dimensional model includes the following steps:
本公开实施例中,电子设备可以为手机、平板电脑、个人数字助理、穿戴式设备等具有各种操作系统、触摸屏和/或显示屏的硬件设备。In the embodiment of the present disclosure, the electronic device may be a hardware device such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device, which has various operating systems, a touch screen, and / or a display screen.
步骤101,获取人体的三维模型;其中,三维模型包括多个关键点,以及多个关键点连接形成的模型框架,以及覆盖模型框架的纹理信息。Step 101: Obtain a three-dimensional model of the human body. The three-dimensional model includes a plurality of key points, a model frame formed by connecting the plurality of key points, and texture information covering the model frame.
本实施例中获取的人体的三维模型,包括多个关键点以及多个关键点连接形成的模型框架,以及覆盖模型框架的纹理信息。其中,关键点以及多个关键点连接形成的模型框架可以采用三维坐标的形式表示出来。The three-dimensional model of the human body obtained in this embodiment includes multiple key points and a model frame formed by the connection of multiple key points, and texture information covering the model frame. Among them, the key points and the model frame formed by the connection of multiple key points can be expressed in the form of three-dimensional coordinates.
由于人体不同区域对应的皮肤纹理不同,因此人体三维模型的不同区域,对应的纹理信息也不相同。例如,头发部位、眼睛部位、面部区域、手部等位置的皮肤纹理均不相同。Because different regions of the human body have different skin textures, different regions of the human 3D model have different texture information. For example, the skin texture of hair, eye, face, hand, etc. is different.
本实施例中人体的三维模型的获取,是根据深度信息和人体图像,进行三维重构得到的,而不是简单的获取RGB数据和深度数据。The acquisition of the three-dimensional model of the human body in this embodiment is obtained by performing three-dimensional reconstruction according to the depth information and the human body image, instead of simply acquiring RGB data and depth data.
作为一种可能的实现方式,可以将深度信息与人体二维图像对应的色彩信息进行融合,得到人体三维模型。具体地,可以基于人体关键点检测技术,从深度信息提取人体的关键点,以及从色彩信息中提取人体的关键点,而后将从深度信息中提取的关键点和从色彩信息中提取的关键点,进行配准和关键点融合处理,最终根据融合后的关键点,生成人体三维模型。其中,关键点为人体上显眼的点,或者为关键位置上的点,例如关键点可以为头发、眼、鼻子、嘴、手等。As a possible implementation manner, the depth information and the color information corresponding to the two-dimensional image of the human body can be fused to obtain a three-dimensional model of the human body. Specifically, it is possible to extract the key points of the human body from the depth information and the key points of the human body from the color information based on the human key point detection technology, and then extract the key points from the depth information and the key points from the color information. , To perform registration and key point fusion processing, and finally generate a three-dimensional model of the human body based on the key points after fusion. The key point is a conspicuous point on the human body or a point on a key position. For example, the key points may be hair, eyes, nose, mouth, hands, and the like.
进一步的,可以基于人体关键点检测技术,对人体图像进行关键点识别,得到人体图像对应的关键点,从而可以根据各关键点在三维空间中的相对位置,将多个关键点连接形成三维模型的框架,进而获取覆盖模型框架的纹理信息。Further, based on human keypoint detection technology, keypoint recognition can be performed on a human image to obtain keypoints corresponding to the human image, so that multiple keypoints can be connected to form a three-dimensional model according to the relative position of each keypoint in three-dimensional space. Frame to obtain texture information covering the model frame.
步骤102,获取构建三维模型时所处环境的环境信息。Step 102: Obtain environmental information of an environment in which a three-dimensional model is constructed.
其中,环境信息,包括构建三维模型时所处环境的加速度信息以及光照信息。The environment information includes acceleration information and lighting information of an environment in which a three-dimensional model is constructed.
需要说明的是,构建三维模型时可能由于电子设备的抖动、晃动等原因运动,产生加速度信息。同时,三维模型也会发生相应的变化,例如,随着电子设备的抖动,人体三维模型中的头发或者面部肌肉等也会随着摆动。It should be noted that when constructing a three-dimensional model, acceleration information may be generated due to movements of the electronic device due to shaking, shaking, and the like. At the same time, the three-dimensional model will also change accordingly. For example, as the electronic device shakes, the hair or facial muscles in the three-dimensional model of the human body will also swing.
当电子设备所处环境光线的强度发生变化时,人体三维模型也会随着改变,例如,用强光照射电子设备的时,三维模型也会如同被强光照射一样。When the intensity of the ambient light in which the electronic device is located changes, the three-dimensional model of the human body also changes. For example, when the electronic device is illuminated with strong light, the three-dimensional model will also be illuminated by strong light.
本公开实施例中,首先通过电子设备配置的加速度传感器测量得到第一加速度向量,再通过电子设备配置的重力传感器测量得到第二加速度向量,进而对测得的第一加速度向量和第二加速度向量进行合成得到加速度向量,最终根据合成得到的加速度向量,确定环境信息中的加速度信息。其中,加速度向量包括加速度的大小和方向。In the embodiment of the present disclosure, a first acceleration vector is first measured by an acceleration sensor configured by an electronic device, and then a second acceleration vector is measured by a gravity sensor configured by the electronic device, and then the measured first acceleration vector and the second acceleration vector are obtained. Synthesis is performed to obtain an acceleration vector, and finally acceleration information in the environment information is determined according to the synthesized acceleration vector. The acceleration vector includes the magnitude and direction of the acceleration.
同样地,通过三维模型关联的电子设备中的环境光传感器获取该电子设备所处环境中的光照信息。光照信息包括环境中的光线的强度和/或光线照射电子设备屏幕的角度信息。Similarly, the ambient light sensor in the electronic device associated with the three-dimensional model acquires illumination information in the environment where the electronic device is located. The lighting information includes the intensity of light in the environment and / or the angle information of the light illuminating the screen of the electronic device.
步骤103,根据环境信息,对三维模型中的部分关键点进行位置调整,和/或对三维模型的纹理信息进行渲染。Step 103: Adjust position of some key points in the 3D model according to the environmental information, and / or render texture information of the 3D model.
本公开实施例中,从获得的三维模型的多个关键点中,确定待调整的目标关键点,以对目标关键点进行位置调整。In the embodiment of the present disclosure, a target key point to be adjusted is determined from a plurality of key points of the obtained three-dimensional model to adjust the position of the target key point.
由于三维模型关联的电子设备抖动或者晃动等原因产生加速度信息时,三维模型也做出相应的变化,因此,需要根据获得的三维模型关联的电子设备的加速度信息,对待调整的目标关键点进行位置调整,进一步地,再对位置调整处理后的三维模型,进行纹理信息渲染,使得三维模型能够展示出更加逼真的特效。When acceleration information is generated due to vibration or shaking of the electronic device associated with the three-dimensional model, the three-dimensional model also changes accordingly. Therefore, it is necessary to position the target key points to be adjusted according to the obtained acceleration information of the electronic device associated with the three-dimensional model. Adjust, further, render the texture information to the 3D model after the position adjustment process, so that the 3D model can show more realistic special effects.
Further, the intensity and angle of the light striking the electronic device are determined from the lighting information of the electronic device associated with the three-dimensional model, and the highlight values at different positions of the three-dimensional model are adjusted according to both the light intensity and the light angle. Light effect rendering of the skin texture is then performed on the position-adjusted three-dimensional model, so that the light intensity on the surface of the three-dimensional model changes as the lighting information changes.
As another possible implementation, the intensity of the light striking the electronic device is determined from the lighting information of the electronic device associated with the three-dimensional model, and the highlight values at different positions of the three-dimensional model are adjusted according to the light intensity. For example, when the electronic device is illuminated with strong light, the three-dimensional model should appear as if it were illuminated by strong light as well; therefore, the highlight values at different positions of the three-dimensional model are adjusted, and light effect rendering of the skin texture is performed on the three-dimensional model, so that the light intensity on the model surface changes with the illumination intensity.
As another possible implementation, when the electronic device flips, shakes, or otherwise moves, the angle at which ambient light strikes the electronic device changes, and ambient light of the same intensity striking the electronic device from different angles produces different light intensities at different positions of the three-dimensional model. The angle of the light striking the electronic device is determined from the lighting information of the electronic device associated with the three-dimensional model, the highlight values at different positions of the three-dimensional model are adjusted accordingly, and light effect rendering of the skin texture is performed on the three-dimensional model, so that the light intensity on the model surface changes with the illumination angle.
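As an illustrative sketch only, the following Python code shows one possible way to turn the light intensity and light angle into per-vertex highlight values; the cosine-based falloff, the shininess exponent, and all names are assumptions, since the disclosure does not fix a particular shading formula.

    import numpy as np

    def highlight_values(normals, light_dir, light_intensity, shininess=16.0):
        """Compute a per-vertex highlight value that grows with the ambient light
        intensity and with how directly the light hits each surface normal."""
        light_dir = np.asarray(light_dir, dtype=float)
        light_dir = light_dir / np.linalg.norm(light_dir)
        normals = np.asarray(normals, dtype=float)
        # Cosine of the angle between each unit normal and the light direction,
        # clamped at zero for surfaces facing away from the light.
        cos_angle = np.clip(normals @ light_dir, 0.0, None)
        return light_intensity * cos_angle ** shininess

    # Example: three vertices, light arriving head-on along +z with intensity 0.8.
    values = highlight_values([[0, 0, 1], [0, 1, 0], [0, 0.6, 0.8]], [0, 0, 1], 0.8)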
The three-dimensional model processing method of the embodiments of the present disclosure obtains a three-dimensional model of a human body, where the three-dimensional model includes a plurality of key points, a model frame formed by connecting the key points, and texture information covering the model frame; obtains environmental information of the environment in which the three-dimensional model is constructed; and, according to the environmental information, adjusts the positions of some key points of the three-dimensional model and/or renders the texture information of the three-dimensional model. In the present disclosure, adjusting the positions of some key points of the three-dimensional human body model and rendering its texture information make the model reflect the effects of the environment more realistically, improving the fidelity of the special effects displayed by the three-dimensional model.
As a possible implementation, for the determined target key points of the three-dimensional human body model, their positions are adjusted according to the acceleration information. To adjust the positions of the target key points accurately from the acceleration information, in the embodiment of the present disclosure an adjustment direction is determined for each target key point and the key point is then moved along that direction. Referring to FIG. 2, the specific steps are as follows:
步骤201,读取预存的目标关键点与转换系数之间的对应关系。Step 201: Read a correspondence between a pre-stored target keypoint and a conversion coefficient.
Here, the conversion coefficient refers to the conversion coefficient between acceleration and displacement corresponding to a target key point.
需要说明的是,目标关键点为多个,各目标关键点对应的转换系数取值,是根据各目标关键点对应三维模型的材质和/或在三维模型中的相对位置确定的。因此,不同位置的目标关键点对应不同的转换系数。It should be noted that there are multiple target keypoints, and the value of the conversion coefficient corresponding to each target keypoint is determined according to the material of the three-dimensional model corresponding to each target keypoint and / or the relative position in the three-dimensional model. Therefore, the target keypoints at different positions correspond to different conversion coefficients.
As an example, when the target key points belong to the hair of the three-dimensional model, the target key points of the hair and their displacement weights w are set; for example, key points near the hair root are given a small weight while key points near the hair tip are given a large weight. The value of the conversion coefficient of each target key point is then determined from the pre-stored correspondence between target key points and conversion coefficients, according to the material of the three-dimensional model at that key point (for example, a black hair material) and the relative position of the key point in the three-dimensional model.
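A minimal Python sketch of such root-to-tip displacement weights is shown below; the linear interpolation and the particular weight values are assumptions for illustration rather than values taken from the disclosure.

    import numpy as np

    def hair_strand_weights(num_points, root_weight=0.05, tip_weight=1.0):
        """Assign a displacement weight w to each key point along a hair strand:
        small near the root, large near the tip, interpolated in between."""
        return np.linspace(root_weight, tip_weight, num_points)

    weights = hair_strand_weights(5)  # array([0.05, 0.2875, 0.525, 0.7625, 1.0])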
步骤202,根据加速度信息指示的加速度值,以及目标关键点对应的加速度与位移之间的转换系数,确定目标关键点对应的位移值。Step 202: Determine the displacement value corresponding to the target key point according to the acceleration value indicated by the acceleration information and the conversion coefficient between acceleration and displacement corresponding to the target key point.
Specifically, the magnitude of the acceleration is obtained from the acceleration information measured by the acceleration sensor of the electronic device associated with the three-dimensional model, and the value of the conversion coefficient of each target key point is determined from the pre-stored correspondence between target key points and conversion coefficients. The displacement value of each target key point is then determined from the acceleration value of the target key point and the corresponding conversion coefficient between acceleration and displacement.
As an example, from the acceleration magnitude V and the acceleration direction D measured by the acceleration sensor of the electronic device, the movement direction of the hair in the three-dimensional model is determined to be -VD. Further, the gravitational acceleration G of the electronic device is obtained from the gravity sensor, and combining G with the acceleration V gives the movement of the hair in the three-dimensional model caused by the motion of the electronic device and the external force as G - VD. The displacement value of each target key point of the hair is then determined as S = w(G - VD).
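The displacement S = w(G - VD) of this example can be written as a short Python sketch; the array layout and the function name are assumptions of the sketch, not part of the disclosure.

    import numpy as np

    def displace_keypoints(keypoints, weights, accel_magnitude, accel_direction, gravity):
        """Apply the per-key-point displacement S = w * (G - V*D), where V and D are the
        acceleration magnitude and direction and G is the gravity acceleration vector."""
        accel_direction = np.asarray(accel_direction, dtype=float)
        gravity = np.asarray(gravity, dtype=float)
        motion = gravity - accel_magnitude * accel_direction  # G - V*D
        displacements = np.asarray(weights, dtype=float)[:, None] * motion
        return np.asarray(keypoints, dtype=float) + displacements

    # Example: two hair key points, device shaken along +x while gravity points along -z.
    moved = displace_keypoints([[0, 0, 0], [0, 0, 1]], [0.05, 1.0], 1.5, [1, 0, 0], [0, 0, -9.8])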
步骤203,根据加速度信息指示的加速度方向,确定目标关键点的调整方向。Step 203: Determine the adjustment direction of the target key point according to the acceleration direction indicated by the acceleration information.
具体地,根据三维模型关联的电子设备的加速度传感器测量得到的加速度信息,得到加速度的方向,进而确定目标关键点的调整方向。Specifically, according to the acceleration information measured by the acceleration sensor of the electronic device associated with the three-dimensional model, the direction of acceleration is obtained, and then the adjustment direction of the target key point is determined.
步骤204,沿调整方向,移动目标关键点,移动距离符合位移值。In step 204, the target key point is moved along the adjustment direction, and the moving distance conforms to the displacement value.
本实施例中,沿着通过加速度方向确定的目标关键点的调整方向,移动目标关键点的位置,其中,移动距离为对应的位移值。In this embodiment, the position of the target key point is moved along the adjustment direction of the target key point determined by the acceleration direction, and the moving distance is a corresponding displacement value.
The three-dimensional model processing method of the embodiments of the present disclosure reads the pre-stored correspondence between target key points and conversion coefficients; determines the displacement value of each target key point from the acceleration value indicated by the acceleration information and the conversion coefficient between acceleration and displacement of that key point; determines the adjustment direction of the target key point from the acceleration direction indicated by the acceleration information; and finally moves the target key point along the adjustment direction by a distance matching the displacement value. By adjusting the positions and directions of the target key points in the three-dimensional model, the method makes the three-dimensional human body model reflect the effects of the environment more realistically and improves the fidelity of the special effects displayed by the three-dimensional model.
为了实现上述实施例,本公开还提出一种三维模型的处理装置。In order to implement the above embodiments, the present disclosure also proposes a processing device for a three-dimensional model.
图3为本公开实施例提供的一种三维模型的处理装置的结构示意图。FIG. 3 is a schematic structural diagram of a three-dimensional model processing apparatus according to an embodiment of the present disclosure.
如图所示,该三维模型的处理装置100包括:第一获取模块110、第二获取模块120以及处理模块130。As shown in the figure, the three-dimensional model processing device 100 includes a first acquisition module 110, a second acquisition module 120, and a processing module 130.
第一获取模块110,用于获取人体的三维模型;其中,三维模型包括多个关键点,以及多个关键点连接形成的模型框架,以及覆盖模型框架的纹理信息。The first obtaining module 110 is configured to obtain a three-dimensional model of the human body. The three-dimensional model includes multiple key points, a model frame formed by connecting the multiple key points, and texture information covering the model frame.
第二获取模块120,用于获取构建三维模型时所处环境的环境信息。The second acquisition module 120 is configured to acquire environmental information of an environment in which a three-dimensional model is constructed.
处理模块130,用于根据环境信息,对三维模型中的部分关键点进行位置调整,和/或对三维模型的纹理信息进行渲染。The processing module 130 is configured to adjust position of some key points in the three-dimensional model and / or render texture information of the three-dimensional model according to environmental information.
作为一种可能的实现方式,参见图4,处理模块130,还包括:As a possible implementation manner, referring to FIG. 4, the processing module 130 further includes:
确定单元131,用于从三维模型的多个关键点中,确定待调整的目标关键点。A determining unit 131 is configured to determine a target key point to be adjusted from a plurality of key points of the three-dimensional model.
调整单元132,用于根据加速度信息,对目标关键点进行位置调整。The adjusting unit 132 is configured to adjust the position of the target key point according to the acceleration information.
渲染单元133,用于对位置调整处理后的三维模型,进行皮肤纹理渲染。The rendering unit 133 is configured to perform skin texture rendering on the three-dimensional model after the position adjustment process.
作为一种可能的实现方式,调整单元132,还用于根据加速度信息指示的加速度值, 以及目标关键点对应的加速度与位移之间的转换系数,确定目标关键点对应的位移值;As a possible implementation manner, the adjusting unit 132 is further configured to determine the displacement value corresponding to the target key point according to the acceleration value indicated by the acceleration information and the conversion coefficient between acceleration and displacement corresponding to the target key point;
根据加速度信息指示的加速度方向,确定目标关键点的调整方向;Determine the adjustment direction of the target key point according to the acceleration direction indicated by the acceleration information;
沿调整方向,移动目标关键点,移动距离符合位移值。Along the adjustment direction, move the key point of the target, and the moving distance conforms to the displacement value.
作为一种可能的实现方式,参见图4,第二获取模块120,还包括:As a possible implementation manner, referring to FIG. 4, the second obtaining module 120 further includes:
第一测量单元121,用于通过加速度传感器测得第一加速度向量。The first measurement unit 121 is configured to measure a first acceleration vector through an acceleration sensor.
第二测量单元122,用于通过重力传感器测得第二加速度向量。The second measurement unit 122 is configured to measure a second acceleration vector through a gravity sensor.
合成单元123,用于对第一加速度向量和第二加速度向量进行合成。A combining unit 123 is configured to combine the first acceleration vector and the second acceleration vector.
第二确定单元124,用于根据合成得到的加速度向量,确定环境信息中的加速度信息。The second determining unit 124 is configured to determine acceleration information in the environment information according to the synthesized acceleration vector.
作为一种可能的实现方式,参见图4,该三维模型的处理装置,还包括:As a possible implementation manner, referring to FIG. 4, the apparatus for processing a three-dimensional model further includes:
读取模块140,用于读取预存的目标关键点与转换系数之间的对应关系;The reading module 140 is configured to read a correspondence between a pre-stored target key point and a conversion coefficient;
其中,目标关键点为多个,各目标关键点对应的转换系数取值,是根据各目标关键点对应三维模型的材质和/或在三维模型中的相对位置确定的。There are multiple target keypoints, and the value of the conversion coefficient corresponding to each target keypoint is determined according to the material of the three-dimensional model corresponding to each target keypoint and / or the relative position in the three-dimensional model.
作为一种可能的实现方式,渲染单元133,还用于根据光照信息确定光强和/或光线角度;进而根据光强和/或光线角度,对三维模型的皮肤纹理进行光效渲染。As a possible implementation manner, the rendering unit 133 is further configured to determine a light intensity and / or a light angle according to the lighting information; and further, perform a light effect rendering on the skin texture of the three-dimensional model according to the light intensity and / or a light angle.
The three-dimensional model processing apparatus of the embodiments of the present disclosure obtains a three-dimensional model of a human body, where the three-dimensional model includes a plurality of key points, a model frame formed by connecting the key points, and texture information covering the model frame; obtains environmental information of the environment in which the three-dimensional model is constructed; and, according to the environmental information, adjusts the positions of some key points of the three-dimensional model and/or renders the texture information of the three-dimensional model. In the present disclosure, adjusting the positions of some key points of the three-dimensional human body model and rendering its skin texture make the model reflect the effects of the environment more realistically, improving the fidelity of the special effects displayed by the three-dimensional model.
需要说明的是,前述对三维模型处理方法实施例的解释说明也适用于该实施例的三维模型处理装置,此处不再赘述。It should be noted that the foregoing explanation of the embodiment of the three-dimensional model processing method is also applicable to the three-dimensional model processing apparatus of this embodiment, and details are not described herein again.
In order to implement the above embodiments, the present disclosure further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the three-dimensional model processing method described in the foregoing embodiments.
FIG. 5 is a schematic diagram of the internal structure of the electronic device 200 in an embodiment. The electronic device 200 includes a processor 220, a memory 230, a display 240, and an input device 250 connected through a system bus 210. The memory 230 of the electronic device 200 stores an operating system and computer-readable instructions. The computer-readable instructions may be executed by the processor 220 to implement the three-dimensional model processing method of the embodiments of the present disclosure. The processor 220 provides computing and control capabilities to support the operation of the entire electronic device 200. The display 240 of the electronic device 200 may be a liquid crystal display, an electronic ink display, or the like, and the input device 250 may be a touch layer covering the display 240, a button, trackball, or touchpad provided on the housing of the electronic device 200, or an external keyboard, touchpad, or mouse. The electronic device 200 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (for example, a smart bracelet, a smart watch, a smart helmet, or smart glasses).
本领域技术人员可以理解,图5中示出的结构,仅仅是与本公开方案相关的部分结构的示意图,并不构成对本公开方案所应用于其上的电子设备200的限定,具体的电子设备200可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。Those skilled in the art can understand that the structure shown in FIG. 5 is only a schematic diagram of a part of the structure related to the solution of the present disclosure, and does not constitute a limitation on the electronic device 200 to which the solution of the present disclosure is applied. The specific electronic device 200 may include more or fewer components than shown in the figure, or some components may be combined, or have different component arrangements.
作为一种可能的实现方式,请参阅图6,提供了本公开实施例的图像处理电路,图像处理电路可利用硬件和/或软件组件实现。As a possible implementation manner, referring to FIG. 6, an image processing circuit according to an embodiment of the present disclosure is provided. The image processing circuit may be implemented by using hardware and / or software components.
As shown in FIG. 6, the image processing circuit specifically includes an image unit 310, a depth information unit 320, and a processing unit 330, where:
图像单元310,用于输出二维的人体图像。The image unit 310 is configured to output a two-dimensional human body image.
深度信息单元320,用于输出深度信息。The depth information unit 320 is configured to output depth information.
本公开实施例中,可以通过图像单元310,获取二维的图像,以及通过深度信息单元320,获取图像对应的深度信息。In the embodiment of the present disclosure, a two-dimensional image may be acquired through the image unit 310, and depth information corresponding to the image may be acquired through the depth information unit 320.
The processing unit 330 is electrically connected to the image unit 310 and the depth information unit 320, respectively, and is configured to identify a target three-dimensional template matching the image according to the two-dimensional image acquired by the image unit and the corresponding depth information acquired by the depth information unit, and to output information associated with the target three-dimensional template.
In the embodiment of the present disclosure, the two-dimensional image acquired by the image unit 310 and the depth information corresponding to the image acquired by the depth information unit 320 may be sent to the processing unit 330, and the processing unit 330 may identify, according to the image and the depth information, the target three-dimensional template matching the image and output information associated with the target three-dimensional template. For the specific implementation, refer to the explanation of the three-dimensional model processing method in the embodiments of FIG. 1 to FIG. 2, which is not repeated here.
进一步地,作为本公开一种可能的实现方式,参见图7,在图6所示实施例的基础上,该图像处理电路还可以包括:Further, as a possible implementation manner of the present disclosure, referring to FIG. 7, based on the embodiment shown in FIG. 6, the image processing circuit may further include:
As a possible implementation, the image unit 310 may specifically include an image sensor 311 and an image signal processing (ISP) processor 312 that are electrically connected, where:
图像传感器311,用于输出原始图像数据。The image sensor 311 is configured to output original image data.
ISP处理器312,用于根据原始图像数据,输出图像。The ISP processor 312 is configured to output an image according to the original image data.
In the embodiment of the present disclosure, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the raw image data to collect image statistics usable for determining one or more control parameters of the image sensor 311, including images in YUV or RGB format. The image sensor 311 may include a color filter array (such as a Bayer filter) and corresponding photosensitive units; the image sensor 311 may obtain the light intensity and wavelength information captured by each photosensitive unit and provide a set of raw image data that can be processed by the ISP processor 312. After processing the raw image data, the ISP processor 312 obtains an image in YUV or RGB format and sends it to the processing unit 330.
When processing the raw image data, the ISP processor 312 may process it pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision.
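As a small illustrative sketch (not part of the disclosure), raw values of different bit depths can be brought to a common range before further processing; the normalization to [0, 1] is an assumption made here for illustration.

    import numpy as np

    def normalize_raw(raw, bit_depth):
        """Map raw sensor values with a bit depth of 8, 10, 12, or 14 bits to [0, 1]."""
        max_value = (1 << bit_depth) - 1
        return raw.astype(np.float32) / max_value

    # Example: a tiny 2x2 patch of 10-bit raw data.
    patch = normalize_raw(np.array([[0, 256], [512, 1023]]), bit_depth=10)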
As a possible implementation, the depth information unit 320 includes a structured light sensor 321 and a depth map generation chip 322 that are electrically connected, where:
结构光传感器321,用于生成红外散斑图。The structured light sensor 321 is configured to generate an infrared speckle pattern.
深度图生成芯片322,用于根据红外散斑图,输出深度信息;深度信息包括深度图。The depth map generation chip 322 is configured to output depth information according to the infrared speckle map; the depth information includes a depth map.
In the embodiment of the present disclosure, the structured light sensor 321 projects speckle structured light onto the subject, captures the structured light reflected by the subject, and images the reflected structured light to obtain an infrared speckle pattern. The structured light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, so that the depth map generation chip 322 determines how the structured light has been deformed from the infrared speckle pattern, determines the depth of the subject accordingly, and obtains a depth map indicating the depth of each pixel in the infrared speckle pattern. The depth map generation chip 322 sends the depth map to the processing unit 330.
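The disclosure does not specify how the depth map generation chip 322 derives depth from the deformation of the speckle pattern. As an illustrative sketch only, the following Python code applies the standard structured-light triangulation relation Z = f·b/d; the focal length, baseline, and per-pixel disparity used here are assumed quantities.

    import numpy as np

    def depth_from_disparity(disparity, focal_length_px, baseline_m, eps=1e-6):
        """Convert the per-pixel disparity between the observed speckle pattern and a
        reference pattern into depth via the triangulation relation Z = f * b / d."""
        disparity = np.asarray(disparity, dtype=float)
        return focal_length_px * baseline_m / np.maximum(disparity, eps)

    # Example: a larger disparity means the point is closer to the sensor.
    depth_map = depth_from_disparity([[20.0, 10.0], [5.0, 2.5]], focal_length_px=580.0, baseline_m=0.075)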
As a possible implementation, the processing unit 330 includes a CPU 331 and a GPU (Graphics Processing Unit) 332 that are electrically connected, where:
CPU331,用于根据标定数据,对齐图像与深度图,根据对齐后的图像与深度图,输出三维模型。The CPU 331 is configured to align the image and the depth map according to the calibration data, and output a three-dimensional model according to the aligned image and the depth map.
GPU332,用于根据三维模型,确定匹配的目标三维模板,输出目标三维模板关联的信息。The GPU 332 is configured to determine a matching target 3D template according to the 3D model, and output information related to the target 3D template.
In the embodiment of the present disclosure, the CPU 331 obtains the human body image from the ISP processor 312 and the depth map from the depth map generation chip 322, and, combined with calibration data obtained in advance, aligns the two-dimensional image with the depth map to determine the depth information corresponding to each pixel in the image. The CPU 331 then performs three-dimensional reconstruction from the depth information and the image to obtain the three-dimensional model.
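As an illustrative sketch only, the following Python code back-projects an aligned depth map into 3D points using pinhole camera intrinsics; the pinhole model and the intrinsic parameters fx, fy, cx, cy are assumptions, since the disclosure only states that the image and depth map are aligned using calibration data.

    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        """Back-project every pixel of a depth map aligned to the color image into a
        3D point using pinhole camera intrinsics (fx, fy, cx, cy)."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth.astype(float)
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

    # Example: a 2x2 depth map in metres.
    points = depth_to_points(np.array([[1.0, 1.0], [2.0, 2.0]]), fx=580.0, fy=580.0, cx=0.5, cy=0.5)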
The CPU 331 sends the three-dimensional model to the GPU 332, so that the GPU 332 executes the three-dimensional model processing method described in the foregoing embodiments on the three-dimensional model, adjusting the positions of some key points in the three-dimensional model and/or rendering the skin texture of the three-dimensional model.
具体地,GPU332可以根据三维模型,确定匹配的目标三维模板,而后根据目标三维模板关联的信息,在图像中进行标注,输出标注信息的图像。Specifically, the GPU 332 may determine a matching target three-dimensional template according to the three-dimensional model, and then perform annotation in the image according to the information associated with the target three-dimensional template, and output an image with the labeled information.
进一步地,图像处理电路还可以包括:显示单元340。Further, the image processing circuit may further include a display unit 340.
显示单元340,与GPU332电性连接,用于对标注信息的图像进行显示。The display unit 340 is electrically connected to the GPU 332 and is configured to display an image with labeled information.
具体地,GPU332处理得到的美化后的图像,可以由显示器340显示。Specifically, the beautified image processed by the GPU 332 may be displayed by the display 340.
可选地,图像处理电路还可以包括:编码器350和存储器360。Optionally, the image processing circuit may further include: an encoder 350 and a memory 360.
本公开实施例中,GPU332处理得到的美化后的图像,还可以由编码器350编码后存储至存储器360,其中,编码器350可由协处理器实现。In the embodiment of the present disclosure, the beautified image processed by the GPU 332 may also be encoded by the encoder 350 and stored in the memory 360, where the encoder 350 may be implemented by a coprocessor.
In one embodiment, there may be multiple memories 360, or the memory 360 may be divided into multiple storage spaces. The image data processed by the GPU 332 may be stored in a dedicated memory or dedicated storage space, which may support DMA (Direct Memory Access). The memory 360 may be configured to implement one or more frame buffers.
下面结合图7,对上述过程进行详细说明。The above process is described in detail below with reference to FIG. 7.
As shown in FIG. 7, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the raw image data to collect image statistics usable for determining one or more control parameters of the image sensor 311, produces an image in YUV or RGB format, and sends it to the CPU 331.
As shown in FIG. 7, the structured light sensor 321 projects speckle structured light onto the subject, captures the structured light reflected by the subject, and images the reflected structured light to obtain an infrared speckle pattern. The structured light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, so that the depth map generation chip 322 determines how the structured light has been deformed from the infrared speckle pattern, determines the depth of the subject accordingly, and obtains a depth map. The depth map generation chip 322 sends the depth map to the CPU 331.
The CPU 331 obtains the two-dimensional human body image from the ISP processor 312 and the depth map from the depth map generation chip 322, and, combined with calibration data obtained in advance, aligns the two-dimensional image with the depth map to determine the depth information corresponding to each pixel in the image. The CPU 331 then performs three-dimensional reconstruction from the depth information and the two-dimensional image to obtain the reconstructed three-dimensional model.
The CPU 331 sends the three-dimensional human body model to the GPU 332, so that the GPU 332 executes the three-dimensional model processing method described in the foregoing embodiments on the model, adjusting the positions of some key points in the three-dimensional model and/or rendering the skin texture of the three-dimensional model. The processed three-dimensional model obtained by the GPU 332 may be displayed on the display 340 and/or encoded by the encoder 350 and stored in the memory 360.
For example, the following are the steps for implementing the control method using the processor 220 in FIG. 5 or the image processing circuit in FIG. 7 (specifically, the CPU 331 and the GPU 332):
获取所述三维模型关联的电子设备所处环境的环境信息;Acquiring environmental information of an environment in which the electronic device associated with the three-dimensional model is located;
The CPU 331 obtains a two-dimensional human body image and the depth information corresponding to the human body image; the CPU 331 performs three-dimensional reconstruction from the depth information and the image to obtain a three-dimensional model of the human body, where the three-dimensional model includes a plurality of key points, a model frame formed by connecting the key points, and skin texture covering the model frame; the CPU 331 further obtains environmental information of the environment in which the electronic device associated with the three-dimensional model is located; and the GPU 332, according to the environmental information, adjusts the positions of some key points in the three-dimensional model and/or renders the skin texture of the three-dimensional model.
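For illustration only, the following Python sketch strings these steps together into a single pass of the control flow: combine the sensor readings, displace the adjustable key points, and compute highlight values for texture rendering. Every function and parameter name here is an assumption of the sketch rather than part of the disclosure.

    import numpy as np

    def process_frame(keypoints, weights, accel, gravity, normals, light_dir, light_intensity):
        """One pass of the sketched control flow: key-point displacement driven by device
        motion (S = w * (G - V*D)) followed by per-vertex highlight values driven by the
        ambient light intensity and angle."""
        accel = np.asarray(accel, dtype=float)
        gravity = np.asarray(gravity, dtype=float)
        motion = gravity - accel  # accel already equals V*D
        adjusted = np.asarray(keypoints, dtype=float) + np.asarray(weights, dtype=float)[:, None] * motion
        light_dir = np.asarray(light_dir, dtype=float)
        light_dir = light_dir / np.linalg.norm(light_dir)
        highlights = light_intensity * np.clip(np.asarray(normals, dtype=float) @ light_dir, 0.0, None)
        return adjusted, highlights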
In order to implement the above embodiments, the present disclosure further provides a computer-readable storage medium having a computer program stored thereon, where, when the instructions in the storage medium are executed by a processor, the three-dimensional model processing method described in the foregoing embodiments is implemented.
In the description of this specification, a description with reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples. In addition, where there is no contradiction, those skilled in the art may combine the different embodiments or examples described in this specification and the features of those embodiments or examples.
此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。在本公开的描述中,“多个”的含义是至少两个,例如两个,三个等,除非另有明确具体的限定。In addition, the terms "first" and "second" are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Therefore, the features defined as "first" and "second" may explicitly or implicitly include at least one of the features. In the description of the present disclosure, the meaning of "a plurality" is at least two, for example, two, three, etc., unless it is specifically and specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process, and the scope of the preferred embodiments of the present disclosure includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved. This should be understood by those skilled in the art to which the embodiments of the present disclosure belong.
The logic and/or steps represented in the flowchart or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) with one or more wirings, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it as necessary, and then stored in a computer memory.
It should be understood that each part of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of the following techniques known in the art, or a combination thereof: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
A person of ordinary skill in the art can understand that all or part of the steps of the methods in the foregoing embodiments can be implemented by a program instructing related hardware. The program can be stored in a computer-readable storage medium, and when executed, the program performs one of the steps of the method embodiments or a combination thereof.
此外,在本公开各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。In addition, each functional unit in each embodiment of the present disclosure may be integrated into one processing module, or each unit may exist separately physically, or two or more units may be integrated into one module. The above integrated modules may be implemented in the form of hardware or software functional modules. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The aforementioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present disclosure have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present disclosure; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present disclosure.

Claims (20)

  1. 一种三维模型的处理方法,其特征在于,所述方法包括以下步骤:A method for processing a three-dimensional model, wherein the method includes the following steps:
    获取人体的三维模型;其中,所述三维模型包括多个关键点,以及多个关键点连接形成的模型框架,以及覆盖所述模型框架的纹理信息;Acquiring a three-dimensional model of a human body; wherein the three-dimensional model includes a plurality of key points, a model frame formed by connecting the plurality of key points, and texture information covering the model frame;
    获取构建所述三维模型时所处环境的环境信息;Acquiring environmental information of an environment in which the three-dimensional model is constructed;
    adjusting, according to the environment information, positions of some key points in the three-dimensional model, and/or rendering texture information of the three-dimensional model.
  2. The processing method according to claim 1, wherein the environment information includes acceleration information, and the adjusting, according to the environment information, positions of some key points in the three-dimensional model and/or rendering texture information of the three-dimensional model comprises:
    从所述三维模型的多个关键点中,确定待调整的目标关键点;Determining a target key point to be adjusted from a plurality of key points of the three-dimensional model;
    根据所述加速度信息,对所述目标关键点进行位置调整;Adjusting the position of the target key point according to the acceleration information;
    performing texture information rendering on the three-dimensional model after the position adjustment.
  3. 根据权利要求2所述的处理方法,其特征在于,所述根据所述加速度信息,对所述目标关键点进行位置调整,包括:The processing method according to claim 2, wherein the adjusting the position of the target keypoint according to the acceleration information comprises:
    根据所述加速度信息指示的加速度值,以及所述目标关键点对应的加速度与位移之间的转换系数,确定所述目标关键点对应的位移值;Determining the displacement value corresponding to the target key point according to the acceleration value indicated by the acceleration information and a conversion coefficient between acceleration and displacement corresponding to the target key point;
    根据所述加速度信息指示的加速度方向,确定所述目标关键点的调整方向;Determining an adjustment direction of the target key point according to an acceleration direction indicated by the acceleration information;
    沿所述调整方向,移动所述目标关键点,移动距离符合所述位移值。Moving the target key point along the adjustment direction, and the moving distance conforms to the displacement value.
  4. 根据权利要求1-3任一项所述的处理方法,其特征在于,所述获取构建所述三维模型时所处环境的环境信息,包括:The processing method according to any one of claims 1-3, wherein the acquiring environmental information of an environment in which the three-dimensional model is constructed comprises:
    通过加速度传感器测得第一加速度向量;Measuring a first acceleration vector through an acceleration sensor;
    通过重力传感器测得第二加速度向量;A second acceleration vector measured by a gravity sensor;
    对所述第一加速度向量和所述第二加速度向量进行合成;Synthesizing the first acceleration vector and the second acceleration vector;
    根据合成得到的加速度向量,确定所述环境信息中的加速度信息。Acceleration information in the environment information is determined according to the acceleration vector obtained through synthesis.
  5. The processing method according to claim 3 or 4, wherein, before determining the displacement value corresponding to the target key point according to the acceleration value indicated by the acceleration information and the conversion coefficient between acceleration and displacement corresponding to the target key point, the method further comprises:
    读取预存的所述目标关键点与所述转换系数之间的对应关系;Reading a pre-stored correspondence between the target keypoint and the conversion coefficient;
    其中,所述目标关键点为多个,各目标关键点对应的转换系数取值,是根据各目标关键点对应三维模型的材质和/或在所述三维模型中的相对位置确定的。There are multiple target keypoints, and the value of the conversion coefficient corresponding to each target keypoint is determined according to the material of the three-dimensional model corresponding to each target keypoint and / or the relative position in the three-dimensional model.
  6. 根据权利要求1-5任一项所述的处理方法,其特征在于,所述环境信息包括光照信息;所述对所述三维模型的纹理信息进行渲染,包括:The processing method according to any one of claims 1 to 5, wherein the environment information includes lighting information; and rendering the texture information of the three-dimensional model includes:
    根据所述光照信息确定光强和/或光线角度;Determining light intensity and / or light angle according to the illumination information;
    根据所述光强和/或光线角度,对所述三维模型的纹理信息进行光效渲染。Performing light effect rendering on the texture information of the three-dimensional model according to the light intensity and / or light angle.
  7. 根据权利要求6所述的处理方法,其特征在于,所述根据所述光强和/或光线角度,对所述三维模型的纹理信息进行光效渲染,包括:The processing method according to claim 6, wherein performing light effect rendering on the texture information of the three-dimensional model according to the light intensity and / or light angle comprises:
    根据所述光强和/或光线角度,调整所述三维模型不同位置处的高光值;Adjusting highlight values at different positions of the three-dimensional model according to the light intensity and / or light angle;
    根据所述高光值,对所述三维模型的纹理信息进行光效渲染。Light effect rendering is performed on the texture information of the three-dimensional model according to the highlight value.
  8. 根据权利要求1-7任一项所述的处理方法,其特征在于,所述获取人体的三维模型,包括:The processing method according to any one of claims 1-7, wherein the acquiring a three-dimensional model of a human body comprises:
    将深度信息与人体二维图像对应的色彩信息进行融合,得到人体三维模型。The depth information is fused with the color information corresponding to the two-dimensional image of the human body to obtain a three-dimensional model of the human body.
  9. 根据权利要求8所述的处理方法,其特征在于,所述将深度信息与人体二维图像对应的色彩信息进行融合,得到人体三维模型,包括:The processing method according to claim 8, wherein the fusing the depth information with the color information corresponding to the two-dimensional image of the human body to obtain a three-dimensional model of the human body comprises:
    从所述深度信息提取出人体的第一关键点,以及从所述色彩信息中提取出人体的第二关键点;Extracting a first key point of the human body from the depth information, and extracting a second key point of the human body from the color information;
    对所述第一关键点和所述第二关键点进行配准和融合处理,得到融合后的关键点;Performing registration and fusion processing on the first key point and the second key point to obtain a fused key point;
    根据所述融合后的关键点,生成所述人体三维模型。According to the fused key points, the three-dimensional model of the human body is generated.
  10. 一种三维模型的处理装置,其特征在于,所述装置包括:A three-dimensional model processing device, wherein the device includes:
    第一获取模块,用于获取人体的三维模型;其中,所述三维模型包括多个关键点,以及多个关键点连接形成的模型框架,以及覆盖所述模型框架的纹理信息;A first acquisition module for acquiring a three-dimensional model of a human body; wherein the three-dimensional model includes a plurality of key points, a model frame formed by a plurality of key points connected, and texture information covering the model frame;
    第二获取模块,用于获取构建所述三维模型时所处环境的环境信息;A second acquisition module, configured to acquire environmental information of an environment in which the three-dimensional model is constructed;
    处理模块,用于根据所述环境信息,对所述三维模型中的部分关键点进行位置调整,和/或对所述三维模型纹理信息进行渲染。A processing module, configured to adjust position of some key points in the three-dimensional model and / or render texture information of the three-dimensional model according to the environment information.
  11. 根据权利要求10所述的处理装置,其特征在于,所述环境信息包括加速度信息;所述处理模块,包括:The processing device according to claim 10, wherein the environmental information includes acceleration information; and the processing module includes:
    确定单元,用于从所述三维模型的多个关键点中,确定待调整的目标关键点;A determining unit, configured to determine a target key point to be adjusted from a plurality of key points of the three-dimensional model;
    调整单元,用于根据所述加速度信息,对所述目标关键点进行位置调整;An adjustment unit, configured to adjust a position of the target key point according to the acceleration information;
    渲染单元,用于对位置调整处理后的三维模型,进行纹理信息渲染。The rendering unit is used to render texture information to the three-dimensional model after the position adjustment process.
  12. 根据权利要求11所述的处理装置,其特征在于,所述调整单元,用于:The processing device according to claim 11, wherein the adjustment unit is configured to:
    根据所述加速度信息指示的加速度值,以及所述目标关键点对应的加速度与位移之间的转换系数,确定所述目标关键点对应的位移值;Determining the displacement value corresponding to the target key point according to the acceleration value indicated by the acceleration information and a conversion coefficient between acceleration and displacement corresponding to the target key point;
    根据所述加速度信息指示的加速度方向,确定所述目标关键点的调整方向;Determining an adjustment direction of the target key point according to an acceleration direction indicated by the acceleration information;
    沿所述调整方向,移动所述目标关键点,移动距离符合所述位移值。Moving the target key point along the adjustment direction, and the moving distance conforms to the displacement value.
  13. 根据权利要求10-12任一项所述的处理装置,其特征在于,所述第二获取模块,包括:The processing device according to any one of claims 10-12, wherein the second acquisition module comprises:
    第一测量单元,用于通过加速度传感器测得第一加速度向量;A first measurement unit, configured to measure a first acceleration vector through an acceleration sensor;
    第二测量单元,用于通过重力传感器测得第二加速度向量;A second measurement unit, configured to measure a second acceleration vector through a gravity sensor;
    合成单元,用于对所述第一加速度向量和所述第二加速度向量进行合成;A synthesis unit, configured to synthesize the first acceleration vector and the second acceleration vector;
    第二确定单元,用于根据合成得到的加速度向量,确定所述环境信息中的加速度信息。The second determining unit is configured to determine acceleration information in the environment information according to the synthesized acceleration vector.
  14. 根据权利要求12或13所述的处理装置,其特征在于,所述处理装置,还包括:The processing device according to claim 12 or 13, wherein the processing device further comprises:
    读取模块,用于读取预存的所述目标关键点与所述转换系数之间的对应关系;A reading module, configured to read a corresponding relationship between the target keypoint and the conversion coefficient that are stored in advance;
    其中,所述目标关键点为多个,各目标关键点对应的转换系数取值,是根据各目标关键点对应三维模型的材质和/或在所述三维模型中的相对位置确定的。There are multiple target keypoints, and the value of the conversion coefficient corresponding to each target keypoint is determined according to the material of the three-dimensional model corresponding to each target keypoint and / or the relative position in the three-dimensional model.
  15. 根据权利要求10-14任一项所述的处理装置,其特征在于,所述环境信息包括光照信息;所述渲染单元,用于:The processing device according to any one of claims 10 to 14, wherein the environment information includes lighting information; and the rendering unit is configured to:
    根据所述光照信息确定光强和/或光线角度;Determining light intensity and / or light angle according to the illumination information;
    根据所述光强和/或光线角度,对所述三维模型的纹理信息进行光效渲染。Performing light effect rendering on the texture information of the three-dimensional model according to the light intensity and / or light angle.
  16. 根据权利要求15所述的处理装置,其特征在于,所述渲染单元,还用于:The processing device according to claim 15, wherein the rendering unit is further configured to:
    根据所述光强和/或光线角度,调整所述三维模型不同位置处的高光值;Adjusting highlight values at different positions of the three-dimensional model according to the light intensity and / or light angle;
    根据所述高光值,对所述三维模型的纹理信息进行光效渲染。Light effect rendering is performed on the texture information of the three-dimensional model according to the highlight value.
  17. 根据权利要求10-16任一项所述的处理装置,其特征在于,所述获取模块,包括:The processing device according to any one of claims 10 to 16, wherein the acquisition module comprises:
    融合单元,用于将深度信息与人体二维图像对应的色彩信息进行融合,得到人体三维模型。A fusion unit is used to fuse the depth information with the color information corresponding to the two-dimensional image of the human body to obtain a three-dimensional model of the human body.
  18. 根据权利要求17所述的处理装置,其特征在于,所述融合单元,用于:The processing device according to claim 17, wherein the fusion unit is configured to:
    从所述深度信息提取出人体的第一关键点,以及从所述色彩信息中提取出人体的第二关键点;Extracting a first key point of the human body from the depth information, and extracting a second key point of the human body from the color information;
    对所述第一关键点和所述第二关键点进行配准和融合处理,得到.融合后的关键点;Performing registration and fusion processing on the first key point and the second key point to obtain a key point after fusion;
    根据所述融合后的关键点,生成所述人体三维模型。According to the fused key points, the three-dimensional model of the human body is generated.
  19. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the three-dimensional model processing method according to any one of claims 1-9.
  20. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时,实现如权利要求1-9中任一所述的三维模型的处理方法。A computer-readable storage medium having stored thereon a computer program, characterized in that when the program is executed by a processor, a method for processing a three-dimensional model according to any one of claims 1-9 is implemented.
PCT/CN2019/090557 2018-08-16 2019-06-10 Three-dimensional model processing method and apparatus, electronic device and readable storage medium WO2020034738A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810934594.1 2018-08-16
CN201810934594.1A CN109285214A (en) 2018-08-16 2018-08-16 Processing method, device, electronic equipment and the readable storage medium storing program for executing of threedimensional model

Publications (1)

Publication Number Publication Date
WO2020034738A1 true WO2020034738A1 (en) 2020-02-20

Family

ID=65183565

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/090557 WO2020034738A1 (en) 2018-08-16 2019-06-10 Three-dimensional model processing method and apparatus, electronic device and readable storage medium

Country Status (2)

Country Link
CN (1) CN109285214A (en)
WO (1) WO2020034738A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109285214A (en) * 2018-08-16 2019-01-29 Oppo广东移动通信有限公司 Processing method, device, electronic equipment and the readable storage medium storing program for executing of threedimensional model
CN110717867B (en) * 2019-09-04 2023-07-11 北京达佳互联信息技术有限公司 Image generation method and device, electronic equipment and storage medium
CN111063024A (en) * 2019-12-11 2020-04-24 腾讯科技(深圳)有限公司 Three-dimensional virtual human driving method and device, electronic equipment and storage medium
CN112426716A (en) * 2020-11-26 2021-03-02 网易(杭州)网络有限公司 Three-dimensional hair model processing method, device, equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104461013A (en) * 2014-12-25 2015-03-25 中国科学院合肥物质科学研究院 Human body movement reconstruction and analysis system and method based on inertial sensing units
CN104966312A (en) * 2014-06-10 2015-10-07 腾讯科技(深圳)有限公司 Method for rendering 3D model, apparatus for rendering 3D model and terminal equipment
US20170053422A1 (en) * 2015-08-17 2017-02-23 Fabien CHOJNOWSKI Mobile device human body scanning and 3d model creation and analysis
CN106502388A (en) * 2016-09-26 2017-03-15 惠州Tcl移动通信有限公司 A kind of interactive movement technique and head-wearing type intelligent equipment
CN107274466A (en) * 2017-05-31 2017-10-20 珠海金山网络游戏科技有限公司 The methods, devices and systems that a kind of real-time double is caught
CN109285214A (en) * 2018-08-16 2019-01-29 Oppo广东移动通信有限公司 Processing method, device, electronic equipment and the readable storage medium storing program for executing of threedimensional model

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013029675A1 (en) * 2011-08-31 2013-03-07 Metaio Gmbh Method for estimating a camera motion and for determining a three-dimensional model of a real environment
US9025859B2 (en) * 2012-07-30 2015-05-05 Qualcomm Incorporated Inertial sensor aided instant autofocus
CN104424630A (en) * 2013-08-20 2015-03-18 华为技术有限公司 Three-dimensional reconstruction method and device, and mobile terminal
CN107705355A (en) * 2017-09-08 2018-02-16 郭睿 3D human body modeling method and device based on multiple pictures
CN107944420B (en) * 2017-12-07 2020-10-27 北京旷视科技有限公司 Illumination processing method and device for face image

Also Published As

Publication number Publication date
CN109285214A (en) 2019-01-29

Similar Documents

Publication Publication Date Title
WO2020034738A1 (en) Three-dimensional model processing method and apparatus, electronic device and readable storage medium
US10832039B2 (en) Facial expression detection method, device and system, facial expression driving method, device and system, and storage medium
CN104937635B Model-based multi-hypothesis target tracking device
WO2020035002A1 (en) Methods and devices for acquiring 3d face, and computer readable storage media
CN1320423C (en) Image display apparatus and method
CN109859305B (en) Three-dimensional face modeling and recognizing method and device based on multi-angle two-dimensional face
US11069151B2 (en) Methods and devices for replacing expression, and computer readable storage media
US10489956B2 (en) Robust attribute transfer for character animation
WO2020034785A1 (en) Method and device for processing three-dimensional model
JP2020526818A (en) Methods and systems for performing simultaneous localization and mapping using convolutional image transformation
AU2018214005A1 (en) Systems and methods for generating a 3-D model of a virtual try-on product
WO2020034743A1 (en) Three-dimensional model processing method and apparatus, electronic device, and readable storage medium
CN106325509A (en) Three-dimensional gesture recognition method and system
WO2019196745A1 (en) Face modelling method and related product
CN113822977A (en) Image rendering method, device, equipment and storage medium
WO2010038693A1 (en) Information processing device, information processing method, program, and information storage medium
CN109949900B (en) Three-dimensional pulse wave display method and device, computer equipment and storage medium
CN109242760B (en) Face image processing method and device and electronic equipment
CN114373044A (en) Method, device, computing equipment and storage medium for generating three-dimensional face model
EP4049245B1 (en) Augmented reality 3d reconstruction
CN108549484A (en) Man-machine interaction method and device based on human body dynamic posture
JP2013231607A (en) Calibration tool display device, calibration tool display method, calibration device, calibration method, calibration system and program
US20110025685A1 (en) Combined geometric and shape from shading capture
Jain et al. Human computer interaction–Hand gesture recognition
JP2023124678A (en) Image processing device, image processing method, and image processing program

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 19849653

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 19849653

Country of ref document: EP

Kind code of ref document: A1