WO2021098143A1 - Image processing method and apparatus, image processing device, and storage medium - Google Patents

Image processing method and apparatus, image processing device, and storage medium Download PDF

Info

Publication number
WO2021098143A1
Authority
WO
WIPO (PCT)
Prior art keywords
deformation
target
model
image
parameter
Prior art date
Application number
PCT/CN2020/086695
Other languages
English (en)
French (fr)
Inventor
李通 (Li Tong)
刘文韬 (Liu Wentao)
钱晨 (Qian Chen)
Original Assignee
北京市商汤科技开发有限公司 (Beijing SenseTime Technology Development Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing SenseTime Technology Development Co., Ltd. (北京市商汤科技开发有限公司)
Priority to SG11202104071TA priority Critical patent/SG11202104071TA/en
Priority to KR1020207035889A priority patent/KR102406438B1/ko
Priority to JP2020570014A priority patent/JP2022512262A/ja
Priority to US17/131,879 priority patent/US11450068B2/en
Publication of WO2021098143A1 publication Critical patent/WO2021098143A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/18Image warping, e.g. rearranging pixels individually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/257Colour aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals

Definitions

  • the present disclosure relates to the field of image processing technology, and in particular to an image processing method and device, image processing equipment and storage medium.
  • a captured image contains the imaging of a captured object (the target), and the imaging effect may need to be adjusted by deforming the target, for example, performing body beautification, facial beautification, and so on.
  • in some cases, however, the deformation effect of the image is not ideal, and unnecessary deformation beyond the desired deformation may even be produced after the deformation.
  • the embodiments of the present disclosure expect to provide an image processing method and device, image processing equipment, and storage medium.
  • a first aspect of the embodiments of the present disclosure provides an image processing method, including: obtaining a first 3D model of a target in three-dimensional (3D) space based on a first two-dimensional (2D) image containing the target; obtaining a 3D deformation parameter, and transforming the first 3D model into a second 3D model based on the 3D deformation parameter; mapping the first 3D model to 2D space to obtain first 2D coordinates, and mapping the second 3D model to 2D space to obtain second 2D coordinates; and deforming the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates to obtain a second 2D image containing the deformed target.
  • the obtaining of a first 3D model of the target in 3D space based on the first 2D image containing the target includes: extracting, through a human body mesh recovery model, a first reconstruction parameter for reconstructing the 3D model of the target from the first 2D image; and reconstructing the first 3D model of the target in 3D space using the extracted first reconstruction parameter.
  • the first reconstruction parameter includes at least one of the following parameters: a first joint point parameter of the target; a first morphological parameter of the target; the first 2D The camera parameters of the image.
  • the method further includes: extracting a second joint point parameter of the target from the first 2D image through a human body detection model; wherein the second joint points represented by the second joint point parameter overlap with part of the first joint points represented by the first joint point parameter; the reconstructing of the first 3D model of the target in 3D space using the extracted first reconstruction parameter includes: replacing, in the first reconstruction parameter, the first joint point parameter of the first joint points that overlap with the second joint points with the second joint point parameter to form a second reconstruction parameter; and reconstructing the first 3D model of the target in 3D space based on the second reconstruction parameter.
  • the mapping of the first 3D model to 2D space to obtain first 2D coordinates, and of the second 3D model to 2D space to obtain second 2D coordinates, includes: mapping, according to the camera parameters corresponding to the first 2D image, the first 3D model and the second 3D model to 2D space respectively to obtain the first 2D coordinates and the second 2D coordinates.
  • the method further includes: acquiring a first 2D deformation parameter of the first 2D image; the obtaining of the 3D deformation parameter includes: obtaining the 3D deformation parameter according to the first 2D deformation parameter and a mapping relationship between 2D space and 3D space.
  • the method further includes: acquiring contour point parameters of the target in the first 2D image; and determining, based on the contour point parameters, a connection relationship between at least two contour points and a deformation direction represented by the connection relationship; the deforming of the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates to obtain a second 2D image containing the deformed target includes: deforming the target in the first 2D image in the deformation direction based on the first 2D coordinates and the second 2D coordinates to obtain the second 2D image containing the deformed target.
  • the determining, based on the contour point parameters, of a connection relationship between at least two contour points and of the deformation direction represented by the connection relationship includes: determining, based on the contour point parameters, the deformation direction according to a first connection direction of contour points of at least two parts symmetrically distributed in the target; wherein the deformation direction includes at least one of the following: a first deformation direction parallel to the first connection direction; a second deformation direction perpendicular to the first connection direction.
  • the determining, based on the contour point parameters, of a connection relationship between at least two contour points and of the deformation direction represented by the connection relationship includes: determining, based on the contour point parameters, the deformation direction according to a second connection direction of at least two contour points symmetrically distributed about a center point or center line of a predetermined part of the target, wherein the deformation direction includes: a third deformation direction parallel to the second connection direction, and/or a fourth deformation direction perpendicular to the second connection direction.
  • the deforming of the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates to obtain a second 2D image containing the deformed target includes: determining a second 2D deformation parameter that moves the first 2D coordinates to the second 2D coordinates along the deformation direction; and deforming the target in the first 2D image based on the second 2D deformation parameter to obtain the second 2D image containing the deformed target.
  • the transforming of the first 3D model into the second 3D model includes: changing, based on the 3D deformation parameter, the coordinates of at least part of the contour points of the first 3D model to obtain the second 3D model.
  • a second aspect of the embodiments of the present disclosure provides an image processing device, including:
  • the first obtaining module is configured to obtain a first 3D model of the target in a three-dimensional 3D space based on the first two-dimensional 2D image containing the target;
  • the second acquisition module is configured to acquire 3D deformation parameters, and based on the 3D deformation parameters, transform the first 3D model into a second 3D model;
  • a mapping module configured to map the first 3D model to a 2D space to obtain a first 2D coordinate, and to map the second 3D model to a 2D space to obtain a second 2D coordinate;
  • the deforming module is configured to deform the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates to obtain a second 2D image containing the deformed target.
  • the first acquisition module is configured to extract, through a human body mesh recovery model, a first reconstruction parameter for reconstructing the 3D model of the target from the first 2D image, and to reconstruct the first 3D model of the target in 3D space using the extracted first reconstruction parameter.
  • the first reconstruction parameter includes at least one of the following parameters: a first joint point parameter of the target; a first morphological parameter of the target; the first 2D The camera parameters of the image.
  • the device further includes: an extraction module configured to extract a second joint point parameter of the target from the first 2D image through a human body detection model; wherein, the There is an overlap between the second joint point represented by the second joint point parameter and a part of the first joint point represented by the first joint point parameter;
  • the first acquisition module is configured to replace the first joint point parameter of the part of the first joint point that overlaps with the second joint point in the first reconstruction parameter with the second joint point parameter to form A second reconstruction parameter; based on the second reconstruction parameter, reconstruct a first 3D model of the target in a 3D space.
  • the mapping module is configured to respectively map the first 3D model and the second 3D model to a 2D space according to camera parameters corresponding to the first 2D image Obtain the first 2D coordinate and the second 2D coordinate.
  • the device further includes: a third acquisition module configured to acquire a first 2D deformation parameter of the first 2D image;
  • the second acquisition module is configured to obtain the 3D deformation parameter according to the first 2D deformation parameter and the mapping relationship between the 2D space and the 3D space.
  • the device further includes: a fourth acquiring module configured to acquire contour point parameters of the target in the first 2D image;
  • a determining module configured to determine a connection relationship between at least two contour points based on the contour point parameters and determine a deformation direction represented by the connection relationship
  • the deformation module is configured to deform the target in the first 2D image in the deformation direction based on the first 2D coordinates and the second 2D coordinates to obtain a second 2D image containing the deformed target.
  • the determining module is configured to determine the first connection direction of at least two local contour points symmetrically distributed in the target based on the contour point parameters Deformation direction; wherein the deformation direction includes at least one of the following directions: a first deformation direction parallel to the first connection direction; and a second deformation direction perpendicular to the first connection direction.
  • the determining module is configured to determine, based on the contour point parameters, the deformation direction according to a second connection direction of at least two contour points symmetrically distributed about a center point or center line of a predetermined part of the target, wherein the deformation direction includes: a third deformation direction parallel to the second connection direction, and/or a fourth deformation direction perpendicular to the second connection direction.
  • the deformation module is configured to determine a second 2D deformation parameter that moves the first 2D coordinate to a second 2D coordinate along the deformation direction; based on the second 2D deformation Parameter, performing the deformation of the target in the first 2D image to obtain a second 2D image containing the deformed target.
  • the second acquisition module is configured to change the coordinates of at least part of the contour points of the first 3D model to obtain the second 3D model based on the 3D deformation parameter.
  • a third aspect of the embodiments of the present disclosure provides an image processing device, the image processing device including:
  • Memory for storing computer executable instructions
  • the processor is connected to the memory, and is configured to implement the image processing method provided by any of the foregoing technical solutions by executing the computer-executable instructions.
  • a fourth aspect of the embodiments of the present disclosure provides a computer storage medium that stores computer-executable instructions; after the computer-executable instructions are executed by a processor, the image processing method provided by any of the foregoing technical solutions can be implemented.
  • with these technical solutions, image deformation is no longer performed directly in the 2D plane; instead, the target in the first 2D image is converted into a first 3D model in 3D space.
  • the first 3D model is deformed directly using the obtained 3D deformation parameters to obtain the deformed second 3D model; the first 3D model and the second 3D model are then respectively mapped to 2D space to obtain the first 2D coordinates and the second 2D coordinates mapped back to the 2D plane, and the deformation of the target in the first 2D image is performed based on the first 2D coordinates and the second 2D coordinates; in this way, compared with deforming the target directly in the 2D plane, unnecessary deformation is reduced and the deformation effect of the 2D image is improved.
  • FIG. 1 is a schematic flowchart of a first image processing method provided by an embodiment of the disclosure
  • FIG. 2 is a schematic diagram of 14 human body joint points provided by an embodiment of the disclosure.
  • FIG. 3 is a schematic diagram of 25 human body joint points provided by an embodiment of the disclosure.
  • FIG. 4 is a schematic flowchart of a second image processing method provided by an embodiment of the disclosure.
  • FIG. 5A is a schematic diagram of the effect of a 2D image provided by an embodiment of the present disclosure.
  • FIG. 5B is a schematic diagram of extracted human joint points according to an embodiment of the disclosure.
  • FIG. 5C is a schematic diagram of extracted contour points of a human body according to an embodiment of the disclosure.
  • FIG. 5D is a schematic diagram of a first deformation direction provided by an embodiment of the present disclosure.
  • FIG. 5E is a schematic diagram of a second deformation direction provided by an embodiment of the disclosure.
  • FIG. 6A is a schematic diagram of the effect of a 3D model provided by an embodiment of the present disclosure.
  • FIG. 6B is a schematic diagram of the effect of another 3D model provided by an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the disclosure.
  • FIG. 8 is a schematic structural diagram of an image processing device provided by an embodiment of the disclosure.
  • this embodiment provides an image processing method, including:
  • S110: Obtain a first 3D model of the target in three-dimensional (3D) space based on a first two-dimensional (2D) image containing the target;
  • S120: Obtain 3D deformation parameters, and transform the first 3D model into a second 3D model based on the 3D deformation parameters;
  • S130: Map the first 3D model to 2D space to obtain first 2D coordinates, and map the second 3D model to 2D space to obtain second 2D coordinates;
  • S140: Deform the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates to obtain a second 2D image containing the deformed target.
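To make the four steps concrete, the following is a minimal runnable Python sketch of the pipeline. Every function body is a toy stand-in assumed for illustration (random vertices, a global-shift warp), not the implementation claimed by the patent.

```python
import numpy as np

def reconstruct_first_3d_model(image_2d):
    # S110: stand-in for HMR-based reconstruction; a real system would regress
    # SMPL parameters from the image. Here we return fixed toy vertices.
    rng = np.random.default_rng(0)
    verts = rng.normal(size=(100, 3))
    verts[:, 2] += 5.0  # push vertices in front of the camera
    return verts

def apply_3d_deformation(vertices, scale, offset):
    # S120: transform the first 3D model into the second 3D model.
    return vertices * scale + offset

def project_to_2d(vertices, focal=1000.0, center=(256.0, 256.0)):
    # S130: pinhole projection of 3D vertices to 2D pixel coordinates.
    return focal * vertices[:, :2] / vertices[:, 2:3] + np.asarray(center)

def warp_target(image_2d, coords_src, coords_dst):
    # S140: toy warp that shifts the image by the mean control displacement;
    # a real system would use a local point-driven warp instead.
    shift = np.round(np.mean(coords_dst - coords_src, axis=0)).astype(int)
    return np.roll(image_2d, shift=(shift[1], shift[0]), axis=(0, 1))

image = np.zeros((512, 512, 3), dtype=np.uint8)
first_3d = reconstruct_first_3d_model(image)
second_3d = apply_3d_deformation(first_3d, scale=np.array([0.9, 1.0, 1.0]), offset=0.0)
first_2d = project_to_2d(first_3d)
second_2d = project_to_2d(second_3d)
second_image = warp_target(image, first_2d, second_2d)
```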
  • the image processing device may include various terminal devices, and the terminal devices include: mobile phones or wearable devices.
  • the terminal device may also include: a vehicle-mounted terminal device or a fixed terminal device fixed in a certain place and dedicated to image collection and photography.
  • the image processing device may further include a server, for example, a local server or a cloud server located in a cloud platform that provides image processing services.
  • the 2D image may be captured by a 2D camera.
  • the 2D image may be a Red Green Blue (RGB) image or a YUV image. YUV is a color encoding method in which "Y" represents luminance (Luma), that is, the grayscale value, while "U" and "V" represent chrominance (Chroma), which describe the color and saturation of the image.
  • this embodiment does not directly deform the target in the 2D image, but obtains the 3D model of the target in the 3D space based on the 2D image, and the aforementioned 3D model is denoted as the first 3D model.
  • FIG. 6A and FIG. 6B are schematic diagrams of the effects of two first 3D models corresponding to different targets.
  • the 3D deformation parameters are obtained based on the first 3D model; for example, the 3D deformation parameters may include: transformation parameters for one or more 3D coordinates on the first 3D model in the 3D space.
  • the transformation parameters may include at least one of the following: scale values of different parts of the desired 3D model, size values of different parts of the desired 3D model, and the direction in which the first 3D model is transformed into the second 3D model and/or the magnitude of deformation in the corresponding deformation direction.
  • the scale value of different parts of the desired 3D model is a scalar, which represents the scale of different parts.
  • the ratio value may be the ratio between the length of the upper body of the human body and the length of the legs.
  • the size values of different parts of the desired 3D model may include, for example, size values such as the leg length value, waist width value, and height value of the human body.
  • the method further includes: acquiring a first 2D deformation parameter of the first 2D image; the acquiring of the 3D deformation parameter may include: obtaining the 3D deformation parameter according to the first 2D deformation parameter and the mapping relationship between 2D space and 3D space.
  • the user can manually move the 2D coordinates of one or more parts of the target on the first 2D image; the image processing device can determine the first 2D deformation parameter according to the user operation input by the user.
  • the first 2D deformation parameter may include: contour points that undergo 2D coordinate transformation on the first 2D image in the 2D space and coordinate transformation values of these contour points. According to the mapping relationship between the 2D space and the 3D space, the first 2D deformation parameter is converted into a 3D deformation parameter.
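One way to realize this 2D-to-3D conversion is sketched below, assuming a pinhole camera and per-point depth taken from the first 3D model; the helper name and signature are hypothetical, not from the patent.

```python
import numpy as np

def lift_2d_displacement_to_3d(uv_src, uv_dst, depth, focal, center):
    """Back-project matched 2D contour points at known depth; the 3D
    displacement between the two lifted point sets serves as a 3D
    deformation parameter."""
    def backproject(uv):
        xy = (uv - center) / focal          # normalized image coordinates
        return np.column_stack([xy * depth[:, None], depth])
    return backproject(uv_dst) - backproject(uv_src)

uv_src = np.array([[300.0, 400.0]])
uv_dst = np.array([[290.0, 400.0]])         # user dragged a waist point left
delta_3d = lift_2d_displacement_to_3d(
    uv_src, uv_dst, depth=np.array([2.0]), focal=1000.0,
    center=np.array([256.0, 256.0]))
```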
  • the 3D deformation parameter may be a deformation parameter generated according to a user instruction received from a human-computer interaction interface. For example, after the first 3D model is constructed, one plane of the first 3D model can be displayed on the 2D interface at a time, and different planes of the first 3D model can be displayed by rotating or moving the model; in this case, the 3D deformation parameters can also be determined through various user operations targeting the different planes of the first 3D model.
  • at least one user instruction indicating a deformation effect or a deformation amount is received on the human-computer interaction interface, and the user instruction is quantified to obtain the 3D deformation parameter.
  • for example, user A expects his photo to show a waist circumference of 2 feet after slimming.
  • the image processing device maps the first 2D image to 3D space to obtain the first 3D model, estimates the actual waist circumference of the user from the model, and then obtains the amount by which the waist circumference needs to be deformed in 3D space based on the estimated actual waist circumference and the expected waist circumference.
  • the displacement of each coordinate of the waist surface on the first 3D model is determined, so as to obtain the 3D deformation parameters.
  • the 3D deformation parameters obtained at this time can be used to convert the first 3D model into the second 3D model.
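A hedged sketch of the circumference estimate: assuming an ordered ring of waist vertices can be extracted from the first 3D model (the extraction itself is not shown), the perimeter of that ring approximates the actual waist circumference, from which the required deformation amount follows.

```python
import numpy as np

def ring_circumference(ring_vertices):
    """Perimeter of an ordered closed loop of 3D vertices (e.g., a waist ring)."""
    closed = np.vstack([ring_vertices, ring_vertices[:1]])
    return float(np.linalg.norm(np.diff(closed, axis=0), axis=1).sum())

# toy example: a circle of radius r has circumference ~ 2*pi*r
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
waist_ring = np.column_stack(
    [0.3 * np.cos(theta), np.full(64, 1.0), 0.3 * np.sin(theta)])
actual = ring_circumference(waist_ring)
desired = 0.8 * actual              # e.g., the user requests a slimmer waist
radial_scale = desired / actual     # scale to apply to the waist vertices
```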
  • the first 3D model is transformed based on the 3D deformation parameters to generate a second 3D model that is different from the first 3D model.
  • the difference between the second 3D model and the first 3D model is that the 3D coordinates of the contour points on the surface of the second 3D model are at least partially different from the coordinates of the contour points on the surface of the first 3D model.
  • the form corresponding to the second 3D model is different from the form corresponding to the first 3D model.
  • the target is a human body as an example
  • the shape of the human body corresponding to the second 3D model is different from the shape of the human body corresponding to the first 3D model.
  • the human body corresponding to the first 3D model is heavier than the human body corresponding to the second 3D model, and so on.
  • the first 3D model and the second 3D model are obtained after step S120.
  • step S130 in this embodiment may include: projecting the first 3D model and the second 3D model into the 2D plane, respectively, to obtain the first 2D coordinates corresponding to the first 3D model and the second 2D coordinates corresponding to the second 3D model.
  • step S130 may further include: obtaining a second 2D deformation parameter according to the first 2D coordinate and the second 2D coordinate, and the second 2D deformation parameter can convert the first 2D coordinate into the second 2D coordinate.
  • the coordinate values of the first 2D coordinate and the second 2D coordinate are compared to obtain the difference between the first 2D coordinate and the second 2D coordinate, and the difference may be used as the second 2D deformation parameter.
  • for example, in a breast augmentation scenario, the first 2D coordinates are obtained by mapping the contour points of the breast surface of the first 3D model back to 2D space, and the second 2D coordinates are obtained by mapping the contour points of the breast surface of the second 3D model (that is, the first 3D model after breast augmentation) back to 2D space.
  • the 2D coordinates of the two breast surfaces are compared to obtain the second 2D deformation parameter.
  • alternatively, a fit can be performed between the first 2D coordinates and the corresponding second 2D coordinates to obtain a transformation matrix that transforms the first 2D coordinates into the second 2D coordinates; the transformation matrix can be used directly as the second 2D deformation parameter, and based on the second 2D deformation parameter, the target in the 2D image is deformed, so as to obtain a second 2D image containing the deformed target.
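A least-squares affine fit is one plausible form of such a fitting; the patent does not fix the transformation model, so the sketch below is an assumption.

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Solve dst ~ src @ A.T + t by least squares; returns a 2x3 matrix [A | t]."""
    n = src.shape[0]
    homo = np.hstack([src, np.ones((n, 1))])             # homogeneous source points
    params, *_ = np.linalg.lstsq(homo, dst, rcond=None)  # (3, 2) solution
    return params.T                                       # (2, 3) transformation matrix

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src * 0.9 + np.array([2.0, -1.0])                   # slight shrink plus translation
M = fit_affine_2d(src, dst)   # usable as the second 2D deformation parameter
```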
  • the second 2D deformation parameter further includes at least one of the following parameters: a recombination algorithm or recombination parameter of each pixel in the target, a color change algorithm of each pixel in the target, and the like.
  • the deformation of the target in the 2D image includes, but is not limited to, at least one of the following: fat and thin deformation of the target; height deformation of the target; shape deformation of the facial features of the target.
  • taking a human body as an example, the target deformation may include fat/thin deformation of the body, height deformation of the human body, deformation of the facial features, changes to the length of the hands or feet, and so on.
  • when the target in the 2D plane is deformed, the image is no longer deformed directly in the 2D plane; instead, the target in the first 2D image is converted into a first 3D model in 3D space, and the first 3D model is deformed directly using the obtained 3D deformation parameters to obtain the deformed second 3D model. After the first 3D model and the second 3D model are respectively mapped to 2D space, the first 2D coordinates and the second 2D coordinates mapped back to the 2D plane are obtained, and the deformation of the target in the first 2D image is performed based on the first 2D coordinates and the second 2D coordinates. In this way, compared with deforming the target directly in the 2D plane, unnecessary deformation is reduced, which improves the deformation effect of the 2D image.
  • a first 3D model composed of multiple polygonal meshes in the 3D space may be obtained based on the first 2D image containing the target.
  • the first 3D model is a three-dimensional model located in a 3D space
  • the first 3D model includes a large number of key points
  • the plurality of key points are connected to form a polygonal grid.
  • a polygon mesh, also known simply as a mesh, is formed by connecting the key points. The triangular mesh is one type of polygon mesh: three adjacent key points among the many key points can be connected to form a triangular face, yielding a first 3D model composed of multiple triangular faces.
  • this mesh-based first 3D model can realistically simulate, in 3D space, the captured object corresponding to the target, achieving a highly faithful restoration of the captured object in 3D space and thereby ensuring the deformation effect of the image.
  • the human body mesh recovery (HMR) model may be used to extract, from the first 2D image, the first reconstruction parameter used to reconstruct the 3D model of the target; the first 3D model of the target in 3D space is then reconstructed using the extracted first reconstruction parameter.
  • the first reconstruction parameter may include at least one of the following parameters: the first joint point parameter of the target, the first morphological parameter of the target, the camera parameter of the first 2D image, and so on.
  • the first 3D model can be accurately constructed based on the camera parameters of the 2D image, the first morphological parameters of the target, and the first joint point parameters of the target.
  • the first joint point parameter includes the 3D coordinates of the first joint points and the like. The target may contain many joint points; some joint points may not be used when forming the first 3D model, and the joint points that are used when forming the first 3D model are called first joint points. Taking the human body as an example, the human body contains many joints, and the key points of these joints are called joint points. When constructing the first 3D model, joint points that are not very important to the outward appearance of the human body can be ignored; for example, the joint points corresponding to the finger joints of the hand may not be very important and can be ignored.
  • the first morphological parameter of the target may include various parameters indicating the size of the target in different dimensions, for example, the height, the fatness or thinness, and the sizes of different parts of the target; the sizes of different parts may include morphological parameters such as waist circumference, bust circumference, hip circumference, or face length.
  • the camera parameters of the 2D image may include: camera internal parameters for shooting the 2D image.
  • the internal parameters include, but are not limited to: the focal length, the width dx of a single pixel of the 2D image in the world coordinate system, and the height dy of a single pixel of the 2D image in the world coordinate system.
  • the target is a human body as an example.
  • the HMR model calculates the parameters of each key point of the human body in the first 2D image; for example, these parameters include the 3D coordinates of the joint points corresponding to the 24 joints on the human skeleton and the morphological parameters of the human body.
  • the HMR model also outputs the camera parameters of the camera that took the first 2D image. Camera parameters may include, for example, focal length and optical center coordinates.
  • the parametric human body model can be, for example, a Skinned Multi-Person Linear Model (SMPL) model.
  • the SMPL model is a skinned, vertex-based, three-dimensional human body model. It can accurately represent the different shapes and poses of the human body.
  • the HMR model outputs the coordinates of joint points corresponding to various joints on various skeletons in the first 2D image.
  • the shape of the human body depends not only on the skeleton but also on the characteristics of tissues and organs such as muscles.
  • the 3D coordinates of the skeleton and the morphological parameters of the human body obtained by the HMR model can be extracted from the first 2D image, and the muscle movement and tissue distribution around the skeleton can be simulated, so as to render the skeleton and obtain the first 3D model.
  • the first 3D model obtained in this way can realistically reflect various features of the target in the first 2D image.
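The sketch below illustrates the shape-blendshape idea of an SMPL-style body model with random stand-in data; the real SMPL template, blend shapes, pose blendshapes, and linear blend skinning are omitted, and the HMR network that regresses the coefficients is only assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 6890                                   # SMPL uses 6890 mesh vertices
template = rng.normal(size=(V, 3))         # stand-in mean body mesh
shape_dirs = rng.normal(size=(V, 3, 10))   # stand-in shape blend shapes (10 betas)

def smpl_like_vertices(betas):
    """Mean template plus linear shape offsets; pose-dependent blendshapes
    and skinning of the full SMPL model are omitted in this sketch."""
    return template + shape_dirs @ betas

betas = rng.normal(scale=0.5, size=10)     # e.g., regressed by an HMR network
first_3d_model = smpl_like_vertices(betas)
```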
  • Fig. 5A is a schematic diagram of the acquired original first 2D image
  • Fig. 5B is a schematic diagram of the first joint point extracted from the human body shown in Fig. 5A, represented by solid circles distributed on the human body.
  • the method further includes: extracting a second joint point parameter of the target from the first 2D image.
  • the above-mentioned second joint point parameter and first joint point parameter are both parameters of joint points of the human body, but they are extracted in different ways, have different accuracies, or are parameters of different types.
  • for example, the first joint point parameter includes the 3D coordinates of the first joint points, while the second joint point parameter includes the 2D coordinates of the second joint points.
  • the first joint point parameter is extracted using the HMR model
  • the second joint point is extracted using the human body detection model.
  • the human body detection model can obtain the 2D coordinates of the joint points on the human skeleton according to the input first 2D image.
  • the second joint point parameter extracted from the 2D image alone may have a higher accuracy than the first joint point parameter in the first reconstruction parameter.
  • the method may further include: extracting the second joint point parameter of the target from the first 2D image through a human body detection model (such as an OpenPose model); wherein the second joint points represented by the second joint point parameter overlap with part of the first joint points represented by the first joint point parameter;
  • the above-mentioned reconstructing of the first 3D model of the target in 3D space using the extracted first reconstruction parameter includes: replacing, among the first reconstruction parameters, the first joint point parameter of the first joint points that overlap with the second joint points with the second joint point parameter to form the second reconstruction parameter; and reconstructing, based on the second reconstruction parameter, the first 3D model of the target in 3D space.
  • the second reconstruction parameter is relative to the first reconstruction parameter, and part of the first joint point parameter in the first reconstruction parameter is replaced by the second joint point parameter.
  • for example, there are M first joint point parameters and N second joint point parameters, where N is less than or equal to M.
  • M may be 25; N may be 14.
  • the first joint point parameter that points to the same joint point as the 14 second joint point parameters is replaced with the second joint point parameter.
  • since the second joint point parameter is a 2D coordinate, when the first joint point parameter is replaced, only the 2D component contained in the 3D coordinate is replaced, and the remaining component of the 3D coordinate may be left unchanged. That is, the first joint point parameter of the first joint points that overlap with the second joint points is replaced with the second joint point parameter to form the second reconstruction parameter.
  • 3D coordinates include three coordinates corresponding to the x-axis, y-axis, and z-axis; while 2D coordinates include two coordinates corresponding to the x-axis and y-axis.
  • the 2D coordinates are replaced with the coordinate values on the x-axis and the y-axis in the 3D coordinates to obtain the aforementioned second reconstruction parameters.
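A minimal sketch of this replacement step follows; the index mapping between the 25 HMR joints and the 14 OpenPose joints is hypothetical, not the real correspondence.

```python
import numpy as np

M, N = 25, 14
hmr_joints_3d = np.zeros((M, 3))              # first joint point parameters (x, y, z)
openpose_joints_2d = np.ones((N, 2))          # second joint point parameters (x, y)
overlap_idx = np.arange(N)                    # hypothetical mapping into the 25 joints

second_recon = hmr_joints_3d.copy()
second_recon[overlap_idx, :2] = openpose_joints_2d  # replace x and y, keep z
```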
  • Figure 2 shows a human skeleton with 14 second joint points
  • Figure 3 shows a human skeleton with 25 first joint points.
  • the spine root joint point in Figure 3 is the center point of the left hip joint point and the right hip joint point, and can be ignored. It can be seen that the 14 joint points in Fig. 2 are included in the 25 joint points in Fig. 3.
  • the joint points in Figure 2 include: joint point 1 is the head joint point; joint point 2 is the neck joint point; joint point 4 is the left shoulder joint point; joint point 3 is the right shoulder joint point; joint point 6 is the left elbow joint point; joint point 5 is the right elbow joint point; joint point 8 is the left wrist joint point; joint point 7 is the right wrist joint point; joint point 10 is the left hip joint point; joint point 9 is the right hip joint point; joint point 12 is the left knee joint point; joint point 11 is the right knee joint point; joint point 13 is the left ankle joint point; joint point 14 is the right ankle joint point.
  • the first joint point parameter can be directly replaced by the second joint point parameter, and the first 3D model is then built based on the resulting second reconstruction parameter.
  • step S130 may include: mapping, according to the camera parameters corresponding to the first 2D image, the first 3D model and the second 3D model to 2D space respectively to obtain the first 2D coordinates and the second 2D coordinates.
  • the camera parameters are internal camera parameters.
  • the camera parameters may include the length of the first 2D image in two mutually perpendicular directions in the world coordinate system, and may also include parameters such as focal length.
  • based on these parameters, the size of the projection surface onto which the first 3D model and the second 3D model are projected can be determined; the projection areas of the first 3D model and the second 3D model on the projection surface can also be determined based on the focal length.
  • after the first 3D model is mapped into 2D space, a mapped projection is obtained, and the coordinates of this projection are the first 2D coordinates; similarly, after the second 3D model is mapped into 2D space, a mapped projection is obtained, and the coordinates of that projection are the second 2D coordinates.
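A sketch of this projection using an assumed intrinsic matrix K is given below; applying the same call to the second 3D model yields the second 2D coordinates. The numeric values are illustrative.

```python
import numpy as np

K = np.array([[1000.0, 0.0, 256.0],   # fx, 0, cx  (illustrative intrinsics)
              [0.0, 1000.0, 256.0],   # 0, fy, cy
              [0.0, 0.0, 1.0]])

def project_with_intrinsics(vertices, K):
    """Project (N, 3) camera-space vertices with intrinsic matrix K."""
    homo = vertices @ K.T                # homogeneous image points
    return homo[:, :2] / homo[:, 2:3]    # divide by depth to get pixels

verts_first = np.array([[0.1, 0.2, 2.0], [-0.1, 0.0, 2.5]])
first_2d = project_with_intrinsics(verts_first, K)
# project the second 3D model with the same K to obtain the second 2D coordinates
```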
  • the above method further includes:
  • Step S201: Acquire contour point parameters of the target in the first 2D image;
  • Step S202: Based on the contour point parameters, determine the connection relationship between at least two contour points and determine the deformation direction represented by the connection relationship;
  • the deformation of the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates to obtain a second 2D image containing the deformed target includes: based on the first 2D coordinates and the second 2D coordinates, The deformation direction deforms the target in the first 2D image to obtain a second 2D image containing the deformed target.
  • a contour point model capable of extracting contour points of the target in the first 2D image may be used to extract contour point parameters of the target in the first 2D image.
  • the contour point parameters include but are not limited to contour point coordinates.
  • FIG. 5C is a schematic diagram of the contour points extracted from the human body in FIG. 5A.
  • the operation of transforming the first 2D coordinates into the second 2D coordinates is involved.
  • the contour point parameters of the target in the first 2D image are acquired, and two predetermined contour points are connected based on the contour point parameters to obtain the deformation direction in which the first 2D coordinates corresponding to a certain part of the target are transformed into the second 2D coordinates.
  • in step S140, when the target is deformed, it can be deformed along the deformation direction determined in step S202, instead of being deformed in an arbitrary manner that merely moves the first 2D coordinates to the second 2D coordinates; deforming along the determined deformation direction further improves the deformation effect.
  • step S202 may include: determining, based on the contour point parameters, the deformation direction according to a first connection direction of contour points of at least two parts symmetrically distributed in the target; wherein the deformation direction includes at least one of the following: a first deformation direction parallel to the first connection direction; and a second deformation direction perpendicular to the first connection direction.
  • the two symmetrically distributed parts of the human body include at least one of the following: the left and right shoulders of the human body; the left and right legs of the human body; the left and right breasts of the human body; the left and right hips of the human body, etc.
  • each of these pairs consists of two symmetrically distributed parts of the human body that serve as the predetermined parts.
  • step S202 may further include: determining, based on the contour point parameters, the deformation direction according to a second connection direction of at least two contour points symmetrically distributed about a center point or center line of a predetermined part of the target, where the deformation direction includes: a third deformation direction parallel to the second connection direction, and/or a fourth deformation direction perpendicular to the second connection direction.
  • the predetermined part may include at least one of the following: a left leg, a right leg, a left arm, a right arm, and so on.
  • the connection direction of the contour points on the two sides of the left leg is one second connection direction, and the connection direction of the contour points on the two sides of the right leg is another second connection direction.
  • the deformation direction includes at least the following two types of directions: one type is the deformation direction for adjusting the fatness or thinness of the target, for example, the first deformation direction and/or the third deformation direction; the other type is the deformation direction for adjusting the height of the target, for example, the second deformation direction and/or the fourth deformation direction.
  • the second deformation direction is perpendicular to the first deformation direction; the fourth deformation direction is perpendicular to the third deformation direction.
  • FIG. 5D shows a solid horizontal line added to the human body shown in FIG. 5A, and the solid horizontal line is the first deformation direction and/or the third deformation direction.
  • the vertical dashed lines covering the human body shown in FIG. 5E are perpendicular to the solid horizontal line in FIG. 5D, corresponding to the second deformation direction and/or the fourth deformation direction.
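A small sketch of deriving the two direction types from a symmetric contour-point pair; the shoulder coordinates below are made-up values.

```python
import numpy as np

left_shoulder = np.array([120.0, 200.0])
right_shoulder = np.array([280.0, 204.0])

connect = right_shoulder - left_shoulder
first_dir = connect / np.linalg.norm(connect)         # parallel: first deformation direction
second_dir = np.array([-first_dir[1], first_dir[0]])  # perpendicular: second direction
```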
  • step S140 may include: determining a second 2D deformation parameter that moves the first 2D coordinate to the second 2D coordinate along the deformation direction; and performing the deformation of the target in the first 2D image based on the second 2D deformation parameter , To obtain a second 2D image containing the deformed target.
  • the first 2D coordinates and the second 2D coordinates correspond to the same position or the same joint point of the target; by moving the coordinates along the deformation direction, a deformation function (deformation algorithm) can be fitted for the corresponding position of the target in the 2D image.
  • the target in the 2D image is then deformed according to the second 2D deformation parameters.
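As a hedged illustration of such a deformation function, the sketch below uses an inverse-distance-weighted blend of the control-point displacements; production systems often use moving least squares or similar local warps instead, and the patent does not specify the exact algorithm.

```python
import numpy as np

def warp_image(image, src_pts, dst_pts, eps=1e-6):
    """Backward-map each output pixel using inverse-distance weighting
    over the control-point displacements (first 2D -> second 2D)."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([xs, ys], axis=-1).astype(float)          # (h, w, 2)
    disp = (src_pts - dst_pts)[None, None]                    # backward displacements
    d2 = ((grid[..., None, :] - dst_pts) ** 2).sum(-1) + eps  # (h, w, n) squared distances
    wgt = (1.0 / d2) / (1.0 / d2).sum(-1, keepdims=True)      # normalized weights
    sample = grid + (wgt[..., None] * disp).sum(-2)           # source coordinates
    sx = np.clip(np.round(sample[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(sample[..., 1]).astype(int), 0, h - 1)
    return image[sy, sx]

img = np.zeros((64, 64), dtype=np.uint8)
img[20:40, 20:40] = 255
warped = warp_image(img, np.array([[30.0, 30.0]]),
                    np.array([[34.0, 30.0]]))  # move one control point right
```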
  • step S120 may include: based on the 3D deformation parameters, changing the coordinates of at least part of the contour points of the first 3D model to obtain the second 3D model.
  • a new 3D model is obtained, and the new 3D model is referred to as the second 3D model in the embodiments of the present disclosure.
  • the image processing device includes:
  • the first obtaining module 110 is configured to obtain a first 3D model of the target in a three-dimensional 3D space based on a first two-dimensional 2D image containing the target;
  • the second acquiring module 120 is configured to acquire 3D deformation parameters, and based on the 3D deformation parameters, transform the first 3D model into a second 3D model;
  • the mapping module 130 is configured to map the first 3D model to the 2D space to obtain the first 2D coordinates, and to map the second 3D model to the 2D space to obtain the second 2D coordinates;
  • the deforming module 140 is configured to deform the target in the first 2D image based on the first 2D coordinate and the second 2D coordinate to obtain a second 2D image containing the deformed target.
  • the first acquisition module 110, the second acquisition module 120, the mapping module 130, and the deformation module 140 may all be program modules, which can implement the aforementioned functions after being executed by a processor.
  • the first acquisition module 110, the second acquisition module 120, the mapping module 130, and the deformation module 140 may all be software-hardware combined modules; the software-hardware combined modules include but are not limited to programmable arrays; programmable arrays include But it is not limited to complex programmable arrays or field programmable arrays.
  • the first acquisition module 110, the second acquisition module 120, the mapping module 130, and the deformation module 140 may all be pure hardware modules, and the pure hardware modules include, but are not limited to, application-specific integrated circuits.
  • the first acquisition module 110 is configured to extract the first reconstruction parameters for reconstructing the 3D model of the target from the first 2D image through the human body mesh recovery model; using the extracted first reconstruction parameters, Rebuild the first 3D model of the target in the 3D space.
  • the first reconstruction parameter includes at least one of the following parameters: a first joint point parameter of the target; a first morphological parameter of the target; a camera parameter of the first 2D image.
  • the device further includes:
  • the extraction module is configured to extract the second joint point parameter of the target from the first 2D image through the human body detection model; wherein the second joint points represented by the second joint point parameter overlap with part of the first joint points represented by the first joint point parameter;
  • the first acquisition module 110 is configured to replace, in the first reconstruction parameter, the first joint point parameter of the first joint points that overlap with the second joint points with the second joint point parameter to form the second reconstruction parameter, and to reconstruct the first 3D model of the target in 3D space based on the second reconstruction parameter.
  • the mapping module 130 is configured to respectively map the first 3D model and the second 3D model to the 2D space according to the camera parameters corresponding to the first 2D image to obtain the first 2D coordinates and the second 2D coordinates.
  • the device further includes:
  • the third acquiring module is configured to acquire the first 2D deformation parameter of the first 2D image;
  • the second acquiring module 120 is configured to acquire the 3D deformation parameter according to the first 2D deformation parameter and the mapping relationship between the 2D space and the 3D space.
  • the device further includes:
  • the fourth acquiring module is configured to acquire contour point parameters of the target in the first 2D image
  • the determining module is configured to determine the connection relationship between at least two contour points and determine the deformation direction represented by the connection relationship based on the contour point parameters;
  • the deforming module 140 is configured to deform the target in the first 2D image in the deforming direction based on the first 2D coordinate and the second 2D coordinate to obtain a second 2D image containing the deformed target.
  • the determining module is configured to determine the deformation direction according to the first connection direction of at least two local contour points symmetrically distributed in the target based on the contour point parameters; wherein the deformation direction includes at least one of the following directions: One type: a first deformation direction parallel to the first connection direction; a second deformation direction perpendicular to the first connection direction.
  • the determining module is configured to determine, based on the contour point parameters, the deformation direction according to the second connection direction of at least two contour points symmetrically distributed about a center point or center line of a predetermined part of the target, wherein the deformation direction includes: a third deformation direction parallel to the second connection direction, and/or a fourth deformation direction perpendicular to the second connection direction.
  • the deformation module 140 is configured to determine a second 2D deformation parameter in which the first 2D coordinate moves to the second 2D coordinate along the deformation direction; and based on the second 2D deformation parameter, perform the deformation of the target in the first 2D image, A second 2D image containing the deformed target is obtained.
  • the second acquisition module 120 is configured to change the coordinates of at least part of the contour points of the first 3D model to obtain the second 3D model based on the 3D deformation parameters.
  • This example provides an image processing method, including:
  • the OpenPose model is used to extract the joint point parameters of N joints of the human body from a single first 2D image, where N is a positive integer.
  • the first 2D image is input into the HMR model, and the extracted first reconstruction parameters are output from the HMR model.
  • the first reconstruction parameters may include: the 3D coordinates of M1 joint points of the human body (excluding the spine root joint point), giving M1×3 parameters; at the same time, the HMR model also obtains M2 camera intrinsic parameters of the camera that captured the first 2D image.
  • the M2 camera intrinsic parameters may include: the focal length when the first 2D image was captured, and the width and height of a single pixel of the 2D image in the world coordinate system. In this way, a first reconstruction parameter composed of M1×3+M2 parameters can be obtained through the HMR model. Both M1 and M2 are positive integers.
  • N1 joint point parameters among the M1×3+M2 parameters are replaced with the N joint point parameters provided by the OpenPose model to form an updated first reconstruction parameter, that is, the aforementioned second reconstruction parameter.
  • N1 is a positive integer.
  • the first 3D model is composed of X joint points and of more than Y million triangular faces formed based on these joint points; both X and Y are positive integers.
  • the two 3D models are respectively mapped into the 2D plane according to the camera parameters, and the first 2D coordinates and the second 2D coordinates respectively corresponding to the two models will be obtained.
  • a first 2D coordinate and a second 2D coordinate both correspond to a joint point of the human body.
  • the contour points of the human body in the first 2D image are detected using the human contour detection model to obtain the coordinates of the contour points. Determine the deformation direction according to the local deformation and the deformation demand.
  • the number of the contour points may be Z.
  • Z is a positive integer.
  • lines are formed by horizontally connecting contour points of the human body to obtain the corresponding connection directions; for example, connecting the left shoulder contour point and the right shoulder contour point gives the first connection direction.
  • the 2D deformation parameters for the shoulder deformation (for example, a 2D deformation algorithm) are fitted according to the first 2D coordinates of the shoulder, the first connection direction, and the second 2D coordinates of the shoulder; the shoulder of the target is then deformed using the 2D deformation parameters.
  • specifically, the direction of the line connecting the two end points of a part is taken as the vector direction of a first set of vectors, and a second set of vectors is also introduced, whose vector direction is perpendicular to that of the first set. If the vector direction of the first set of vectors is consistent with the direction in which the body becomes fatter or thinner, fat/thin deformation is carried out along the vector direction of the first set; if the height of the human body needs to be changed, height deformation is carried out along the vector direction of the second set.
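A minimal sketch of scaling points along the two vector sets; the points, center, and scale factors are illustrative assumptions.

```python
import numpy as np

def scale_along(points, center, direction, factor):
    """Scale 2D points about `center` along unit vector `direction`."""
    rel = points - center
    along = rel @ direction                        # signed length along the axis
    return points + np.outer(along * (factor - 1.0), direction)

pts = np.array([[100.0, 50.0], [200.0, 50.0], [150.0, 300.0]])
center = pts.mean(axis=0)
slimmer = scale_along(pts, center, np.array([1.0, 0.0]), 0.9)   # fat/thin axis
taller = scale_along(pts, center, np.array([0.0, 1.0]), 1.05)   # height axis
```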
  • if the shoulders are widened, the shoulders can be stretched directly in the first 2D image; if the shoulders are narrowed, after the shoulders are reduced in the first 2D image, the background other than the human body (that is, the target) is stretched to fill the gaps left by the reduced shoulders, and the deformed second 2D image is then obtained.
  • an embodiment of the present disclosure also provides an image processing device, including:
  • Memory for storing computer executable instructions
  • the processor is connected to the memory, and is configured to implement the image processing method provided by one or more of the foregoing technical solutions by executing computer-executable instructions stored on the memory, for example, the image shown in FIG. 1 and/or FIG. 4 At least one of the processing methods.
  • the memory can be various types of memory, such as random access memory, read-only memory, flash memory, and so on.
  • the memory can be used for information storage, for example, to store computer-executable instructions.
  • the computer-executable instructions may be various program instructions, for example, target program instructions and/or source program instructions.
  • the processor may be various types of processors, for example, a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application-specific integrated circuit, or an image processor.
  • the processor can be connected to the memory via a bus.
  • the bus may be an integrated circuit bus or the like.
  • the terminal device may further include: a communication interface, and the communication interface may include: a network interface, for example, a local area network interface, a transceiver antenna, and the like.
  • the communication interface is also connected to the processor and can be used to send and receive information.
  • the terminal device further includes a human-computer interaction interface.
  • the human-computer interaction interface may include various input and output devices, such as a keyboard, a touch screen, and the like.
  • the image processing device further includes a display, which can display various prompts, collected facial images, and/or various interfaces.
  • the embodiments of the present disclosure provide a computer storage medium, and the computer storage medium stores computer-executable instructions; after the computer-executable instructions are executed, the image processing method provided by one or more technical solutions can be implemented, for example, as shown in FIG. 1 And/or at least one of the image processing methods shown in FIG. 4.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative, for example, the division of units is only a logical function division, and there may be other divisions in actual implementation, such as: multiple units or components can be combined or integrated To another system, or some features can be ignored, or not implemented.
  • the coupling, or direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms. of.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units; Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • the functional units in the embodiments of the present disclosure can be all integrated into one processing module, or each unit can be individually used as a unit, or two or more units can be integrated into one unit;
  • the unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • a person of ordinary skill in the art can understand that all or part of the steps in the above method embodiments can be implemented by a program instructing relevant hardware.
  • the foregoing program can be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed; and the foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The embodiments of the present disclosure disclose an image processing method and apparatus, an image processing device, and a storage medium. The image processing method includes: obtaining a first 3D model of a target in three-dimensional 3D space based on a first two-dimensional 2D image containing the target; acquiring 3D deformation parameters, and transforming the first 3D model into a second 3D model based on the 3D deformation parameters; mapping the first 3D model to 2D space to obtain first 2D coordinates, and mapping the second 3D model to 2D space to obtain second 2D coordinates; and deforming the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates, to obtain a second 2D image containing the deformed target.

Description

Image processing method and apparatus, image processing device, and storage medium
CROSS-REFERENCE TO RELATED APPLICATION
The present disclosure is based on, and claims priority to, Chinese patent application No. 201911148068.3 filed on November 21, 2019, the entire contents of which are incorporated into the present disclosure by reference.
TECHNICAL FIELD
The present disclosure relates to the field of image processing technology, and in particular to an image processing method and apparatus, an image processing device, and a storage medium.
BACKGROUND
A captured image contains the imaging of a captured object (the target), and its imaging effect may need to be adjusted by deforming the target, for example, slimming a human body or beautifying a face. In some cases, however, the deformation effect of the image is not ideal, and the deformation may even introduce unwanted deformation beyond the desired one.
SUMMARY
The embodiments of the present disclosure are intended to provide an image processing method and apparatus, an image processing device, and a storage medium.
The technical solutions of the embodiments of the present disclosure are implemented as follows.
A first aspect of the embodiments of the present disclosure provides an image processing method, including: obtaining a first 3D model of a target in three-dimensional 3D space based on a first two-dimensional 2D image containing the target; acquiring 3D deformation parameters, and transforming the first 3D model into a second 3D model based on the 3D deformation parameters; mapping the first 3D model to 2D space to obtain first 2D coordinates, and mapping the second 3D model to 2D space to obtain second 2D coordinates; and deforming the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates, to obtain a second 2D image containing the deformed target.
In some optional embodiments of the present disclosure, obtaining the first 3D model of the target in 3D space based on the first 2D image containing the target includes: extracting, from the first 2D image through a human mesh recovery model, first reconstruction parameters for reconstructing a 3D model of the target; and reconstructing the first 3D model of the target in 3D space using the extracted first reconstruction parameters.
In some optional embodiments of the present disclosure, the first reconstruction parameters include at least one of the following parameters: first joint point parameters of the target; first morphological parameters of the target; camera parameters of the first 2D image.
In some optional embodiments of the present disclosure, the method further includes: extracting second joint point parameters of the target from the first 2D image through a human body detection model, where the second joint points represented by the second joint point parameters overlap with some of the first joint points represented by the first joint point parameters. Reconstructing the first 3D model of the target in 3D space using the extracted first reconstruction parameters includes: replacing, in the first reconstruction parameters, the first joint point parameters of the first joint points that overlap with the second joint points with the second joint point parameters, to form second reconstruction parameters; and reconstructing the first 3D model of the target in 3D space based on the second reconstruction parameters.
In some optional embodiments of the present disclosure, mapping the first 3D model to 2D space to obtain the first 2D coordinates and mapping the second 3D model to 2D space to obtain the second 2D coordinates includes: mapping the first 3D model and the second 3D model to 2D space respectively according to the camera parameters corresponding to the first 2D image, to obtain the first 2D coordinates and the second 2D coordinates.
In some optional embodiments of the present disclosure, the method further includes: acquiring first 2D deformation parameters of the first 2D image; and acquiring the 3D deformation parameters includes: obtaining the 3D deformation parameters according to the first 2D deformation parameters and a mapping relationship between 2D space and 3D space.
In some optional embodiments of the present disclosure, the method further includes: acquiring contour point parameters of the target in the first 2D image; and determining, based on the contour point parameters, a connection relationship between at least two contour points and a deformation direction represented by the connection relationship. Deforming the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates to obtain the second 2D image containing the deformed target includes: deforming the target in the first 2D image along the deformation direction based on the first 2D coordinates and the second 2D coordinates, to obtain the second 2D image containing the deformed target.
In some optional embodiments of the present disclosure, determining, based on the contour point parameters, the connection relationship between at least two contour points and the deformation direction represented by the connection relationship includes: determining, based on the contour point parameters, the deformation direction according to a first connection direction of contour points of at least two symmetrically distributed local parts of the target, where the deformation direction includes at least one of the following directions: a first deformation direction parallel to the first connection direction; a second deformation direction perpendicular to the first connection direction.
In some optional embodiments of the present disclosure, determining, based on the contour point parameters, the connection relationship between at least two contour points and the deformation direction represented by the connection relationship includes: determining, based on the contour point parameters, the deformation direction according to a second connection direction of at least two contour points symmetrically distributed about a center point or center line of a predetermined local part of the target, where the deformation direction includes: a third deformation direction parallel to the second connection direction, and/or a fourth deformation direction perpendicular to the second connection direction.
In some optional embodiments of the present disclosure, deforming the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates to obtain the second 2D image containing the deformed target includes: determining second 2D deformation parameters for moving the first 2D coordinates along the deformation direction to the second 2D coordinates; and deforming the target in the first 2D image based on the second 2D deformation parameters, to obtain the second 2D image containing the deformed target.
In some optional embodiments of the present disclosure, transforming the first 3D model into the second 3D model includes: changing coordinates of at least some contour points of the first 3D model based on the 3D deformation parameters, to obtain the second 3D model.
A second aspect of the embodiments of the present disclosure provides an image processing apparatus, including:
a first acquisition module configured to obtain a first 3D model of a target in three-dimensional 3D space based on a first two-dimensional 2D image containing the target;
a second acquisition module configured to acquire 3D deformation parameters, and transform the first 3D model into a second 3D model based on the 3D deformation parameters;
a mapping module configured to map the first 3D model to 2D space to obtain first 2D coordinates, and map the second 3D model to 2D space to obtain second 2D coordinates;
a deformation module configured to deform the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates, to obtain a second 2D image containing the deformed target.
In some optional embodiments of the present disclosure, the first acquisition module is configured to extract, from the first 2D image through a human mesh recovery model, first reconstruction parameters for reconstructing a 3D model of the target, and reconstruct the first 3D model of the target in 3D space using the extracted first reconstruction parameters.
In some optional embodiments of the present disclosure, the first reconstruction parameters include at least one of the following parameters: first joint point parameters of the target; first morphological parameters of the target; camera parameters of the first 2D image.
In some optional embodiments of the present disclosure, the apparatus further includes: an extraction module configured to extract second joint point parameters of the target from the first 2D image through a human body detection model, where the second joint points represented by the second joint point parameters overlap with some of the first joint points represented by the first joint point parameters.
The first acquisition module is configured to replace, in the first reconstruction parameters, the first joint point parameters of the first joint points that overlap with the second joint points with the second joint point parameters to form second reconstruction parameters, and reconstruct the first 3D model of the target in 3D space based on the second reconstruction parameters.
In some optional embodiments of the present disclosure, the mapping module is configured to map the first 3D model and the second 3D model to 2D space respectively according to the camera parameters corresponding to the first 2D image, to obtain the first 2D coordinates and the second 2D coordinates.
In some optional embodiments of the present disclosure, the apparatus further includes: a third acquisition module configured to acquire 2D deformation parameters of the first 2D image.
The second acquisition module is configured to obtain the 3D deformation parameters according to the first 2D deformation parameters and the mapping relationship between 2D space and 3D space.
In some optional embodiments of the present disclosure, the apparatus further includes: a fourth acquisition module configured to acquire contour point parameters of the target in the first 2D image;
a determination module configured to determine, based on the contour point parameters, a connection relationship between at least two contour points and a deformation direction represented by the connection relationship.
The deformation module is configured to deform the target in the first 2D image along the deformation direction based on the first 2D coordinates and the second 2D coordinates, to obtain the second 2D image containing the deformed target.
In some optional embodiments of the present disclosure, the determination module is configured to determine, based on the contour point parameters, the deformation direction according to a first connection direction of contour points of at least two symmetrically distributed local parts of the target, where the deformation direction includes at least one of the following directions: a first deformation direction parallel to the first connection direction; a second deformation direction perpendicular to the first connection direction.
In some optional embodiments of the present disclosure, the determination module is configured to determine, based on the contour point parameters, the deformation direction according to a second connection direction of at least two contour points symmetrically distributed about a center point or center line of a predetermined local part of the target, where the deformation direction includes: a third deformation direction parallel to the second connection direction, and/or a fourth deformation direction perpendicular to the second connection direction.
In some optional embodiments of the present disclosure, the deformation module is configured to determine second 2D deformation parameters for moving the first 2D coordinates along the deformation direction to the second 2D coordinates, and deform the target in the first 2D image based on the second 2D deformation parameters, to obtain the second 2D image containing the deformed target.
In some optional embodiments of the present disclosure, the second acquisition module is configured to change coordinates of at least some contour points of the first 3D model based on the 3D deformation parameters, to obtain the second 3D model.
A third aspect of the embodiments of the present disclosure provides an image processing device, including:
a memory for storing computer-executable instructions;
a processor connected to the memory and configured to implement the image processing method provided by any of the foregoing technical solutions by executing the computer-executable instructions.
A fourth aspect of the embodiments of the present disclosure provides a computer storage medium storing computer-executable instructions; after the computer-executable instructions are executed by a processor, the image processing method provided by any of the foregoing technical solutions can be implemented.
In the technical solutions provided by the embodiments of the present disclosure, when the target in the first 2D image is deformed, the image is no longer deformed directly in the 2D plane; instead, the target in the first 2D image is converted into a first 3D model in 3D space. During deformation, 3D deformation parameters are acquired and the first 3D model is deformed directly to obtain a deformed second 3D model; the first 3D model and the second 3D model are then mapped to 2D space respectively to obtain first 2D coordinates and second 2D coordinates mapped back into the 2D plane, and the target in the first 2D image is deformed based on the first 2D coordinates and the second 2D coordinates. In this way, compared with deforming the target directly in the 2D plane, unnecessary deformation can be reduced and the deformation effect of the 2D image is improved.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic flowchart of a first image processing method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of 14 human body joint points provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of 25 human body joint points provided by an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of a second image processing method provided by an embodiment of the present disclosure;
FIG. 5A is a schematic diagram of the effect of a 2D image provided by an embodiment of the present disclosure;
FIG. 5B is a schematic diagram of extracted human body joint points provided by an embodiment of the present disclosure;
FIG. 5C is a schematic diagram of extracted human body contour points provided by an embodiment of the present disclosure;
FIG. 5D is a schematic diagram of a first deformation direction provided by an embodiment of the present disclosure;
FIG. 5E is a schematic diagram of a second deformation direction provided by an embodiment of the present disclosure;
FIG. 6A is a schematic diagram of the effect of a 3D model provided by an embodiment of the present disclosure;
FIG. 6B is a schematic diagram of the effect of another 3D model provided by an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
DETAILED DESCRIPTION
The technical solutions of the present disclosure are further elaborated below with reference to the accompanying drawings and specific embodiments.
As shown in FIG. 1, this embodiment provides an image processing method, including:
S110: obtaining a first 3D model of a target in three-dimensional 3D space based on a first two-dimensional 2D image containing the target;
S120: acquiring 3D deformation parameters, and transforming the first 3D model into a second 3D model based on the 3D deformation parameters;
S130: mapping the first 3D model to 2D space to obtain first 2D coordinates, and mapping the second 3D model to 2D space to obtain second 2D coordinates;
S140: deforming the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates, to obtain a second 2D image containing the deformed target.
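Read end to end, steps S110 to S140 compose into a single pipeline. The Python sketch below shows only that composition; every callable it takes is a hypothetical placeholder for the concrete models introduced later in this description, not an implementation fixed by this disclosure:

```python
def deform_image(first_2d_image, reconstruct_3d, deform_3d, project_to_2d, warp_2d):
    """End-to-end sketch of steps S110-S140; all callables are assumed inputs."""
    # S110: lift the target in the 2D image to a first 3D model.
    first_3d_model = reconstruct_3d(first_2d_image)
    # S120: apply the 3D deformation parameters to obtain the second 3D model.
    second_3d_model = deform_3d(first_3d_model)
    # S130: project both models back into the 2D image plane.
    first_2d_coords = project_to_2d(first_3d_model)
    second_2d_coords = project_to_2d(second_3d_model)
    # S140: warp the image so points at first_2d_coords move to second_2d_coords.
    return warp_2d(first_2d_image, first_2d_coords, second_2d_coords)
```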
The image processing method of this embodiment can be applied to various image processing devices. Exemplarily, the image processing device may include various terminal devices, such as a mobile phone or a wearable device. The terminal device may also include a vehicle-mounted terminal device, or a fixed terminal device fixed at a certain location and dedicated to image capture and photography. In other embodiments, the image processing device may also include a server, for example, a local server or a cloud server located in a cloud platform that provides image processing services.
In some optional embodiments, the 2D image can be captured by a 2D camera. Exemplarily, the 2D image may be a Red Green Blue (RGB) image or a YUV image, where YUV is a color encoding scheme: "Y" denotes luminance (Luma), i.e., the grayscale value, and "U" and "V" denote chrominance (Chroma), which describe the color and saturation of the image.
In some optional embodiments, in order to achieve optimized deformation of the target in the 2D image, this embodiment does not deform the target in the 2D image directly; instead, a 3D model of the target in 3D space is obtained based on the 2D image, and this 3D model is denoted as the first 3D model. FIG. 6A and FIG. 6B are schematic diagrams of the effects of two first 3D models corresponding to different targets.
The 3D deformation parameters are obtained based on the first 3D model. Exemplarily, the 3D deformation parameters may include transformation parameters for one or more 3D coordinates on the first 3D model in 3D space.
Exemplarily, the transformation parameters may include at least one of the following parameters: proportion values of different local parts of the desired 3D model, size values of different local parts of the desired 3D model, the deformation direction for transforming the first 3D model into the second 3D model, and/or the deformation magnitude in the corresponding deformation direction.
Here, the proportion value of different local parts of the desired 3D model is a scalar representing the proportion between different local parts. For example, the proportion value may be the ratio between the length of the upper body and the length of the legs of a human body.
Taking a human body as the target, the size values of different local parts of the desired 3D model may include, for example, size values such as the leg length, waist width, and height of the human body.
In some embodiments, the method further includes: acquiring first 2D deformation parameters of the first 2D image; and acquiring the 3D deformation parameters may include: obtaining the 3D deformation parameters according to the first 2D deformation parameters and the mapping relationship between 2D space and 3D space.
For example, the first 2D image is displayed on a 2D human-computer interaction interface, and the user can move the 2D coordinates of one or more parts of the target on the first 2D image by manual operation or the like; the image processing device can determine the first 2D deformation parameters according to the user operations input by the user. The first 2D deformation parameters may include: the contour points on the first 2D image whose 2D coordinates are transformed in 2D space, and the coordinate transformation values of these contour points. The first 2D deformation parameters are then converted into the 3D deformation parameters according to the mapping relationship between 2D space and 3D space.
In some embodiments, the 3D deformation parameters may be deformation parameters generated according to user instructions received from the human-computer interaction interface. For example, after the first 3D model is constructed, one plane of the first 3D model can be displayed on the 2D interface at a time, and different planes of the first 3D model can be displayed on the 2D interface by rotation or movement; in this case, the 3D deformation parameters can also be determined through the user operations on the different planes of the first 3D model.
In some embodiments, a user instruction at least indicating the deformation effect or the deformation amount is received on the human-computer interaction interface, and the 3D deformation parameters are obtained by quantifying the user instruction. For example, user A expects her photo to show a waist circumference of 2 chi (a traditional Chinese unit, about 67 cm) after slimming. In this case, the desired waist circumference can be input on the human-computer interaction interface; the image processing device estimates the user's actual waist circumference from the first 3D model obtained by mapping the first 2D image to 3D space, and then obtains, from the estimated actual waist circumference and the desired waist circumference, the amount by which the waist needs to be deformed in 3D space. Combined with the ideal shape of the waist, the displacement of each coordinate on the waist surface of the first 3D model is determined, thereby obtaining the 3D deformation parameters. The 3D deformation parameters obtained in this way can be used to transform the first 3D model into the second 3D model.
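As a minimal sketch of this quantification step, assuming the waist-loop vertices of the first 3D model have already been identified (the vertex selection and the purely radial scaling rule are illustrative assumptions, not a formula prescribed by this disclosure):

```python
import numpy as np

def waist_deformation_offsets(waist_vertices, desired_circumference):
    """waist_vertices: (N, 3) ordered loop of waist-surface points; the
    y-axis is assumed to be the body's vertical axis. Returns per-vertex
    displacements that scale the waist toward the desired circumference."""
    center = waist_vertices.mean(axis=0)
    radial = waist_vertices - center
    radial[:, 1] = 0.0                       # deform only in the horizontal plane
    # Approximate the actual circumference from the closed vertex loop.
    loop = np.vstack([waist_vertices, waist_vertices[:1]])
    actual = np.linalg.norm(np.diff(loop, axis=0), axis=1).sum()
    scale = desired_circumference / actual   # < 1 means slimming
    return radial * (scale - 1.0)            # displacement per vertex
```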
After the 3D deformation parameters are obtained, the first 3D model is transformed based on the 3D deformation parameters to generate a second 3D model different from the first 3D model. The difference between the second 3D model and the first 3D model is that the 3D coordinates of the contour points on the surface of the second 3D model are at least partially different from those of the contour points on the surface of the first 3D model, so that the shape corresponding to the second 3D model differs from the shape corresponding to the first 3D model. Exemplarily, taking a human body as the target, the body shape corresponding to the second 3D model differs from the body shape corresponding to the first 3D model; for example, the human body corresponding to the first 3D model may look visually heavier than the one corresponding to the second 3D model, and so on.
Thus, the first 3D model and the second 3D model are obtained after step S120.
In some optional embodiments, step S130 in this embodiment may include: projecting the first 3D model and the second 3D model into the 2D plane respectively, to obtain first 2D coordinates representing a first projection corresponding to the first 3D model, and second 2D coordinates representing a second projection corresponding to the second 3D model.
In some optional embodiments, step S130 may further include: obtaining second 2D deformation parameters according to the first 2D coordinates and the second 2D coordinates, where the second 2D deformation parameters are capable of converting the first 2D coordinates into the second 2D coordinates.
In some optional embodiments of the present disclosure, the coordinate values of the first 2D coordinates and the second 2D coordinates are compared to obtain the difference between the first 2D coordinates and the second 2D coordinates, and this difference can serve as one of the second 2D deformation parameters. For example, taking breast augmentation of a human body as an example, the first 2D coordinates include the coordinates obtained by mapping the contour points of the chest surface of the first 3D model back to 2D space, and the contour points of the chest surface of the second 3D model obtained by applying the augmentation to the first 3D model are mapped back to 2D space to obtain the second 2D coordinates. The second 2D deformation parameters are obtained by comparing the 2D coordinates of the two chests.
In some embodiments, when the change of a certain local part involves 2D coordinate transformations of multiple contour points, the first 2D coordinates and the corresponding second 2D coordinates can be fitted to obtain a transformation matrix for converting the first 2D coordinates into the second 2D coordinates; this transformation matrix can be directly used as the second 2D deformation parameters for deforming the target in the first 2D image, and the target in the 2D image is deformed based on the second 2D deformation parameters, thereby obtaining the second 2D image containing the deformed target.
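One plausible way to fit such a transformation matrix is an ordinary least-squares affine fit over the contour points of the local part; this is a sketch under the assumption that an affine model is adequate for the local change, not a fitting method fixed by this disclosure:

```python
import numpy as np

def fit_affine_2d(first_2d, second_2d):
    """Fit a 2x3 affine matrix A minimizing ||A @ [x, y, 1] - x'||^2 over
    all contour point pairs. first_2d, second_2d: (N, 2) arrays."""
    n = first_2d.shape[0]
    homogeneous = np.hstack([first_2d, np.ones((n, 1))])   # (N, 3)
    # Solve homogeneous @ A^T = second_2d in the least-squares sense.
    a_t, *_ = np.linalg.lstsq(homogeneous, second_2d, rcond=None)
    return a_t.T                                           # (2, 3)

# Usage: map any first 2D coordinate to its deformed position.
# A = fit_affine_2d(first_coords, second_coords)
# moved = A @ np.array([x, y, 1.0])
```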
In some optional embodiments of the present disclosure, the second 2D deformation parameters further include at least one of the following parameters: a recombination algorithm or recombination parameters of the pixels in the target, a color change algorithm of the pixels in the target, and the like.
Exemplarily, the deformation of the target in the 2D image includes, but is not limited to, at least one of the following: fat/thin deformation of the target; tall/short deformation of the target; shape deformation of the target's facial features.
For example, taking a human body as the target, the deformation of the target may be, for example, fat/thin deformation of the human body, tall/short deformation of the human body, deformation of the facial features of the human body, a change in the hand length or foot length of the human body, and so on.
In this embodiment, when the target is deformed in the 2D plane, the image is no longer deformed directly in the 2D plane; instead, the target in the first 2D image is converted into a first 3D model in 3D space. During target deformation, 3D deformation parameters are acquired and the first 3D model is deformed directly to obtain a deformed second 3D model; the first 3D model and the second 3D model are then mapped to 2D space respectively to obtain the first 2D coordinates and the second 2D coordinates mapped back into the 2D plane, and the target in the first 2D image is deformed based on the first 2D coordinates and the second 2D coordinates. In this way, compared with deforming the target directly in the 2D plane, unnecessary deformation can be reduced and the deformation effect of the 2D image is improved.
In some optional embodiments, in step S110, a first 3D model of the target composed of multiple polygon meshes in 3D space may be obtained based on the first 2D image containing the target.
Exemplarily, the first 3D model is a solid model in 3D space, and it includes numerous key points; connecting several of these key points forms a polygon mesh. A polygon mesh, also called a mesh, is a data structure used in computer graphics to model various irregular objects, and the triangle mesh (also called a triangle face) is one type of polygon mesh. Three adjacent key points among the numerous key points can be connected into a triangle mesh, yielding a first 3D model composed of multiple triangle meshes.
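A triangle mesh of this kind can be represented compactly as a vertex array plus a face-index array; the tiny layout below is purely illustrative and is not a storage format prescribed here:

```python
import numpy as np

# Vertices: one 3D coordinate per key point (a minimal 4-vertex example).
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
], dtype=np.float64)

# Triangle faces: each row holds the indices of three adjacent key points.
faces = np.array([
    [0, 1, 2],
    [0, 1, 3],
    [0, 2, 3],
    [1, 2, 3],
], dtype=np.int64)

# Deforming the model (step S120) only moves entries of `vertices`; the
# connectivity in `faces` stays fixed, which keeps the surface consistent.
```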
A first 3D model composed of meshes in this way can realistically simulate the captured object corresponding to the target in 3D space, achieving a highly faithful restoration of the captured object in 3D space so as to ensure the deformation effect of the image.
In some embodiments, in the process of obtaining the first 3D model of the target in 3D space based on the first 2D image, i.e., in step S110, first reconstruction parameters for reconstructing the 3D model of the target can be extracted from the first 2D image through a Human Mesh Recovery (HMR) model, and the first 3D model of the target in 3D space is reconstructed using the extracted first reconstruction parameters. The first reconstruction parameters may include at least one of the following parameters: first joint point parameters of the target, first morphological parameters of the target, camera parameters of the first 2D image, and so on. Exemplarily, the first 3D model can be accurately built based on the camera parameters of the 2D image, the first morphological parameters of the target, and the first joint point parameters of the target.
In some embodiments, the first joint point parameters include the 3D coordinates of the first joint points, etc. The target may contain many joint points; some of them may not be used when forming the first 3D model, and the joint points that are used in forming the first 3D model are called first joint points. For example, a human body contains many joints, and the key points of these joints are called joint points. When constructing the first 3D model, some joint points that are not very important to the appearance of the human body can be ignored; for example, the joint points corresponding to the finger-bending joints of the hand may not be very important and can be ignored.
In some embodiments, the first morphological parameters of the target may include various parameters indicating the dimensions of the target in different respects, for example, parameters indicating the target's height, fatness/thinness, and the sizes of different local parts; the sizes of different local parts may include, for example, morphological parameters such as waist circumference, chest circumference, hip circumference, or face length.
In some embodiments, the camera parameters of the 2D image (for example, the camera parameters of the first 2D image) may include intrinsic parameters of the camera that captured the 2D image. The intrinsics include, but are not limited to, the focal length, the width dx of a single pixel of the 2D image in the world coordinate system, and the height dy of a single pixel of the 2D image in the world coordinate system.
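These intrinsics can be packed into the usual pinhole intrinsics matrix; the construction below is the standard one, with the principal point assumed to lie at the image center since the text does not specify it:

```python
import numpy as np

def intrinsics_matrix(focal_length, dx, dy, image_width, image_height):
    """Build K from the focal length (world units) and per-pixel sizes
    dx, dy (world units per pixel); fx, fy convert to pixel units."""
    fx = focal_length / dx
    fy = focal_length / dy
    cx = image_width / 2.0    # assumed principal point
    cy = image_height / 2.0
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])
```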
Exemplarily, taking a human body as the target, after the first 2D image containing the human body is input into the HMR model, the HMR model computes the parameters of the key points of the human body in the first 2D image. For example, these parameters include the 3D coordinates of the joint points corresponding to 24 joints on the human skeleton, as well as the morphological parameters of the human body. In addition, the HMR model also outputs the camera parameters of the camera that captured the first 2D image; the camera parameters may include, for example, the focal length and the optical center coordinates.
Further, the skeleton of the first 3D model can be reconstructed using a parametric human body model together with the above parameters output by the HMR model, and the skeleton is then rendered. The parametric human body model may be, for example, the Skinned Multi-Person Linear (SMPL) model; SMPL is a skinned, vertex-based 3D human body model that can accurately represent different shapes and poses of the human body.
Exemplarily, the HMR model outputs the coordinates of the joint points corresponding to the joints on the skeleton in the first 2D image. The shape of a human body, however, depends not only on the skeleton but also on the characteristics of muscles and other tissues and organs. Using the SMPL model, with the 3D coordinates of the skeleton obtained by the HMR model and the morphological parameters of the human body extracted from the first 2D image, the muscle motion and tissue distribution around the skeleton can be simulated, so that the skeleton is rendered to obtain the first 3D model. A first 3D model obtained in this way can realistically reflect the various characteristics of the target in the first 2D image.
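A hedged sketch of this HMR-to-SMPL handoff follows, using the open-source smplx package as one possible SMPL implementation; hmr_predict is a hypothetical stand-in for whatever HMR network is used, and the tensor shapes follow SMPL's convention of 10 shape coefficients and 23 body joints:

```python
import smplx

def rebuild_first_3d_model(image_tensor, hmr_predict, smpl_model_path):
    """hmr_predict is a hypothetical HMR wrapper returning SMPL-style
    shape (betas), pose, and camera parameters for the input image."""
    betas, body_pose, global_orient, camera = hmr_predict(image_tensor)

    smpl = smplx.create(smpl_model_path, model_type='smpl')
    output = smpl(betas=betas,                  # (1, 10) shape coefficients
                  body_pose=body_pose,          # (1, 69): 23 joints x 3 axis-angle
                  global_orient=global_orient)  # (1, 3) root orientation
    vertices = output.vertices[0]               # (6890, 3) surface points
    faces = smpl.faces                          # fixed triangle connectivity
    return vertices, faces, camera
```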
FIG. 5A is a schematic diagram of an acquired original first 2D image; FIG. 5B is a schematic diagram of the first joint points extracted from the human body shown in FIG. 5A, represented by solid dots distributed over the human body.
In some embodiments, the method further includes: extracting second joint point parameters of the target from the first 2D image. The second joint point parameters and the first joint point parameters are both parameters of joint points within the human body, but they are extracted in different ways; or the first joint point parameters and the second joint point parameters differ in accuracy; or the first joint point parameters and the second joint point parameters differ in type.
For example, the first joint point parameters include the 3D coordinates of the first joint points, while the second joint point parameters contain the 2D coordinates of those joint points.
For another example, the first joint point parameters are extracted using the HMR model, while the second joint points are extracted using a human body detection model. Given the input first 2D image, the human body detection model can obtain the 2D coordinates of the joint points on the human skeleton.
In some optional embodiments, the second joint point parameters extracted from the 2D image alone may be more accurate than the first joint point parameters included in the first reconstruction parameters. In view of this, the method may further include: extracting second joint point parameters of the target from the first 2D image through a human body detection model (for example, the OpenPose model), where the second joint points represented by the second joint point parameters overlap with some of the first joint points represented by the first joint point parameters.
Reconstructing the first 3D model of the target in 3D space using the extracted first reconstruction parameters includes: replacing, in the first reconstruction parameters, the first joint point parameters of the first joint points that overlap with the second joint points with the second joint point parameters, to form second reconstruction parameters; and reconstructing the first 3D model of the target in 3D space based on the second reconstruction parameters.
It can be understood that, in the second reconstruction parameters relative to the first reconstruction parameters, some of the first joint point parameters are replaced by the second joint point parameters.
For example, there are M first joint point parameters and N second joint point parameters, with N less than or equal to M. Exemplarily, M may be 25 and N may be 14. Among the 25 first joint point parameters, those pointing to the same joint points as the 14 second joint point parameters are replaced by the second joint point parameters. Since the second joint point parameters are 2D coordinates, when a first joint point parameter is replaced, what is replaced is the 2D part included in the 3D coordinate, and the remaining component of the 3D coordinate may stay unchanged. That is, the first joint point parameters of the first joint points that overlap with the second joint points are replaced by the second joint point parameters to form the second reconstruction parameters. For example, a 3D coordinate includes three coordinates corresponding to the x-axis, y-axis, and z-axis, while a 2D coordinate contains two coordinates corresponding to the x-axis and y-axis; during replacement, if a certain 3D coordinate and a 2D coordinate point to the same joint point, the 2D coordinate replaces the coordinate values on the x-axis and y-axis of the 3D coordinate, yielding the aforementioned second reconstruction parameters.
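The replacement step reduces to overwriting the x and y components of the matched joints; in the sketch below, the joint-index correspondence matches is assumed to be known from the two skeleton definitions:

```python
import numpy as np

def merge_joint_params(first_joints_3d, second_joints_2d, matches):
    """first_joints_3d: (25, 3) HMR joint coordinates.
    second_joints_2d: (14, 2) detector joint coordinates.
    matches: list of (i_first, i_second) pairs for overlapping joints."""
    merged = first_joints_3d.copy()
    for i_first, i_second in matches:
        # Replace only x and y; keep the original z component.
        merged[i_first, :2] = second_joints_2d[i_second]
    return merged  # the joint part of the second reconstruction parameters
```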
Taking a human body as the target, FIG. 2 shows a human skeleton with 14 second joint points, and FIG. 3 shows a human skeleton with 25 first joint points, where the spine-root joint point in FIG. 3 is the center point of the left hip joint point and the right hip joint point and can therefore be ignored. It can be seen that the 14 joint points in FIG. 2 are contained within the 25 joint points in FIG. 3.
The joint points in FIG. 2 include: joint point 1 is the head joint point; joint point 2 is the neck joint point; joint point 4 is the left shoulder joint point; joint point 3 is the right shoulder joint point; joint point 6 is the left elbow joint point; joint point 5 is the right elbow joint point; joint point 8 is the left wrist joint point; joint point 7 is the right wrist joint point; joint point 10 is the left hip joint point; joint point 9 is the right hip joint point; joint point 12 is the left knee joint point; joint point 11 is the right knee joint point; joint point 13 is the left ankle joint point; joint point 14 is the right ankle joint point.
Comparing FIG. 2 and FIG. 3, FIG. 3 adds the following joint points relative to FIG. 2: the spine-root joint point, the spine-shoulder joint point, the mid-spine joint point, the left hand tip joint point, the right hand tip joint point, the left thumb joint point, the right thumb joint point, the left hand joint point, the right hand joint point, the left foot joint point, and the right foot joint point.
Since the statistical accuracy of the second joint point parameters is higher than that of the first joint point parameters, in order to improve the modeling accuracy of the first 3D model, the second joint point parameters can directly replace the corresponding parts of the first joint point parameters, and the first 3D model is then built based on the replaced second reconstruction parameters.
In some embodiments, step S130 may include: mapping the first 3D model and the second 3D model to 2D space respectively according to the camera parameters corresponding to the first 2D image, to obtain the first 2D coordinates and the second 2D coordinates.
It can be understood that the camera parameters are camera intrinsics. The camera parameters may include the lengths of the first 2D image in two mutually perpendicular directions in the world coordinate system, and may also include parameters such as the focal length. From the lengths of the first 2D image in two mutually perpendicular directions in the world coordinate system, the size of the projection plane onto which the first 3D model and the second 3D model are projected can be determined; and based on the focal length, the projected area of the first 3D model and the second 3D model on the projection plane can be determined. In this way, the mapping of the first 3D model into 2D space and the mapping of the second 3D model into 2D space can be achieved.
After the first 3D model is mapped into 2D space, a mapped projection is obtained, and the coordinates of this mapped projection are the first 2D coordinates. Similarly, after the second 3D model is mapped into 2D space, a mapped projection is obtained, and the coordinates of this mapped projection are the second 2D coordinates.
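Both mappings are the same pinhole projection applied to the two vertex sets. The sketch below uses the intrinsics matrix K built earlier and assumes the vertices are already expressed in the camera coordinate frame with positive depth:

```python
import numpy as np

def project_vertices(vertices, K):
    """vertices: (N, 3) model points in camera coordinates.
    K: (3, 3) intrinsics. Returns (N, 2) pixel coordinates."""
    projected = vertices @ K.T                   # (N, 3) homogeneous image points
    return projected[:, :2] / projected[:, 2:3]  # perspective divide by depth

# first_2d_coords  = project_vertices(first_model_vertices, K)
# second_2d_coords = project_vertices(second_model_vertices, K)
```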
In some embodiments, as shown in FIG. 4, the above method further includes:
Step S201: acquiring contour point parameters of the target in the first 2D image;
Step S202: determining, based on the contour point parameters, a connection relationship between at least two contour points and the deformation direction represented by the connection relationship.
In that case, the above S140 — deforming the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates to obtain a second 2D image containing the deformed target — includes: deforming the target in the first 2D image along the deformation direction based on the first 2D coordinates and the second 2D coordinates, to obtain the second 2D image containing the deformed target.
In this embodiment, a contour point model capable of extracting the contour points of the target in the first 2D image can be used to extract the contour point parameters of the target in the first 2D image; the contour point parameters include, but are not limited to, contour point coordinates. FIG. 5C is a schematic diagram of contour points extracted from the human body in FIG. 5A.
In the process of deforming the target based on the first 2D coordinates and the second 2D coordinates so that the first 2D image is deformed into the second 2D image, the operation of transforming the first 2D coordinates into the second 2D coordinates is involved. In practice, there are many ways to change the first 2D coordinates into the second 2D coordinates, but some transformation manners can make the deformed target appear chaotic. To reduce this phenomenon, in this embodiment the contour point parameters of the target in the first 2D image are acquired, two predetermined contour points are connected based on the contour point parameters, and the deformation direction in which the first 2D coordinates corresponding to a certain part of the target are transformed into the second 2D coordinates is obtained.
In step S140, when the target is deformed, the deformation can be performed along the deformation direction determined in step S202, rather than in an arbitrary manner that merely turns the first 2D coordinates into the second 2D coordinates; deforming according to this deformation manner further improves the deformation effect.
In some optional embodiments, step S202 may include: determining, based on the contour point parameters, the deformation direction according to a first connection direction of contour points of at least two symmetrically distributed local parts of the target, where the deformation direction includes at least one of the following directions: a first deformation direction parallel to the first connection direction; a second deformation direction perpendicular to the first connection direction.
Exemplarily, two symmetrically distributed local parts of the human body include at least one of the following: the left shoulder and right shoulder of the human body; the left leg and right leg of the human body; the left chest and right chest of the human body; the left hip and right hip of the human body; and so on.
Exemplarily, two pairs of symmetrically distributed local parts of the human body are taken as four predetermined local parts: with the left shoulder and right shoulder of the human body as one group, the first connection direction for deforming the front of the human body is determined; and according to the left hip and right hip of the human body, the first connection direction for deforming the back of the human body is determined.
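Computing such a connection direction and its perpendicular from a symmetric pair of contour points is a small vector operation; the point selection below is an assumed input from the contour model's keypoint layout:

```python
import numpy as np

def deformation_directions(left_point, right_point):
    """left_point, right_point: (2,) 2D contour points of a symmetric pair,
    e.g. the left and right shoulder. Returns unit vectors for the first
    deformation direction (parallel) and the second (perpendicular)."""
    connection = np.asarray(right_point, float) - np.asarray(left_point, float)
    parallel = connection / np.linalg.norm(connection)
    perpendicular = np.array([-parallel[1], parallel[0]])  # rotate 90 degrees
    return parallel, perpendicular
```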
In other embodiments, step S202 may further include: determining, based on the contour point parameters, the deformation direction according to a second connection direction of at least two contour points symmetrically distributed about a center point or center line of a predetermined local part of the target, where the deformation direction includes: a third deformation direction parallel to the second connection direction, and/or a fourth deformation direction perpendicular to the second connection direction.
For example, taking a human body as an example, the predetermined local part may include at least one of the following: the left leg, the right leg, the left arm, the right arm, and so on.
For example, the connection direction of the contour points on the two sides of the left leg is a second connection direction, and the connection direction of the contour points on the two sides of the right leg is a second connection direction.
In some optional embodiments of the present disclosure, the deformation directions include at least the following two broad categories: one category adjusts the fatness/thinness of the target, for example, the first deformation direction and/or the third deformation direction; the other category adjusts the height of the target, for example, the second deformation direction and/or the fourth deformation direction. Here, the second deformation direction is perpendicular to the first deformation direction, and the fourth deformation direction is perpendicular to the third deformation direction.
Exemplarily, FIG. 5D shows the human body of FIG. 5A with solid horizontal lines added; these solid horizontal lines correspond to the first deformation direction and/or the third deformation direction. The vertical dashed lines overlaid on the human body in FIG. 5E are perpendicular to the solid horizontal lines in FIG. 5D and correspond to the second deformation direction and/or the fourth deformation direction.
In some optional embodiments, step S140 may include: determining second 2D deformation parameters for moving the first 2D coordinates along the deformation direction to the second 2D coordinates; and deforming the target in the first 2D image based on the second 2D deformation parameters, to obtain the second 2D image containing the deformed target.
For example, a first 2D coordinate and a second 2D coordinate correspond to the same position of the target or the same joint point of the target, and the movement between them follows the deformation direction; a deformation algorithm (deformation function) for deforming the corresponding position of the target in the 2D image can thereby be fitted, yielding the 2D deformation parameters. Finally, in step S140, the target in the 2D image is deformed according to the 2D deformation parameters.
In some embodiments, step S120 may include: changing coordinates of at least some contour points of the first 3D model based on the 3D deformation parameters, to obtain the second 3D model. By changing one or more 3D coordinates of the contour points on the surface of the first 3D model, a new 3D model is obtained, and this new 3D model is called the second 3D model in the embodiments of the present disclosure.
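In the mesh representation sketched earlier, this step is just an indexed vertex update; the offsets argument below stands for displacements derived from the 3D deformation parameters (for example, the waist displacements computed above):

```python
import numpy as np

def deform_model(vertices, point_indices, offsets):
    """vertices: (N, 3) surface points of the first 3D model.
    point_indices: indices of the contour points to move.
    offsets: (len(point_indices), 3) displacements taken from the
    3D deformation parameters. Returns the second 3D model's vertices."""
    second_vertices = vertices.copy()
    second_vertices[point_indices] += offsets
    return second_vertices
```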
An embodiment of the present disclosure also provides an image processing apparatus. As shown in FIG. 7, the image processing apparatus includes:
a first acquisition module 110 configured to obtain a first 3D model of a target in three-dimensional 3D space based on a first two-dimensional 2D image containing the target;
a second acquisition module 120 configured to acquire 3D deformation parameters, and transform the first 3D model into a second 3D model based on the 3D deformation parameters;
a mapping module 130 configured to map the first 3D model to 2D space to obtain first 2D coordinates, and map the second 3D model to 2D space to obtain second 2D coordinates;
a deformation module 140 configured to deform the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates, to obtain a second 2D image containing the deformed target.
In some embodiments, the first acquisition module 110, the second acquisition module 120, the mapping module 130, and the deformation module 140 may all be program modules which, after being executed by a processor, can implement the aforementioned functions.
In other embodiments, the first acquisition module 110, the second acquisition module 120, the mapping module 130, and the deformation module 140 may all be combined software-hardware modules; the combined software-hardware modules include, but are not limited to, programmable arrays, and the programmable arrays include, but are not limited to, complex programmable arrays or field programmable arrays.
In still other embodiments, the first acquisition module 110, the second acquisition module 120, the mapping module 130, and the deformation module 140 may all be pure hardware modules, and the pure hardware modules include, but are not limited to, application-specific integrated circuits.
In some embodiments, the first acquisition module 110 is configured to extract, from the first 2D image through a human mesh recovery model, first reconstruction parameters for reconstructing a 3D model of the target, and reconstruct the first 3D model of the target in 3D space using the extracted first reconstruction parameters.
In some embodiments, the first reconstruction parameters include at least one of the following parameters: first joint point parameters of the target; first morphological parameters of the target; camera parameters of the first 2D image.
In some embodiments, the apparatus further includes:
an extraction module configured to extract second joint point parameters of the target from the first 2D image through a human body detection model, where the second joint points represented by the second joint point parameters overlap with some of the first joint points represented by the first joint point parameters;
the first acquisition module 110 is configured to replace, in the first reconstruction parameters, the first joint point parameters of the first joint points that overlap with the second joint points with the second joint point parameters to form second reconstruction parameters, and reconstruct the first 3D model of the target in 3D space based on the second reconstruction parameters.
In some embodiments, the mapping module 130 is configured to map the first 3D model and the second 3D model to 2D space respectively according to the camera parameters corresponding to the first 2D image, to obtain the first 2D coordinates and the second 2D coordinates.
In some embodiments, the apparatus further includes:
a third acquisition module configured to acquire 2D deformation parameters of the first 2D image;
the second acquisition module 120 is configured to obtain the 3D deformation parameters according to the first 2D deformation parameters and the mapping relationship between 2D space and 3D space.
In some embodiments, the apparatus further includes:
a fourth acquisition module configured to acquire contour point parameters of the target in the first 2D image;
a determination module configured to determine, based on the contour point parameters, a connection relationship between at least two contour points and a deformation direction represented by the connection relationship;
the deformation module 140 is configured to deform the target in the first 2D image along the deformation direction based on the first 2D coordinates and the second 2D coordinates, to obtain the second 2D image containing the deformed target.
In some embodiments, the determination module is configured to determine, based on the contour point parameters, the deformation direction according to a first connection direction of contour points of at least two symmetrically distributed local parts of the target, where the deformation direction includes at least one of the following directions: a first deformation direction parallel to the first connection direction; a second deformation direction perpendicular to the first connection direction.
In some embodiments, the determination module is configured to determine, based on the contour point parameters, the deformation direction according to a second connection direction of at least two contour points symmetrically distributed about a center point or center line of a predetermined local part of the target, where the deformation direction includes: a third deformation direction parallel to the second connection direction, and/or a fourth deformation direction perpendicular to the second connection direction.
In some embodiments, the deformation module 140 is configured to determine second 2D deformation parameters for moving the first 2D coordinates along the deformation direction to the second 2D coordinates, and deform the target in the first 2D image based on the second 2D deformation parameters, to obtain the second 2D image containing the deformed target.
In some embodiments, the second acquisition module 120 is configured to change coordinates of at least some contour points of the first 3D model based on the 3D deformation parameters, to obtain the second 3D model.
A specific example is provided below in connection with any of the above embodiments.
This example provides an image processing method, including the following.
The OpenPose model is used to extract the joint point parameters of N joints of the human body from a single first 2D image, where N is a positive integer.
The first 2D image is input into the HMR model, and the HMR model outputs the extracted first reconstruction parameters. The first reconstruction parameters may include the 3D coordinates of M1 joint points in the human body (excluding the spine-root joint point), so M1×3 parameters can be obtained; at the same time, the HMR model also obtains M2 camera intrinsics of the camera of the first 2D image. The M2 camera intrinsics may include: the focal length when the first 2D image was captured, and the width and height of a single pixel of the 2D image in the world coordinate system. Thus, first reconstruction parameters consisting of M1×3+M2 parameters can be obtained through the HMR model. M1 and M2 are both positive integers.
Among the M1×3+M2 parameters, N1 joint point parameters are replaced with the N joint point parameters provided by the OpenPose model, forming updated first reconstruction parameters, i.e., the aforementioned second reconstruction parameters. N1 is a positive integer.
A 3D model is then reconstructed based on the second reconstruction parameters using the SMPL model and rendered, yielding the aforementioned first 3D model. The first 3D model is composed of X joint points and more than Y×10,000 triangle faces formed based on these joint points. X and Y are both positive integers.
2D deformation parameters are received from the human-computer interaction interface or from another device and converted into 3D deformation parameters; the first 3D model is then adjusted based on the 3D deformation parameters to generate the second 3D model transformed from the first 3D model.
After the second 3D model is obtained, the two 3D models are mapped into the 2D plane respectively according to the camera parameters, yielding the first 2D coordinates and the second 2D coordinates corresponding to the two models, where each first 2D coordinate and each second 2D coordinate corresponds to one joint point of the human body.
A human body contour detection model is used to detect the contour points of the human body in the first 2D image to obtain the coordinates of the contour points, and the deformation direction is determined according to the local part to be deformed and the deformation requirements.
Exemplarily, the number of contour points may be Z, where Z is a positive integer.
For example, when performing fat/thin deformation of the human body, the contour points of the human body are connected horizontally into lines to obtain the corresponding connection directions; the left shoulder contour point and the right shoulder contour point give the first connection direction. When performing size deformation of the shoulders, the 2D deformation parameters for shoulder deformation (for example, a 2D deformation algorithm) are fitted from the first 2D coordinates of the shoulders, the first connection direction, and the second 2D coordinates of the shoulders, so that the shoulders of the target are subsequently deformed according to these 2D deformation parameters.
Specifically, from the contour points of the corresponding part in the first 2D coordinates and the second 2D coordinates, the direction of the line connecting the two endpoints of that part is taken as the vector direction of a first group of vectors; a second group of vectors is also introduced, whose vector direction is perpendicular to that of the first group. If the vector direction of the first group is consistent with the direction in which the human body becomes fatter or thinner, the fat/thin deformation of the human body is performed along the vector direction of the first group. Meanwhile, if the height of the human body needs to be deformed, the tall/short deformation is performed along the vector direction of the second group.
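A hedged sketch of this decomposition: each point's displacement is split onto the two vector groups, and only the component matching the requested deformation is kept. The keep/drop policy here is an illustrative reading of this example rather than a prescribed formula:

```python
import numpy as np

def constrain_displacement(start, end, parallel_dir, mode):
    """start, end: (N, 2) first and second 2D coordinates of one part.
    parallel_dir: unit vector of the first vector group (fat/thin axis).
    mode: 'fat_thin' keeps the parallel component, 'tall_short' keeps
    the perpendicular one. Returns the constrained target coordinates."""
    perpendicular_dir = np.array([-parallel_dir[1], parallel_dir[0]])
    displacement = end - start
    along = displacement @ parallel_dir        # (N,) parallel components
    across = displacement @ perpendicular_dir  # (N,) perpendicular components
    if mode == 'fat_thin':
        kept = np.outer(along, parallel_dir)
    else:  # 'tall_short'
        kept = np.outer(across, perpendicular_dir)
    return start + kept
```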
For example, widening the shoulders can be done directly by stretching the shoulders in the first 2D image; if the shoulders are narrowed, then after the shoulder-narrowing processing in the first 2D image, the background outside the human body (i.e., the target) is simultaneously stretched to fill the blank left by the narrowed shoulders, thereby obtaining the deformed second 2D image.
As shown in FIG. 8, an embodiment of the present disclosure also provides an image processing device, including:
a memory for storing computer-executable instructions;
a processor connected to the memory and configured to implement, by executing the computer-executable instructions stored on the memory, the image processing method provided by one or more of the foregoing technical solutions, for example, at least one of the image processing methods shown in FIG. 1 and/or FIG. 4.
The memory may be various types of memory, such as random access memory, read-only memory, or flash memory. The memory can be used for information storage, for example, storing computer-executable instructions. The computer-executable instructions may be various program instructions, for example, target program instructions and/or source program instructions.
The processor may be various types of processors, for example, a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application-specific integrated circuit, or an image processor.
The processor can be connected to the memory via a bus, and the bus may be an integrated circuit bus or the like.
In some embodiments, the terminal device may further include a communication interface, which may include a network interface (for example, a local area network interface), a transceiver antenna, and the like. The communication interface is likewise connected to the processor and can be used to send and receive information.
In some embodiments, the terminal device further includes a human-computer interaction interface; for example, the human-computer interaction interface may include various input and output devices, such as a keyboard and a touch screen.
In some embodiments, the image processing device further includes a display, which can display various prompts, collected face images, and/or various interfaces.
An embodiment of the present disclosure provides a computer storage medium storing computer-executable instructions; after the computer-executable instructions are executed, the image processing method provided by one or more of the foregoing technical solutions can be implemented, for example, at least one of the image processing methods shown in FIG. 1 and/or FIG. 4.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other division manners in actual implementation, for example: multiple units or components may be combined, or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may all be integrated into one processing module, or each unit may separately serve as a unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
A person of ordinary skill in the art can understand that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium, and when executed, the program performs the steps of the above method embodiments; the aforementioned storage medium includes various media that can store program code, such as removable storage devices, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), magnetic disks, or optical discs.
The methods disclosed in the several method embodiments provided in this application can be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the several product embodiments provided in this application can be combined arbitrarily without conflict to obtain new product embodiments.
The features disclosed in the several method or device embodiments provided in this application can be combined arbitrarily without conflict to obtain new method embodiments or device embodiments.
The above are only optional implementations of the present disclosure, but the protection scope of the embodiments of the present disclosure is not limited thereto. Any person skilled in the art can easily think of changes or substitutions within the technical scope disclosed by the present disclosure, and these should all be covered within the protection scope of the embodiments of the present disclosure. Therefore, the protection scope of the embodiments of the present disclosure shall be subject to the protection scope of the claims.

Claims (24)

  1. An image processing method, comprising:
    obtaining a first 3D model of a target in three-dimensional 3D space based on a first two-dimensional 2D image containing the target;
    acquiring 3D deformation parameters, and transforming the first 3D model into a second 3D model based on the 3D deformation parameters;
    mapping the first 3D model to 2D space to obtain first 2D coordinates, and mapping the second 3D model to 2D space to obtain second 2D coordinates;
    deforming the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates, to obtain a second 2D image containing the deformed target.
  2. The method according to claim 1, wherein obtaining the first 3D model of the target in 3D space based on the first 2D image containing the target comprises:
    extracting, from the first 2D image through a human mesh recovery model, first reconstruction parameters for reconstructing a 3D model of the target;
    reconstructing the first 3D model of the target in 3D space using the extracted first reconstruction parameters.
  3. The method according to claim 2, wherein the first reconstruction parameters comprise at least one of the following parameters: first joint point parameters of the target; first morphological parameters of the target; camera parameters of the first 2D image.
  4. The method according to claim 3, wherein the method further comprises:
    extracting second joint point parameters of the target from the first 2D image through a human body detection model, wherein the second joint points represented by the second joint point parameters overlap with some of the first joint points represented by the first joint point parameters;
    wherein reconstructing the first 3D model of the target in 3D space using the extracted first reconstruction parameters comprises:
    replacing, in the first reconstruction parameters, the first joint point parameters of the first joint points that overlap with the second joint points with the second joint point parameters, to form second reconstruction parameters;
    reconstructing the first 3D model of the target in 3D space based on the second reconstruction parameters.
  5. The method according to any one of claims 1 to 4, wherein mapping the first 3D model to 2D space to obtain the first 2D coordinates and mapping the second 3D model to 2D space to obtain the second 2D coordinates comprises:
    mapping the first 3D model and the second 3D model to 2D space respectively according to the camera parameters corresponding to the first 2D image, to obtain the first 2D coordinates and the second 2D coordinates.
  6. The method according to any one of claims 1 to 4, wherein the method further comprises:
    acquiring first 2D deformation parameters of the first 2D image;
    wherein acquiring the 3D deformation parameters comprises:
    obtaining the 3D deformation parameters according to the first 2D deformation parameters and a mapping relationship between 2D space and 3D space.
  7. The method according to any one of claims 1 to 6, wherein the method further comprises:
    acquiring contour point parameters of the target in the first 2D image;
    determining, based on the contour point parameters, a connection relationship between at least two contour points and a deformation direction represented by the connection relationship;
    wherein deforming the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates to obtain the second 2D image containing the deformed target comprises:
    deforming the target in the first 2D image along the deformation direction based on the first 2D coordinates and the second 2D coordinates, to obtain the second 2D image containing the deformed target.
  8. The method according to claim 7, wherein determining, based on the contour point parameters, the connection relationship between at least two contour points and the deformation direction represented by the connection relationship comprises:
    determining, based on the contour point parameters, the deformation direction according to a first connection direction of contour points of at least two symmetrically distributed local parts of the target; wherein the deformation direction comprises at least one of the following directions: a first deformation direction parallel to the first connection direction; a second deformation direction perpendicular to the first connection direction.
  9. The method according to claim 7 or 8, wherein determining, based on the contour point parameters, the connection relationship between at least two contour points and the deformation direction represented by the connection relationship comprises:
    determining, based on the contour point parameters, the deformation direction according to a second connection direction of at least two contour points symmetrically distributed about a center point or a center line of a predetermined local part of the target, wherein the deformation direction comprises: a third deformation direction parallel to the second connection direction, and/or a fourth deformation direction perpendicular to the second connection direction.
  10. The method according to any one of claims 6 to 8, wherein deforming the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates to obtain the second 2D image containing the deformed target comprises:
    determining second 2D deformation parameters for moving the first 2D coordinates along the deformation direction to the second 2D coordinates;
    deforming the target in the first 2D image based on the second 2D deformation parameters, to obtain the second 2D image containing the deformed target.
  11. The method according to any one of claims 1 to 10, wherein transforming the first 3D model into the second 3D model based on the 3D deformation parameters comprises:
    changing coordinates of at least some contour points of the first 3D model based on the 3D deformation parameters, to obtain the second 3D model.
  12. An image processing apparatus, comprising:
    a first acquisition module configured to obtain a first 3D model of a target in three-dimensional 3D space based on a first two-dimensional 2D image containing the target;
    a second acquisition module configured to acquire 3D deformation parameters, and transform the first 3D model into a second 3D model based on the 3D deformation parameters;
    a mapping module configured to map the first 3D model to 2D space to obtain first 2D coordinates, and map the second 3D model to 2D space to obtain second 2D coordinates;
    a deformation module configured to deform the target in the first 2D image based on the first 2D coordinates and the second 2D coordinates, to obtain a second 2D image containing the deformed target.
  13. The apparatus according to claim 12, wherein the first acquisition module is configured to extract, from the first 2D image through a human mesh recovery model, first reconstruction parameters for reconstructing a 3D model of the target, and reconstruct the first 3D model of the target in 3D space using the extracted first reconstruction parameters.
  14. The apparatus according to claim 13, wherein the first reconstruction parameters comprise at least one of the following parameters:
    first joint point parameters of the target; first morphological parameters of the target; camera parameters of the first 2D image.
  15. The apparatus according to claim 14, wherein the apparatus further comprises:
    an extraction module configured to extract second joint point parameters of the target from the first 2D image through a human body detection model, wherein the second joint points represented by the second joint point parameters overlap with some of the first joint points represented by the first joint point parameters;
    the first acquisition module is configured to replace, in the first reconstruction parameters, the first joint point parameters of the first joint points that overlap with the second joint points with the second joint point parameters to form second reconstruction parameters, and reconstruct the first 3D model of the target in 3D space based on the second reconstruction parameters.
  16. The apparatus according to any one of claims 12 to 15, wherein the mapping module is configured to map the first 3D model and the second 3D model to 2D space respectively according to the camera parameters corresponding to the first 2D image, to obtain the first 2D coordinates and the second 2D coordinates.
  17. The apparatus according to any one of claims 12 to 16, wherein the apparatus further comprises: a third acquisition module configured to acquire 2D deformation parameters of the first 2D image;
    the second acquisition module is configured to obtain the 3D deformation parameters according to the first 2D deformation parameters and a mapping relationship between 2D space and 3D space.
  18. The apparatus according to any one of claims 12 to 17, wherein the apparatus further comprises: a fourth acquisition module configured to acquire contour point parameters of the target in the first 2D image;
    a determination module configured to determine, based on the contour point parameters, a connection relationship between at least two contour points and a deformation direction represented by the connection relationship;
    the deformation module is configured to deform the target in the first 2D image along the deformation direction based on the first 2D coordinates and the second 2D coordinates, to obtain the second 2D image containing the deformed target.
  19. The apparatus according to claim 18, wherein the determination module is configured to determine, based on the contour point parameters, the deformation direction according to a first connection direction of contour points of at least two symmetrically distributed local parts of the target; wherein the deformation direction comprises at least one of the following directions: a first deformation direction parallel to the first connection direction; a second deformation direction perpendicular to the first connection direction.
  20. The apparatus according to claim 18 or 19, wherein the determination module is configured to determine, based on the contour point parameters, the deformation direction according to a second connection direction of at least two contour points symmetrically distributed about a center point or a center line of a predetermined local part of the target, wherein the deformation direction comprises: a third deformation direction parallel to the second connection direction, and/or a fourth deformation direction perpendicular to the second connection direction.
  21. The apparatus according to any one of claims 18 to 20, wherein the deformation module is configured to determine second 2D deformation parameters for moving the first 2D coordinates along the deformation direction to the second 2D coordinates, and deform the target in the first 2D image based on the second 2D deformation parameters, to obtain the second 2D image containing the deformed target.
  22. The apparatus according to any one of claims 12 to 21, wherein the second acquisition module is configured to change coordinates of at least some contour points of the first 3D model based on the 3D deformation parameters, to obtain the second 3D model.
  23. An image processing device, comprising:
    a memory for storing computer-executable instructions;
    a processor connected to the memory and configured to implement the method provided in any one of claims 1 to 11 by executing the computer-executable instructions.
  24. A computer storage medium having computer-executable instructions stored thereon, wherein after the computer-executable instructions are executed by a processor, the method according to any one of claims 1 to 11 can be implemented.
PCT/CN2020/086695 2019-11-21 2020-04-24 Image processing method and apparatus, image processing device, and storage medium WO2021098143A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
SG11202104071TA SG11202104071TA (en) 2019-11-21 2020-04-24 Method and device for processing image, and storage medium
KR1020207035889A 2019-11-21 2020-04-24 Image processing method and apparatus, image processing device, and storage medium
JP2020570014A 2019-11-21 2020-04-24 Image processing method and apparatus, image processing device, and storage medium
US17/131,879 US11450068B2 (en) 2019-11-21 2020-12-23 Method and device for processing image, and storage medium using 3D model, 2D coordinates, and morphing parameter

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911148068.3A 2019-11-21 2020-04-24 Image processing method and apparatus, image device, and storage medium
CN201911148068.3 2019-11-21

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/131,879 Continuation US11450068B2 (en) 2019-11-21 2020-12-23 Method and device for processing image, and storage medium using 3D model, 2D coordinates, and morphing parameter

Publications (1)

Publication Number Publication Date
WO2021098143A1 true WO2021098143A1 (zh) 2021-05-27

Family

ID=70201797

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/086695 WO2021098143A1 (zh) 2019-11-21 2020-04-24 Image processing method and apparatus, image processing device, and storage medium

Country Status (5)

Country Link
KR (1) KR102406438B1 (zh)
CN (1) CN111031305A (zh)
SG (1) SG11202104071TA (zh)
TW (1) TWI750710B (zh)
WO (1) WO2021098143A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111031305A (zh) * 2019-11-21 2020-04-17 北京市商汤科技开发有限公司 Image processing method and apparatus, image device, and storage medium
JP2022512262A (ja) 2019-11-21 2022-02-03 ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド Image processing method and apparatus, image processing device, and storage medium
CN113569781B (zh) * 2021-08-03 2024-06-14 北京达佳互联信息技术有限公司 Human body pose acquisition method and apparatus, electronic device, and storage medium
CN113469877B (zh) * 2021-09-01 2021-12-21 北京德风新征程科技有限公司 Object display method, scene display method, device, and computer-readable medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3538263B2 (ja) * 1995-08-09 2004-06-14 株式会社日立製作所 Image generation method
TWI315042B (en) * 2006-11-21 2009-09-21 Jing Jing Fan Method of three-dimensional digital human model construction from two photos and obtaining anthropometry information
US10134177B2 (en) * 2015-01-15 2018-11-20 Samsung Electronics Co., Ltd. Method and apparatus for adjusting face pose
CN105938627B (zh) * 2016-04-12 2020-03-31 湖南拓视觉信息技术有限公司 Processing method and system for virtual face reshaping
CN108765273B (zh) * 2018-05-31 2021-03-09 Oppo广东移动通信有限公司 Virtual facelift method and apparatus for face photography
CN109934766B (zh) * 2019-03-06 2021-11-30 北京市商汤科技开发有限公司 Image processing method and apparatus
CN110335343B (zh) * 2019-06-13 2021-04-06 清华大学 Method and apparatus for 3D human body reconstruction based on a single-view RGBD image

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100134487A1 (en) * 2008-12-02 2010-06-03 Shang-Hong Lai 3d face model construction method
CN108765351A (zh) * 2018-05-31 2018-11-06 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, and storage medium
CN109146769A (zh) * 2018-07-24 2019-01-04 北京市商汤科技开发有限公司 Image processing method and apparatus, image processing device, and storage medium
CN109584168A (zh) * 2018-10-25 2019-04-05 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and computer storage medium
CN109685915A (zh) * 2018-12-11 2019-04-26 维沃移动通信有限公司 Image processing method and apparatus, and mobile terminal
CN111031305A (zh) * 2019-11-21 2020-04-17 北京市商汤科技开发有限公司 Image processing method and apparatus, image device, and storage medium

Also Published As

Publication number Publication date
KR102406438B1 (ko) 2022-06-08
SG11202104071TA (en) 2021-06-29
TW202121344A (zh) 2021-06-01
KR20210064113A (ko) 2021-06-02
CN111031305A (zh) 2020-04-17
TWI750710B (zh) 2021-12-21

Similar Documents

Publication Publication Date Title
WO2021098143A1 (zh) Image processing method and apparatus, image processing device, and storage medium
CN112150638B (zh) Virtual object image synthesis method and apparatus, electronic device, and storage medium
US11741629B2 (en) Controlling display of model derived from captured image
CN110889890B (zh) Image processing method and apparatus, processor, electronic device, and storage medium
US11450068B2 (en) Method and device for processing image, and storage medium using 3D model, 2D coordinates, and morphing parameter
EP3971841A1 (en) Three-dimensional model generation method and apparatus, and computer device and storage medium
CN111861872B (zh) Image face swapping method, video face swapping method, apparatus, device, and storage medium
CN110335343A (zh) Method and apparatus for 3D human body reconstruction based on a single-view RGBD image
TW201401224A (zh) System and method for presenting three-dimensional motion with a two-dimensional character
JP2023547888A (ja) Three-dimensional reconstruction method, apparatus, system, medium, and computer device
JP6555755B2 (ja) Image processing apparatus, image processing method, and image processing program
CN112365589B (zh) Virtual three-dimensional scene display method, apparatus, and system
US20220277586A1 (en) Modeling method, device, and system for three-dimensional head model, and storage medium
CN110852934A (zh) Image processing method and apparatus, image device, and storage medium
WO2020147797A1 (zh) Image processing method and apparatus, image device, and storage medium
WO2023160074A1 (zh) Image generation method and apparatus, electronic device, and storage medium
CN111105489A (zh) Data synthesis method and apparatus, storage medium, and electronic apparatus
JP2021068272A (ja) Image processing system, image processing method, and program
JP2023153534A (ja) Image processing apparatus, image processing method, and program
CN115908755A (zh) AR projection method and system, and AR projector
CN110838182B (zh) Method and system for fitting an image onto a mannequin
JP3850080B2 (ja) Image generation and display apparatus
KR20120121034A (ko) Apparatus for acquiring a 3D face shape from 2D images through prior input
CN112365588B (zh) Virtual three-dimensional somatosensory modeling method, apparatus, and system
RU2778288C1 (ru) Method and device for determining illumination of a face image, device, and data carrier

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2020570014

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20889271

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20889271

Country of ref document: EP

Kind code of ref document: A1