WO2020019915A1 - Image processing method and apparatus, and computer storage medium - Google Patents

Image processing method and apparatus, and computer storage medium

Info

Publication number
WO2020019915A1
Authority
WO
WIPO (PCT)
Prior art keywords
limb
mesh control
type
control surface
target object
Prior art date
Application number
PCT/CN2019/092353
Other languages
English (en)
French (fr)
Inventor
刘文韬
钱晨
Original Assignee
北京市商汤科技开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司
Priority to KR1020207030087A (published as KR20200133778A)
Priority to JP2021506036A (published as JP7138769B2)
Priority to SG11202010404WA
Publication of WO2020019915A1
Priority to US17/117,703 (published as US20210097268A1)


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/18: Image warping, e.g. rearranging pixels individually
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/2163: Partitioning the feature space
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G06T 9/00: Image coding
    • G06T 9/20: Contour coding, e.g. using detection of edges
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Definitions

  • the present application relates to image processing technologies, and in particular, to an image processing method, device, and computer storage medium.
  • body shaping, such as "leg shaping", "arm shaping", "waist shaping", "hip shaping", "shoulder shaping", "head shaping", "chest shaping", etc.
  • embodiments of the present application provide an image processing method, device, and computer storage medium.
  • An embodiment of the present application provides an image processing method.
  • the method includes:
  • performing deformation processing on at least part of a limb region corresponding to the target object based on at least some of the plurality of mesh control surfaces, to generate a second image.
  • the determining a target object in the first image includes:
  • the limb detection information includes limb key point information and / or limb contour point information;
  • the limb key point information includes coordinate information of the limb key point
  • the limb contour point information includes coordinate information of the limb contour points.
  • performing deformation processing on at least a part of a limb region corresponding to the target object based on at least a part of the plurality of mesh control surfaces includes:
  • a first set of mesh control surfaces corresponding to the first limb detection information is determined, and deformation processing is performed on the first set of mesh control surfaces.
  • determining the first group of mesh control surfaces corresponding to the first limb detection information, and performing deformation processing on the first group of mesh control surfaces includes:
  • the first group of mesh control surfaces includes at least one mesh control surface
  • the mesh control surface is a first-type mesh control surface
  • Determining a first group of mesh control surfaces corresponding to the first limb detection information, and performing deformation processing on the first group of mesh control surfaces includes:
  • the first-type mesh control surface includes a plurality of first-type mesh control points
  • the deforming the at least one first-type mesh control surface based on the first deformation parameter includes:
  • the movement of any one of the plurality of first-type mesh control points deforms the first-type mesh control surface as a whole.
  • the mesh control surface is a second-type mesh control surface
  • the determining the first group of mesh control surfaces corresponding to the first limb detection information, and performing deformation processing on the first group of mesh control surfaces includes:
  • the second-type mesh control surface includes a plurality of second-type mesh control points
  • the deforming the at least one second-type mesh control surface based on a second deformation parameter includes:
  • the movement of any one of the plurality of second-type mesh control points deforms the area of the second-type mesh control surface corresponding to that control point.
  • An embodiment of the present application further provides an image processing apparatus, where the apparatus includes: an obtaining unit, a mesh dividing unit, and an image processing unit; wherein,
  • the obtaining unit is configured to obtain a first image
  • the mesh dividing unit is configured to mesh the first image obtained by the obtaining unit to obtain a plurality of mesh control surfaces
  • the image processing unit is configured to determine a target object in the first image obtained by the obtaining unit, and to perform deformation processing on at least part of the limb region corresponding to the target object based on at least some of the plurality of mesh control surfaces, to generate a second image.
  • the image processing unit is configured to obtain limb detection information of a target object in the first image; the limb detection information includes limb key point information and/or limb contour point information; the limb key point information includes coordinate information of the limb key points; the limb contour point information includes coordinate information of the limb contour points.
  • the image processing unit is configured to determine at least a part of a limb region to be deformed in the target object, and obtain first limb detection information of the at least part of the limb region; A first set of mesh control surfaces corresponding to the first limb detection information is determined, and deformation processing is performed on the first set of mesh control surfaces.
  • the image processing unit is configured to determine a corresponding first group of mesh control surfaces based on first limb key point information and/or first limb contour point information included in the first limb detection information.
  • the mesh control surface is a first-type mesh control surface
  • the image processing unit is configured to determine at least one first-type mesh control surface corresponding to the first limb detection information, and to perform deformation processing on the at least one first-type mesh control surface based on a first deformation parameter, so as to compress or stretch the limb region corresponding to the target object and to compress or stretch at least part of the background region outside the target object.
  • the first-type mesh control surface includes a plurality of first-type mesh control points
  • the image processing unit is configured to move, based on the first deformation parameter, at least some of the first-type mesh control points included in the first-type mesh control surface, so as to perform deformation processing on the first-type mesh control surface; the movement of any one of the plurality of first-type mesh control points deforms the first-type mesh control surface as a whole.
  • the mesh control surface is a second-type mesh control surface
  • the image processing unit is configured to determine at least one second-type mesh control surface corresponding to the first limb detection information, and to perform deformation processing on the at least one second-type mesh control surface based on a second deformation parameter, so as to compress or stretch part of the limb region corresponding to the target object and to compress or stretch at least part of the background region outside the target object.
  • the second-type mesh control surface includes a plurality of second-type mesh control points; the image processing unit is configured to move, based on the second deformation parameter, at least some of the second-type mesh control points included in the second-type mesh control surface, so as to deform the second-type mesh control surface; the movement of any one of these mesh control points deforms only the area of the second-type mesh control surface corresponding to that control point.
  • An embodiment of the present application further provides a computer-readable storage medium having computer instructions stored thereon, which when executed by a processor, implement the steps of the image processing method described in the embodiments of the present application.
  • An embodiment of the present application further provides an image processing apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the steps of the image processing method described in the embodiments of the present application are implemented.
  • An embodiment of the present application further provides a computer program including computer instructions, and when the computer instructions are run in a processor of a device, the method described in the embodiments of the present application is implemented.
  • In the image processing method, device, and computer storage medium provided in the embodiments of the present application, the method includes: obtaining a first image and meshing the first image to obtain a plurality of mesh control surfaces; determining a target object in the first image; and performing deformation processing on at least part of the limb region corresponding to the target object based on at least some of the plurality of mesh control surfaces, to generate a second image.
  • the mesh is divided based on the image to obtain multiple mesh control surfaces, and at least part of the limb region of the target object is deformed based on these mesh control surfaces, thereby realizing automatic adjustment of the limb region of the target object without multiple manual operations by the user, greatly improving the user's operating experience.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a hardware composition and structure of an image processing apparatus according to an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application. As shown in FIG. 1, the method includes:
  • Step 101 Obtain a first image, mesh the first image, and obtain multiple mesh control surfaces.
  • Step 102 Determine a target object in the first image.
  • Step 103 Perform deformation processing on at least part of the limb region corresponding to the target object based on at least some of the plurality of mesh control surfaces, to generate a second image.
  • the image processing method of this embodiment performs image processing on the first image, performs mesh division on the first image, and obtains multiple mesh control surfaces.
  • the first image is evenly divided into N*M mesh control surfaces, where N and M are both positive integers and may be equal or different.
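  • As a sketch of this uniform division (illustrative only; the image size and the values of N and M below are assumptions, and the patent does not prescribe any particular data layout), the N*M mesh control surfaces can be computed as pixel rectangles:

```python
def mesh_divide(width, height, n, m):
    """Evenly divide a width x height image into n x m mesh control
    surfaces, each returned as an (x0, y0, x1, y1) pixel rectangle."""
    cells = []
    for i in range(n):          # columns
        for j in range(m):      # rows
            cells.append((width * i // n, height * j // m,
                          width * (i + 1) // n, height * (j + 1) // m))
    return cells

# A 400x300 first image divided into 4x3 mesh control surfaces gives
# 12 cells of 100x100 pixels each.
cells = mesh_divide(400, 300, 4, 3)
```

Each rectangle is the initial (undeformed) extent of one mesh control surface; the deformation processing described later bends these rectangles into curved surfaces.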
  • with the target object in the first image at the center, the rectangular region where the target object is located is meshed; based on the meshing granularity of that rectangular region, the background region outside the rectangle is then meshed.
  • the number of mesh control surfaces is related to the proportion of the limb area corresponding to the target object in the first image in the first image.
  • a mesh control surface may correspond to a part of a limb area of the target object, for example, a mesh control surface may correspond to the leg of the target object, or a mesh control surface may correspond to the chest and waist of the target object, so that Both the global deformation of the target object and the local deformation of the target object can be achieved.
  • the mesh control surface is used as a basic deformation unit to process at least part of the limb area corresponding to the target object, that is, the mesh control surface is subjected to deformation processing, so as to achieve deformation of at least part of the limb area corresponding to the target object.
  • the target object in the first image is identified; the target object is the object to be processed and can be a real person, understood as a real person in the image; in other embodiments, the target object can also be a virtual character.
  • the execution order of meshing the first image and identifying the target object in the first image is not limited to the order in this embodiment; the target object in the first image may also be identified before the first image is meshed into multiple mesh control surfaces.
  • determining the target object in the first image includes: obtaining limb detection information of the target object in the first image; the limb detection information includes limb key point information and/or limb contour point information; the limb key point information includes coordinate information of the limb key points; the limb contour point information includes coordinate information of the limb contour points.
  • the limb region corresponding to the target object includes a head region, a shoulder region, a chest region, a waist region, an arm region, a hand region, a hip region, a leg region, and a foot region.
  • the limb detection information includes limb key point information and / or limb contour point information; the limb key point information includes coordinate information of the limb key point; the limb contour point information includes coordinate information of the limb contour point.
  • the limb contour point represents a limb contour of a limb region of the target object, that is, a limb contour edge of the target object can be formed through coordinate information of the limb contour point.
  • the limb contour points include at least one of the following: arm contour points, hand contour points, shoulder contour points, leg contour points, foot contour points, waist contour points, head contour points, hip contour points, and chest contour points.
  • the key points of the limbs represent key points of the bones of the target object, that is, the main bones of the target object can be formed through the coordinate information of the key points of the limbs and connecting the key points of the limbs.
  • the limb key points include at least one of the following: arm key points, hand key points, shoulder key points, leg key points, foot key points, waist key points, head key points, hip key points, and chest key points.
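  • A minimal sketch of how the limb detection information above might be held in memory (the field names and coordinates are hypothetical, not from the patent):

```python
# Hypothetical container for limb detection information: per-region
# key points and contour points, each stored as (x, y) coordinates.
limb_detection_info = {
    "keypoints": {                           # limb key point information
        "waist": [(210, 340), (290, 340)],
        "leg":   [(220, 420), (280, 420)],
    },
    "contour_points": {                      # limb contour point information
        "waist": [(195, 335), (305, 335), (200, 360), (300, 360)],
    },
}

def region_keypoints(info, region):
    """Return the key-point coordinates recorded for one limb region."""
    return info["keypoints"].get(region, [])
```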
  • a target object in the first image is identified by an image recognition algorithm, and limb detection information of the target object is further determined.
  • performing deformation processing on at least part of the limb region corresponding to the target object based on at least some of the multiple mesh control surfaces includes: determining the at least part of the limb region to be deformed in the target object and obtaining first limb detection information of that region; determining a first group of mesh control surfaces corresponding to the first limb detection information, and performing deformation processing on the first group of mesh control surfaces.
  • determining a first group of mesh control surfaces corresponding to the first limb detection information, and performing deformation processing on the first group of mesh control surfaces, includes: determining the corresponding first group of mesh control surfaces based on the first limb key point information and/or first limb contour point information included in the first limb detection information, the first group including at least one mesh control surface; and performing deformation processing on the at least one mesh control surface to compress or stretch at least part of the limb region corresponding to the target object and to compress or stretch at least part of the background region outside the target object.
  • specifically, first determine at least part of the limb region of the target object to be deformed, for example the waist region or leg region, or the entire limb region of the target object; then determine the first limb detection information of that region, namely the coordinate information of its limb key points and/or limb contour points; based on this coordinate information, determine the first group of mesh control surfaces corresponding to that limb region, the first group including at least one mesh control surface, that is, determine at least one mesh control surface corresponding to the at least part of the limb region. It can be understood that the at least part of the limb region lies within the area covered by the at least one mesh control surface.
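  • The selection of the first group of mesh control surfaces can be sketched as follows, using the bounding box of the limb region's key/contour points as an illustrative stand-in (the grid, the box, and the function are assumptions, not the patent's exact method):

```python
# Grid cells as (x0, y0, x1, y1); a 2x2 grid over a 200x200 image.
cells = [(0, 0, 100, 100), (100, 0, 200, 100),
         (0, 100, 100, 200), (100, 100, 200, 200)]

def covering_cells(cells, region_box):
    """Select the mesh control surfaces overlapping a limb region's
    bounding box (stand-in for the key-point / contour-point based
    selection described above)."""
    rx0, ry0, rx1, ry1 = region_box
    return [c for c in cells
            if c[0] < rx1 and rx0 < c[2] and c[1] < ry1 and ry0 < c[3]]

# A waist region spanning x 80..120, y 40..60 overlaps the top two cells,
# so those two cells form the first group of mesh control surfaces.
first_group = covering_cells(cells, (80, 40, 120, 60))
```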
  • the mesh control surface is rectangular in the initial state and also has a plurality of virtual control points (or control lines); moving the control points (or control lines) changes the curvature of the control lines composing the surface, thereby realizing the deformation processing of the mesh control surface. It can be understood that the mesh control surface after deformation processing is a curved surface.
  • the mesh control surface is a first-type mesh control surface
  • determining the first group of mesh control surfaces corresponding to the first limb detection information, and performing deformation processing on the first group of mesh control surfaces, includes: determining at least one first-type mesh control surface corresponding to the first limb detection information, and deforming the at least one first-type mesh control surface based on a first deformation parameter, so as to compress or stretch the limb region corresponding to the target object and to compress or stretch at least part of the background region outside the target object.
  • the first-type mesh control surface includes a plurality of first-type mesh control points
  • deforming the at least one first-type mesh control surface based on a first deformation parameter includes: moving, based on the first deformation parameter, at least some of the first-type mesh control points included in the first-type mesh control surface, so as to deform the first-type mesh control surface; the movement of any one of the plurality of first-type mesh control points deforms the first-type mesh control surface as a whole.
  • the first type of mesh control surface may be a Bezier surface formed by Bezier curves.
  • a Bezier curve can have multiple control points, and a Bezier surface can be formed by multiple Bezier curves. Deformation of a Bezier curve is achieved by moving at least some of its control points; by moving the control points of multiple Bezier curves, the limb region corresponding to the Bezier surface formed by those curves is deformed. Among the control points of a Bezier surface, the movement of any one control point deforms the Bezier surface globally.
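  • The global behavior described here can be illustrated with a cubic Bezier curve evaluated by de Casteljau's algorithm (a standard construction; the control points below are illustrative, not from the patent): moving a single control point moves every interior point of the curve.

```python
def bezier_point(ctrl, t):
    """Evaluate a Bezier curve at parameter t via de Casteljau's algorithm."""
    pts = list(ctrl)
    while len(pts) > 1:
        # Repeatedly interpolate between neighboring points at ratio t.
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 2.0), (3.0, 0.0)]
before = bezier_point(ctrl, 0.5)

# Move one interior control point: the curve's midpoint shifts too,
# illustrating the global deformation of the first-type (Bezier) surface.
ctrl[1] = (1.0, 1.0)
after = bezier_point(ctrl, 0.5)
```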
  • the deformation processing of the entire limb region of the target object is deformation of at least one first-type mesh control surface according to the first deformation parameter; that is, the first-type mesh control points to be adjusted in the first-type mesh control surface are all moved according to the same first deformation parameter, so that the entire limb region of the target object is deformed by the same proportion.
  • for example, the entire limb region is compressed ("slimmed") by 20% relative to the initial data: the width of the waist is compressed by 20% compared to its width before deformation, the width of the legs is compressed by 20% compared to their width before deformation, and so on.
  • This embodiment is suitable for deforming a complete limb region of a target object by using a Bezier surface, so as to achieve global smoothing of the deformation of the complete limb region of the target object.
  • the mesh control surface is a second-type mesh control surface
  • determining the first group of mesh control surfaces corresponding to the first limb detection information, and performing deformation processing on the first group of mesh control surfaces, includes: determining at least one second-type mesh control surface corresponding to the first limb detection information, and deforming the at least one second-type mesh control surface based on a second deformation parameter, so as to compress or stretch part of the limb region corresponding to the target object and to compress or stretch at least part of the background region outside the target object.
  • the second-type mesh control surface includes a plurality of second-type mesh control points
  • deforming the at least one second-type mesh control surface based on a second deformation parameter includes: moving, based on the second deformation parameter, at least some of the second-type mesh control points included in the second-type mesh control surface, so as to deform the second-type mesh control surface; the movement of any one of the plurality of second-type mesh control points deforms only the area of the second-type mesh control surface corresponding to that control point.
  • the second type of mesh control surface is specifically a Catmull-Rom surface formed from Catmull-Rom spline curves.
  • a Catmull-Rom spline curve can have multiple control points, and a Catmull-Rom surface can be formed by multiple Catmull-Rom spline curves.
  • deformation of a Catmull-Rom spline is realized by moving at least some of its control points; by moving the control points of multiple Catmull-Rom spline curves, the part of the limb region corresponding to the Catmull-Rom surface formed by those splines is deformed.
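  • The locality described here can be illustrated with the standard uniform Catmull-Rom segment formula: the segment between p1 and p2 depends only on p0..p3, so moving a distant control point leaves that segment unchanged (the control points below are illustrative, not from the patent).

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one uniform Catmull-Rom segment (between p1 and p2)
    at parameter t in [0, 1]."""
    def interp(a, b, c, d):
        # Standard uniform Catmull-Rom basis.
        return 0.5 * ((2 * b) + (-a + c) * t
                      + (2 * a - 5 * b + 4 * c - d) * t * t
                      + (-a + 3 * b - 3 * c + d) * t * t * t)
    return (interp(p0[0], p1[0], p2[0], p3[0]),
            interp(p0[1], p1[1], p2[1], p3[1]))

pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0), (3.0, 1.0), (4.0, 0.0)]

# The segment between pts[1] and pts[2] uses only pts[0..3].
seg_before = catmull_rom(pts[0], pts[1], pts[2], pts[3], 0.5)

# Moving pts[4] cannot affect that segment: the deformation stays local,
# unlike the global behavior of a Bezier control point.
pts[4] = (4.0, 5.0)
seg_after = catmull_rom(pts[0], pts[1], pts[2], pts[3], 0.5)
```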
  • the difference between the first-type mesh control surface and the second-type mesh control surface in this embodiment is that the first-type mesh control surface is a Bezier surface, while the second-type mesh control surface is a Catmull-Rom surface.
  • the first-type mesh control points are not on the Bezier curves forming the Bezier surface; it can be understood that moving a first-type mesh control point changes the curvature of the corresponding Bezier curve over a wide range, thereby achieving global deformation processing of the Bezier curve.
  • the second-type mesh control points are on the Catmull-Rom curves forming the Catmull-Rom surface.
  • the movement of a second-type mesh control point changes the curvature of the Catmull-Rom curve at, and/or near, that control point's position; it can be understood that moving a second-type mesh control point changes the curvature of a point on the corresponding Catmull-Rom curve, or of the curve near that point, thereby realizing deformation processing of a local area of the Catmull-Rom surface.
  • the deformation of a partial limb region of the target object can thus be achieved through deformation processing of the Catmull-Rom surface, which makes local deformation more accurate and improves the image processing effect.
  • At least one second-type mesh control surface is deformed according to the second deformation parameter, so as to realize the deformation processing of part of the limb region corresponding to the target object.
  • the second deformation parameters corresponding to different partial limb regions may be the same or different, so that different partial limb regions have different deformation effects.
  • the width of the waist is compressed by 20% compared to the width of the waist before deformation
  • the width of the legs is compressed by 10% compared to the width of the leg before deformation.
  • whether for the first-type mesh control surface or the second-type mesh control surface, the mesh control points to be moved in the mesh control surface need to be determined based on the at least part of the limb region to be deformed and on the type of deformation processing (such as compression processing or stretch processing); the determined mesh control points are then moved according to the corresponding deformation parameters.
  • standard parameters are also configured during the image processing in the embodiments of the present application.
  • in one implementation, the standard parameter indicates the parameter to be satisfied by the limb region of the processed target object; that is, once the image processing solution of the embodiments of the present application has deformed the limb region so that it meets the standard parameter, the deformation processing of the limb region is terminated. In another implementation, the standard parameter indicates the adjustment proportion for the limb region of the target object; that is, when the limb region is processed, its adjustment amount meets the adjustment proportion. On this basis, the embodiments of the present application may determine the deformation parameter (the first deformation parameter or the second deformation parameter) based on the standard parameter.
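  • A sketch of deriving the deformation parameter from a standard parameter under both interpretations described above (the dict shape, names, and numbers are assumptions for illustration):

```python
def deformation_parameter(current_width, standard):
    """Derive a deformation (scale) parameter from a standard parameter.

    `standard` is either an absolute target ("value", e.g. pixels) that
    the limb region must meet, or an adjustment proportion ("ratio",
    e.g. 0.2 for a 20% compression), matching the two interpretations
    described above.
    """
    if standard["kind"] == "value":
        return standard["target"] / current_width
    if standard["kind"] == "ratio":
        return 1.0 - standard["target"]
    raise ValueError("unknown standard parameter kind")

# A waist 100 px wide with a 20% compression ratio gives a scale factor
# of 0.8, i.e. a post-deformation width of 80 px.
scale = deformation_parameter(100.0, {"kind": "ratio", "target": 0.2})
new_width = 100.0 * scale
```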
  • the mesh is divided based on the image to obtain multiple mesh control surfaces, and at least part of the limb region of the target object is deformed based on these mesh control surfaces, thereby realizing automatic adjustment of the limb region of the target object without multiple manual operations by the user, greatly improving the user's operating experience.
  • FIG. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application; as shown in FIG. 2, the apparatus includes: an obtaining unit 21, a mesh dividing unit 22, and an image processing unit 23;
  • the obtaining unit 21 is configured to obtain a first image
  • the mesh dividing unit 22 is configured to mesh the first image obtained by the obtaining unit 21 to obtain multiple mesh control surfaces;
  • the image processing unit 23 is configured to determine a target object in the first image obtained by the obtaining unit 21, and to perform deformation processing on at least part of the limb region corresponding to the target object based on at least some of the plurality of mesh control surfaces, to generate a second image.
  • the image processing unit 23 is configured to obtain limb detection information of the target object in the first image; the limb detection information includes limb key point information and/or limb contour point information; the limb key point information includes coordinate information of the limb key points; the limb contour point information includes coordinate information of the limb contour points.
  • the image processing unit 23 is configured to determine at least part of the limb region to be deformed in the target object, obtain first limb detection information of that region, determine the first group of mesh control surfaces corresponding to the first limb detection information, and perform deformation processing on the first group of mesh control surfaces.
  • the image processing unit 23 is configured to determine a corresponding first group of mesh control surfaces based on the first limb keypoint information and / or the first limb contour point information included in the first limb detection information;
  • the first group of mesh control surfaces includes at least one mesh control surface; the at least one mesh control surface is deformed to compress or stretch at least part of the limb region corresponding to the target object and to compress or stretch at least part of the background region outside the target object.
  • the mesh control surface is a first-type mesh control surface
  • the image processing unit 23 is configured to determine at least one first-type mesh control surface corresponding to the first limb detection information, and to perform deformation processing on the at least one first-type mesh control surface based on a first deformation parameter, so as to compress or stretch the limb region corresponding to the target object and to compress or stretch at least part of the background region outside the target object.
  • the first-type mesh control surface includes a plurality of first-type mesh control points
  • the image processing unit 23 is configured to move at least a part of the first-type mesh control points among a plurality of first-type mesh control points included in the first-type mesh control surface based on the first deformation parameter, to A type of network control plane performs deformation processing; wherein the movement of any one of the plurality of first-type grid control points realizes the deformation of the first-type network control plane.
  • the mesh control surface is a second type of mesh control surface
  • the image processing unit 23 is configured to determine at least one second-type mesh control surface corresponding to the first limb detection information, and perform deformation processing on the at least one second-type mesh control surface based on a second deformation parameter, And compressing or stretching a part of a limb area corresponding to the target object, and compressing or stretching at least a part of a background area outside the target object.
  • the second-type mesh control surface includes a plurality of second-type mesh control points
  • the image processing unit 23 is configured to move at least a part of the second-type mesh control points among a plurality of second-type mesh control points included in the second-type mesh control surface based on the second deformation parameter, to The second type of network control plane is deformed; wherein movement of any one of the plurality of second type of network control points realizes an area corresponding to the network control point in the second type of network control plane. Of deformation.
  • the obtaining unit 21, the grid division unit 22 and the image processing unit 23 in the apparatus may each be implemented by a central processing unit (CPU), a digital signal processor (DSP), a microcontroller unit (MCU) or a field-programmable gate array (FPGA).
  • CPU Central Processing Unit
  • DSP Digital Signal Processor
  • MCU Microcontroller Unit
  • FPGA Field-Programmable Gate Array
  • FIG. 3 is a schematic diagram of the hardware structure of the image processing apparatus according to the embodiment of the present application. As shown in FIG. 3, the image processing apparatus includes a memory 32, a processor 31, and a computer program stored in the memory 32 and runnable on the processor 31.
  • when the processor 31 executes the program, the image processing method according to any one of the foregoing embodiments of the present application is implemented.
  • the various components in the image processing apparatus are coupled together through a bus system 33; it can be understood that the bus system 33 is used to implement connection and communication between these components.
  • in addition to the data bus, the bus system 33 includes a power bus, a control bus and a status signal bus; however, for the sake of clarity, the various buses are all marked as the bus system 33 in FIG. 3.
  • the memory 32 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memories.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferroelectric random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); the magnetic surface memory may be a disk memory or a tape memory.
  • the volatile memory may be a random access memory (RAM), which serves as an external cache.
  • RAM Random Access Memory
  • by way of example and not limitation, many forms of RAM are available, such as a static random access memory (SRAM), a synchronous static random access memory (SSRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDRSDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synclink dynamic random access memory (SLDRAM), and a direct rambus random access memory (DRRAM). The memory 32 described in the embodiments of the present application is intended to include, but is not limited to, these and any other suitable types of memory.
  • SRAM Static Random Access Memory
  • SSRAM Synchronous Static Random Access Memory
  • DRAM Dynamic Random Access Memory
  • SDRAM Synchronous Dynamic Random Access Memory
  • DDRSDRAM Double Data Rate Synchronous Dynamic Random Access Memory
  • ESDRAM Enhanced Synchronous Dynamic Random Access Memory
  • SLDRAM SyncLink Dynamic Random Access Memory
  • DRRAM Direct Rambus Random Access Memory
  • the method disclosed in the foregoing embodiment of the present application may be applied to the processor 31 or implemented by the processor 31.
  • the processor 31 may be an integrated circuit chip and has a signal processing capability. In the implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 31 or an instruction in the form of software.
  • the aforementioned processor 31 may be a general-purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the processor 31 may implement or execute the methods, steps and logical block diagrams disclosed in the embodiments of the present application.
  • a general-purpose processor may be a microprocessor or any conventional processor.
  • the steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor; the software module may be located in a storage medium.
  • the storage medium is located in the memory 32.
  • the processor 31 reads the information in the memory 32 and completes the steps of the foregoing method in combination with its hardware.
  • it should be noted that when the image processing apparatus provided in the foregoing embodiment performs image processing, the division into the above program modules is only used as an example for illustration.
  • in practical applications, the above processing may be allocated to different program modules as required; that is, the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above.
  • the image processing apparatus provided in the foregoing embodiment and the image processing method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not described here again.
  • an embodiment of the present application further provides a computer-readable storage medium, for example a memory 32 including a computer program, which may be executed by the processor 31 of the image processing apparatus to complete the steps of the foregoing method.
  • the computer-readable storage medium may be a memory such as an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc or CD-ROM, or any device including one or any combination of the above memories, such as a mobile phone, a computer, a tablet device or a personal digital assistant.
  • An embodiment of the present application further provides a computer-readable storage medium having computer instructions stored thereon, which, when executed by a processor, implement the image processing method described in any one of the foregoing embodiments of the present application.
  • An embodiment of the present application further provides a computer program including computer-readable instructions.
  • when the computer-readable instructions run in a device, a processor in the device executes them to implement the method according to any one of the foregoing embodiments of the present application.
  • in the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division.
  • in actual implementation there may be other division manners; for example, multiple units or components may be combined, or integrated into another system, or some features may be ignored or not implemented.
  • the coupling, direct coupling or communication connection between the displayed or discussed components may be through some interfaces.
  • the indirect coupling or communication connection between devices or units may be electrical, mechanical or in other forms.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solutions of the embodiments.
  • the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve as a unit separately, or two or more units may be integrated into one unit.
  • the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • those of ordinary skill in the art can understand that all or part of the steps of the foregoing method embodiments may be completed by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium.
  • when the program is executed, the steps of the foregoing method embodiments are performed.
  • the foregoing storage medium includes various media that can store program code, such as a mobile storage device, a ROM, a RAM, a magnetic disk or an optical disc.
  • alternatively, if the above integrated unit of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the methods described in the embodiments of the present application.
  • the foregoing storage medium includes various media that can store program code, such as a mobile storage device, a ROM, a RAM, a magnetic disk or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An image processing method and apparatus, and a computer storage medium. The method includes: obtaining a first image, and performing grid division on the first image to obtain a plurality of mesh control surfaces (101); determining a target object in the first image (102); and performing deformation processing on at least part of a limb region corresponding to the target object based on at least some mesh control surfaces of the plurality of mesh control surfaces, to generate a second image (103).

Description

Image processing method and apparatus, and computer storage medium
Cross-Reference to Related Applications
This application is based on, and claims priority to, Chinese patent application No. 201810829498.0 filed on July 25, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to image processing technologies, and in particular to an image processing method and apparatus, and a computer storage medium.
Background
With the rapid development of Internet technologies, various image processing tools have emerged that can process a target object in an image, for example performing "body shaping" on a target person in the image, such as "leg shaping", "arm shaping", "waist shaping", "hip shaping", "shoulder shaping", "head shaping" and "chest shaping", that is, operations that make a body part fatter, thinner, larger or smaller so that the figure of the person looks more ideal.
Summary
To solve the existing technical problems, embodiments of the present application provide an image processing method and apparatus, and a computer storage medium.
To achieve the above objective, the technical solutions of the embodiments of the present application are implemented as follows.
An embodiment of the present application provides an image processing method, the method including:
obtaining a first image, and performing grid division on the first image to obtain a plurality of mesh control surfaces;
determining a target object in the first image; and
performing deformation processing on at least part of a limb region corresponding to the target object based on at least some mesh control surfaces of the plurality of mesh control surfaces, to generate a second image.
In an optional embodiment of the present application, determining the target object in the first image includes:
obtaining limb detection information of the target object in the first image; the limb detection information includes limb key point information and/or limb contour point information;
the limb key point information includes coordinate information of limb key points;
the limb contour point information includes coordinate information of limb contour points.
In an optional embodiment of the present application, performing deformation processing on at least part of the limb region corresponding to the target object based on at least some mesh control surfaces of the plurality of mesh control surfaces includes:
determining at least part of a limb region of the target object to be deformed, and obtaining first limb detection information of the at least part of the limb region; and
determining a first group of mesh control surfaces corresponding to the first limb detection information, and performing deformation processing on the first group of mesh control surfaces.
In an optional embodiment of the present application, determining the first group of mesh control surfaces corresponding to the first limb detection information and performing deformation processing on the first group of mesh control surfaces includes:
determining the corresponding first group of mesh control surfaces based on first limb key point information and/or first limb contour point information included in the first limb detection information; the first group of mesh control surfaces includes at least one mesh control surface; and
performing deformation processing on the at least one mesh control surface, to compress or stretch at least part of the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
In an optional embodiment of the present application, the mesh control surface is a first-type mesh control surface; and
determining the first group of mesh control surfaces corresponding to the first limb detection information and performing deformation processing on the first group of mesh control surfaces includes:
determining at least one first-type mesh control surface corresponding to the first limb detection information, and performing deformation processing on the at least one first-type mesh control surface based on a first deformation parameter, to compress or stretch the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
In an optional embodiment of the present application, the first-type mesh control surface includes a plurality of first-type mesh control points; and
performing deformation processing on the at least one first-type mesh control surface based on the first deformation parameter includes:
moving, based on the first deformation parameter, at least some first-type mesh control points of the plurality of first-type mesh control points included in the first-type mesh control surface, to perform deformation processing on the first-type mesh control surface;
wherein movement of any one of the plurality of first-type mesh control points deforms the first-type mesh control surface.
In an optional embodiment of the present application, the mesh control surface is a second-type mesh control surface; and
determining the first group of mesh control surfaces corresponding to the first limb detection information and performing deformation processing on the first group of mesh control surfaces includes:
determining at least one second-type mesh control surface corresponding to the first limb detection information, and performing deformation processing on the at least one second-type mesh control surface based on a second deformation parameter, to compress or stretch part of the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
In an optional embodiment of the present application, the second-type mesh control surface includes a plurality of second-type mesh control points; and
performing deformation processing on the at least one second-type mesh control surface based on the second deformation parameter includes:
moving, based on the second deformation parameter, at least some second-type mesh control points of the plurality of second-type mesh control points included in the second-type mesh control surface, to perform deformation processing on the second-type mesh control surface;
wherein movement of any one of the plurality of second-type mesh control points deforms the region of the second-type mesh control surface corresponding to that mesh control point.
An embodiment of the present application further provides an image processing apparatus, the apparatus including an obtaining unit, a grid division unit and an image processing unit, wherein
the obtaining unit is configured to obtain a first image;
the grid division unit is configured to perform grid division on the first image obtained by the obtaining unit, to obtain a plurality of mesh control surfaces; and
the image processing unit is configured to determine a target object in the first image obtained by the obtaining unit, and to perform deformation processing on at least part of a limb region corresponding to the target object based on at least some mesh control surfaces of the plurality of mesh control surfaces, to generate a second image.
In an optional embodiment of the present application, the image processing unit is configured to obtain limb detection information of the target object in the first image; the limb detection information includes limb key point information and/or limb contour point information; the limb key point information includes coordinate information of limb key points; the limb contour point information includes coordinate information of limb contour points.
In an optional embodiment of the present application, the image processing unit is configured to determine at least part of a limb region of the target object to be deformed, obtain first limb detection information of the at least part of the limb region, determine a first group of mesh control surfaces corresponding to the first limb detection information, and perform deformation processing on the first group of mesh control surfaces.
In an optional embodiment of the present application, the image processing unit is configured to determine the corresponding first group of mesh control surfaces based on first limb key point information and/or first limb contour point information included in the first limb detection information, the first group of mesh control surfaces including at least one mesh control surface; and to perform deformation processing on the at least one mesh control surface, to compress or stretch at least part of the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
In an optional embodiment of the present application, the mesh control surface is a first-type mesh control surface; and
the image processing unit is configured to determine at least one first-type mesh control surface corresponding to the first limb detection information, and to perform deformation processing on the at least one first-type mesh control surface based on a first deformation parameter, to compress or stretch the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
In an optional embodiment of the present application, the first-type mesh control surface includes a plurality of first-type mesh control points; and
the image processing unit is configured to move, based on the first deformation parameter, at least some of the plurality of first-type mesh control points included in the first-type mesh control surface, to perform deformation processing on the first-type mesh control surface; wherein movement of any one of the plurality of first-type mesh control points deforms the first-type mesh control surface.
In an optional embodiment of the present application, the mesh control surface is a second-type mesh control surface; and
the image processing unit is configured to determine at least one second-type mesh control surface corresponding to the first limb detection information, and to perform deformation processing on the at least one second-type mesh control surface based on a second deformation parameter, to compress or stretch part of the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
In an optional embodiment of the present application, the second-type mesh control surface includes a plurality of second-type mesh control points; the image processing unit is configured to move, based on the second deformation parameter, at least some of the plurality of second-type mesh control points included in the second-type mesh control surface, to perform deformation processing on the second-type mesh control surface; wherein movement of any one of the plurality of second-type mesh control points deforms the region of the second-type mesh control surface corresponding to that mesh control point.
An embodiment of the present application further provides a computer-readable storage medium having computer instructions stored thereon, which, when executed by a processor, implement the steps of the image processing method according to the embodiments of the present application.
An embodiment of the present application further provides an image processing apparatus, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the steps of the image processing method according to the embodiments of the present application.
An embodiment of the present application further provides a computer program, including computer instructions, which, when run in a processor of a device, implement the method according to the foregoing embodiments of the present application.
In the image processing method and apparatus and the computer storage medium provided by the embodiments of the present application, the method includes: obtaining a first image, and performing grid division on the first image to obtain a plurality of mesh control surfaces; identifying a target object in the first image; and performing deformation processing on at least part of a limb region corresponding to the target object based on at least some of the mesh control surfaces, to generate a second image. With the technical solutions of the embodiments of the present application, grid division is performed on the image to obtain a plurality of mesh control surfaces, and at least part of the limb region of the target object is deformed based on the mesh control surfaces, so that the limb region of the target object is adjusted automatically without repeated manual operations by the user, greatly improving the user's operation experience.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the hardware structure of an image processing apparatus according to an embodiment of the present application.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
An embodiment of the present application provides an image processing method. FIG. 1 is a schematic flowchart of the image processing method according to this embodiment; as shown in FIG. 1, the method includes:
Step 101: obtain a first image, and perform grid division on the first image to obtain a plurality of mesh control surfaces.
Step 102: determine a target object in the first image.
Step 103: perform deformation processing on at least part of a limb region corresponding to the target object based on at least some mesh control surfaces of the plurality of mesh control surfaces, to generate a second image.
The image processing method of this embodiment performs image processing on the first image: grid division is performed on the first image to obtain a plurality of mesh control surfaces. In one implementation, the first image is evenly divided into N*M mesh control surfaces, where N and M are both positive integers and may be equal or different. In another implementation, taking the target object in the first image as the center, the rectangular region in which the target object is located is divided into grids first, and the background region outside the rectangular region is then divided based on the grid granularity of that rectangular region.
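The even N*M division described above can be sketched in a few lines of Python; the function name and the 640x480 image size are illustrative assumptions, not taken from the patent:

```python
def divide_into_mesh_control_surfaces(width, height, n, m):
    # Evenly divide a width x height image into n columns and m rows of
    # rectangular mesh control surfaces, each given as (x0, y0, x1, y1).
    cell_w, cell_h = width / n, height / m
    return [(i * cell_w, j * cell_h, (i + 1) * cell_w, (j + 1) * cell_h)
            for j in range(m) for i in range(n)]

# a hypothetical 640x480 first image divided into 4*3 mesh control surfaces
surfaces = divide_into_mesh_control_surfaces(640, 480, 4, 3)
```

In a real implementation each rectangle would carry its control points as well; this sketch only shows the partitioning step.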
In an embodiment, the number of mesh control surfaces is related to the proportion of the limb region of the target object in the first image. For example, one mesh control surface may correspond to one partial limb region of the target object, for instance to the legs of the target object, or to the chest and waist of the target object, which facilitates both global deformation of the target object and local deformation of the target object.
In this embodiment, the mesh control surface is used as the basic deformation unit for processing at least part of the limb region corresponding to the target object; that is, deformation processing is performed on the mesh control surface, thereby deforming the at least part of the limb region corresponding to the target object.
In this embodiment, the target object in the first image is identified. The target object, as the object to be processed, may be a real person, which can be understood as a real figure in the image; in other implementations, the target object may also be a virtual figure.
In this embodiment, the execution order of dividing the first image into grids and identifying the target object in the first image is not limited; the target object in the first image may also be identified first, and the first image then divided into grids to obtain the plurality of mesh control surfaces.
In this embodiment, determining the target object in the first image includes: obtaining limb detection information of the target object in the first image, where the limb detection information includes limb key point information and/or limb contour point information; the limb key point information includes coordinate information of limb key points; and the limb contour point information includes coordinate information of limb contour points.
Specifically, the limb region corresponding to the target object includes: a head region, a shoulder region, a chest region, a waist region, an arm region, a hand region, a hip region, a leg region and a foot region. The limb detection information includes limb key point information and/or limb contour point information; the limb key point information includes coordinate information of limb key points; the limb contour point information includes coordinate information of limb contour points. The limb contour points represent the limb contour of the limb region of the target object; that is, the contour edge of the limb of the target object can be formed from the coordinate information of the limb contour points. The limb contour points include at least one of the following: arm contour points, hand contour points, shoulder contour points, leg contour points, foot contour points, waist contour points, head contour points, hip contour points and chest contour points. The limb key points represent key points of the skeleton of the target object; that is, the main skeleton of the target object can be formed by connecting the limb key points according to their coordinate information. The limb key points include at least one of the following: arm key points, hand key points, shoulder key points, leg key points, foot key points, waist key points, head key points, hip key points and chest key points.
In this embodiment, the target object in the first image is identified by an image recognition algorithm, and the limb detection information of the target object is further determined.
In this embodiment, performing deformation processing on at least part of the limb region corresponding to the target object based on at least some mesh control surfaces of the plurality of mesh control surfaces includes: determining at least part of a limb region of the target object to be deformed, and obtaining first limb detection information of the at least part of the limb region; and determining a first group of mesh control surfaces corresponding to the first limb detection information, and performing deformation processing on the first group of mesh control surfaces.
Determining the first group of mesh control surfaces corresponding to the first limb detection information and performing deformation processing on the first group of mesh control surfaces includes: determining the corresponding first group of mesh control surfaces based on first limb key point information and/or first limb contour point information included in the first limb detection information, the first group of mesh control surfaces including at least one mesh control surface; and performing deformation processing on the at least one mesh control surface, to compress or stretch at least part of the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
Here, the at least part of the limb region of the target object to be deformed is determined first, for example a waist region or a leg region, or the complete limb region of the target object; the first limb detection information, specifically the coordinate information of the limb key points and/or limb contour points of the at least part of the limb region to be deformed, is then determined; based on that coordinate information, the first group of mesh control surfaces corresponding to the at least part of the limb region is determined, the first group including at least one mesh control surface; that is, the at least one mesh control surface corresponding to the at least part of the limb region is determined. It can be understood that the at least part of the limb region lies within the region corresponding to the at least one mesh control surface.
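One plausible way to realize this mapping from limb key/contour point coordinates to the first group of mesh control surfaces is sketched below; the grid size and the waist contour coordinates are invented for illustration:

```python
def surfaces_for_limb(points, cell_w, cell_h, n, m):
    # Collect the (column, row) indices of every grid cell that contains
    # at least one of the given limb key/contour point coordinates.
    cells = set()
    for x, y in points:
        i = min(int(x // cell_w), n - 1)   # clamp to the grid's edge cells
        j = min(int(y // cell_h), m - 1)
        cells.add((i, j))
    return cells

# hypothetical waist contour points on a 4*3 grid of 160x160 cells
first_group = surfaces_for_limb([(300, 250), (340, 255)], 160, 160, 4, 3)
```

The returned indices would then select the mesh control surfaces whose control points are moved in the deformation step.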
In this embodiment, a mesh control surface is rectangular in its initial state and further has a plurality of virtual control points (or control lines). By moving the control points (or control lines), the curvature of the control lines that make up the mesh control surface is changed, thereby deforming the mesh control surface; it can be understood that the deformed mesh control surface is a curved surface.
In one implementation, the mesh control surface is a first-type mesh control surface.
Determining the first group of mesh control surfaces corresponding to the first limb detection information and performing deformation processing on the first group of mesh control surfaces includes: determining at least one first-type mesh control surface corresponding to the first limb detection information, and performing deformation processing on the at least one first-type mesh control surface based on a first deformation parameter, to compress or stretch the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
Here, the first-type mesh control surface includes a plurality of first-type mesh control points. Performing deformation processing on the at least one first-type mesh control surface based on the first deformation parameter includes: moving, based on the first deformation parameter, at least some first-type mesh control points of the plurality of first-type mesh control points included in the first-type mesh control surface, to perform deformation processing on the first-type mesh control surface; wherein movement of any one of the plurality of first-type mesh control points deforms the first-type mesh control surface.
Specifically, the first-type mesh control surface may be a Bézier surface formed by Bézier curves. A Bézier curve may have a plurality of control points, and it can be understood that a Bézier surface may be formed by a plurality of Bézier curves. Moving at least some of the control points of any Bézier curve deforms that curve; it can be understood that, by moving the control points of multiple Bézier curves, the limb region corresponding to the Bézier surface formed by those curves is deformed. Among the control points of a Bézier surface, the movement of any control point deforms the Bézier surface globally.
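This global behaviour can be illustrated numerically with a single cubic Bézier curve (the control-point values are arbitrary examples, not from the patent): moving one off-curve control point changes the point evaluated at t = 0.5, so any control-point movement deforms the whole curve.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    # Evaluate a cubic Bézier curve at parameter t from its 4 control points.
    u = 1 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]
before = cubic_bezier(*ctrl, 0.5)
ctrl[1] = (1, 3)                      # move one interior control point
after = cubic_bezier(*ctrl, 0.5)      # the evaluated point moves too
```

A Bézier surface generalizes this to a grid of control points, evaluated along two parameters instead of one.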
The deformation processing of the complete limb region of the target object is performed by deforming at least one first-type mesh control surface with reference to the first deformation parameter; that is, the first-type mesh control points to be adjusted in the first-type mesh control surface are deformed according to the first deformation parameter, so that the complete limb region of the target object is deformed by the same deformation parameter. For example, the complete limb region is compressed ("slimmed") by 20% overall, where 20% is relative to the initial data; it can be understood that the width of the waist is compressed by 20% compared with the waist width before deformation, the width of the legs is compressed by 20% compared with the leg width before deformation, and so on.
This implementation is suitable for deforming the complete limb region of the target object through a Bézier surface, thereby achieving globally smooth deformation of the complete limb region of the target object.
In another implementation, the mesh control surface is a second-type mesh control surface.
Determining the first group of mesh control surfaces corresponding to the first limb detection information and performing deformation processing on the first group of mesh control surfaces includes: determining at least one second-type mesh control surface corresponding to the first limb detection information, and performing deformation processing on the at least one second-type mesh control surface based on a second deformation parameter, to compress or stretch part of the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
Here, the second-type mesh control surface includes a plurality of second-type mesh control points. Performing deformation processing on the at least one second-type mesh control surface based on the second deformation parameter includes: moving, based on the second deformation parameter, at least some second-type mesh control points of the plurality of second-type mesh control points included in the second-type mesh control surface, to perform deformation processing on the second-type mesh control surface; wherein movement of any one of the plurality of second-type mesh control points deforms the region of the second-type mesh control surface corresponding to that mesh control point.
Specifically, the second-type mesh control surface is a Catmull-Rom surface formed by Catmull-Rom spline curves. A Catmull-Rom spline may have a plurality of control points, and it can be understood that a Catmull-Rom surface may be formed by a plurality of Catmull-Rom splines. Moving at least some of the control points of any Catmull-Rom spline deforms that spline; it can be understood that, by moving the control points of multiple Catmull-Rom splines, a local part of the limb region corresponding to the Catmull-Rom surface formed by those splines is deformed.
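A minimal sketch of a uniform Catmull-Rom segment follows (the control-point values are invented for illustration). Unlike the Bézier case above, the curve passes exactly through its interior control points, which is what makes its control local:

```python
def catmull_rom(p0, p1, p2, p3, t):
    # Evaluate a uniform Catmull-Rom spline segment between p1 and p2 at t.
    return tuple(
        0.5 * (2 * b + (-a + c) * t + (2*a - 5*b + 4*c - d) * t**2
               + (-a + 3*b - 3*c + d) * t**3)
        for a, b, c, d in zip(p0, p1, p2, p3))

pts = [(0, 0), (1, 1), (2, 0), (3, 1)]
start = catmull_rom(*pts, 0.0)   # lands exactly on p1
end = catmull_rom(*pts, 1.0)     # lands exactly on p2
```

Moving p1 or p2 therefore shifts only the segments that touch them, leaving the rest of the spline unchanged.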
The difference between the first-type and second-type mesh control surfaces in this embodiment, taking the Bézier surface as the first-type mesh control surface and the Catmull-Rom surface as the second-type mesh control surface as an example, is as follows. In deformation processing based on the Bézier surface, the first-type mesh control points do not lie on the Bézier curves that form the Bézier surface, and moving a first-type mesh control point changes the curvature of the Bézier curve; it can be understood that moving a first-type mesh control point can change the curvature of the corresponding Bézier curve over a large range, thereby achieving global deformation of the Bézier surface. The second-type mesh control points, by contrast, lie on the Catmull-Rom curves that form the Catmull-Rom surface, and moving a second-type mesh control point changes the curvature and/or position of the curve at the location of that control point on the Catmull-Rom curve; it can be understood that moving a second-type mesh control point changes the curvature of the corresponding Catmull-Rom curve at a certain point or in the vicinity of that point, thereby achieving deformation of a local region of the Catmull-Rom surface.
It can be understood that deforming part of the limb region of the target object through deformation of the Catmull-Rom surface makes local deformation more precise and improves the image processing effect.
Here, at least one second-type mesh control surface is deformed with reference to the second deformation parameter, thereby deforming the corresponding partial limb region of the target object. The second deformation parameters for different partial limb regions may be the same or different, so that different partial limb regions have different deformation effects. For example, the width of the waist is compressed by 20% compared with the waist width before deformation, while the width of the legs is compressed by 10% compared with the leg width before deformation.
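The per-region deformation parameters in this example reduce to simple scaling of each region's width; the sketch below (region names and pixel widths are invented) makes the arithmetic explicit:

```python
def apply_deformation(widths, ratios):
    # Scale each partial limb region's width by its own compression ratio;
    # e.g. 0.20 slims that region by 20%, 0.10 by 10%.
    return {region: w * (1 - ratios.get(region, 0.0))
            for region, w in widths.items()}

# hypothetical pre-deformation widths in pixels
slimmed = apply_deformation({"waist": 100.0, "leg": 60.0},
                            {"waist": 0.20, "leg": 0.10})
```

A 100-pixel waist compressed by 20% becomes 80 pixels, while a 60-pixel leg compressed by 10% becomes 54 pixels.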
In the embodiments of the present application, whether the above first-type or second-type mesh control surface is used, the mesh control points to be moved in the mesh control surface are determined based on the at least part of the limb region to be deformed and on the type of deformation processing (for example, compression processing or stretching processing), and the determined mesh control points are then moved according to the corresponding deformation parameters.
In an embodiment, a standard parameter is also configured in the image processing process of the embodiments of the present application. In one implementation, the standard parameter indicates the parameter to be satisfied by the limb region of the processed target object; that is, after the limb region is deformed by the image processing solution of the embodiments of the present application so that it satisfies the standard parameter, the deformation of the limb region is terminated. In another implementation, the standard parameter indicates an adjustment ratio of the limb region of the target object; that is, after the limb region is processed by the image processing solution of the embodiments of the present application, the adjustment amount of the limb region satisfies the adjustment ratio. On this basis, the embodiments of the present application may determine the deformation parameter (including the first deformation parameter or the second deformation parameter) based on the standard parameter.
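One plausible reading of deriving a deformation parameter from such a standard parameter, when the standard specifies a target width, is the ratio that maps the current width onto it; this formula is an illustrative assumption, not taken from the patent:

```python
def deformation_parameter(current_width, standard_width):
    # Positive result compresses the region, negative result stretches it,
    # so that current_width * (1 - result) == standard_width.
    return 1 - standard_width / current_width

param = deformation_parameter(100.0, 80.0)   # compress by 20%
```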
In the embodiments of the present application, the deformation processing of at least some mesh control surfaces on the one hand deforms at least part of the limb region, and on the other hand also deforms at least part of the background region, other than the at least part of the limb region, corresponding to those mesh control surfaces.
With the technical solutions of the embodiments of the present application, grid division is performed on the image to obtain a plurality of mesh control surfaces, and at least part of the limb region of the target object is deformed based on the mesh control surfaces, so that the limb region of the target object is adjusted automatically without repeated manual operations by the user, greatly improving the user's operation experience.
An embodiment of the present application further provides an image processing apparatus. FIG. 2 is a schematic structural diagram of the image processing apparatus according to this embodiment; as shown in FIG. 2, the apparatus includes an obtaining unit 21, a grid division unit 22 and an image processing unit 23, wherein
the obtaining unit 21 is configured to obtain a first image;
the grid division unit 22 is configured to perform grid division on the first image obtained by the obtaining unit 21, to obtain a plurality of mesh control surfaces; and
the image processing unit 23 is configured to determine a target object in the first image obtained by the obtaining unit 21, and to perform deformation processing on at least part of a limb region corresponding to the target object based on at least some mesh control surfaces of the plurality of mesh control surfaces, to generate a second image.
In this embodiment, the image processing unit 23 is configured to obtain limb detection information of the target object in the first image; the limb detection information includes limb key point information and/or limb contour point information; the limb key point information includes coordinate information of limb key points; the limb contour point information includes coordinate information of limb contour points.
In this embodiment, the image processing unit 23 is configured to determine at least part of a limb region of the target object to be deformed, obtain first limb detection information of the at least part of the limb region, determine a first group of mesh control surfaces corresponding to the first limb detection information, and perform deformation processing on the first group of mesh control surfaces.
In this embodiment, the image processing unit 23 is configured to determine the corresponding first group of mesh control surfaces based on first limb key point information and/or first limb contour point information included in the first limb detection information, the first group of mesh control surfaces including at least one mesh control surface; and to perform deformation processing on the at least one mesh control surface, to compress or stretch at least part of the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
In an embodiment, the mesh control surface is a first-type mesh control surface;
the image processing unit 23 is configured to determine at least one first-type mesh control surface corresponding to the first limb detection information, and to perform deformation processing on the at least one first-type mesh control surface based on a first deformation parameter, to compress or stretch the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
The first-type mesh control surface includes a plurality of first-type mesh control points;
the image processing unit 23 is configured to move, based on the first deformation parameter, at least some of the plurality of first-type mesh control points included in the first-type mesh control surface, to perform deformation processing on the first-type mesh control surface; wherein movement of any one of the plurality of first-type mesh control points deforms the first-type mesh control surface.
In another embodiment, the mesh control surface is a second-type mesh control surface;
the image processing unit 23 is configured to determine at least one second-type mesh control surface corresponding to the first limb detection information, and to perform deformation processing on the at least one second-type mesh control surface based on a second deformation parameter, to compress or stretch part of the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
The second-type mesh control surface includes a plurality of second-type mesh control points;
the image processing unit 23 is configured to move, based on the second deformation parameter, at least some of the plurality of second-type mesh control points included in the second-type mesh control surface, to perform deformation processing on the second-type mesh control surface; wherein movement of any one of the plurality of second-type mesh control points deforms the region of the second-type mesh control surface corresponding to that mesh control point.
In the embodiments of the present application, the obtaining unit 21, the grid division unit 22 and the image processing unit 23 in the apparatus may each be implemented in practical applications by a central processing unit (CPU), a digital signal processor (DSP), a microcontroller unit (MCU) or a field-programmable gate array (FPGA).
An embodiment of the present application further provides an image processing apparatus. FIG. 3 is a schematic diagram of the hardware structure of the image processing apparatus according to this embodiment; as shown in FIG. 3, the image processing apparatus includes a memory 32, a processor 31, and a computer program stored in the memory 32 and runnable on the processor 31, wherein the processor 31, when executing the program, implements the image processing method according to any one of the foregoing embodiments of the present application.
It can be understood that the various components in the image processing apparatus are coupled together through a bus system 33, and that the bus system 33 is used to implement connection and communication between these components. In addition to the data bus, the bus system 33 includes a power bus, a control bus and a status signal bus; however, for the sake of clarity, the various buses are all marked as the bus system 33 in FIG. 3.
It can be understood that the memory 32 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferroelectric random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); the magnetic surface memory may be a disk memory or a tape memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as a static random access memory (SRAM), a synchronous static random access memory (SSRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDRSDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synclink dynamic random access memory (SLDRAM), and a direct rambus random access memory (DRRAM). The memory 32 described in the embodiments of the present application is intended to include, but is not limited to, these and any other suitable types of memory.
The methods disclosed in the foregoing embodiments of the present application may be applied to the processor 31 or implemented by the processor 31. The processor 31 may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the above methods may be completed by an integrated logic circuit of hardware in the processor 31 or by instructions in the form of software. The processor 31 may be a general-purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 31 may implement or execute the methods, steps and logical block diagrams disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium; the storage medium is located in the memory 32, and the processor 31 reads the information in the memory 32 and completes the steps of the foregoing methods in combination with its hardware.
It should be noted that when the image processing apparatus provided in the foregoing embodiment performs image processing, the division into the above program modules is only used as an example for illustration; in practical applications, the above processing may be allocated to different program modules as required, that is, the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above. In addition, the image processing apparatus provided in the foregoing embodiment and the image processing method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not described here again.
In an exemplary embodiment, an embodiment of the present application further provides a computer-readable storage medium, for example a memory 32 including a computer program, which may be executed by the processor 31 of the image processing apparatus to complete the steps of the foregoing method. The computer-readable storage medium may be a memory such as an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc or CD-ROM, or any device including one or any combination of the above memories, such as a mobile phone, a computer, a tablet device or a personal digital assistant.
An embodiment of the present application further provides a computer-readable storage medium having computer instructions stored thereon, which, when executed by a processor, implement the image processing method according to any one of the foregoing embodiments of the present application.
An embodiment of the present application further provides a computer program including computer-readable instructions; when the computer-readable instructions run in a device, a processor in the device executes executable instructions for implementing the steps of the method according to any one of the foregoing embodiments of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and in actual implementation there may be other division manners, for example multiple units or components may be combined, or integrated into another system, or some features may be ignored or not implemented. In addition, the coupling, direct coupling or communication connection between the displayed or discussed components may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical or in other forms.
The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solutions of this embodiment.
In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve as a unit separately, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art can understand that all or part of the steps of the foregoing method embodiments may be completed by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium, and when the program is executed, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes various media that can store program code, such as a mobile storage device, a ROM, a RAM, a magnetic disk or an optical disc.
Alternatively, if the above integrated unit of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application, in essence or the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the methods described in the embodiments of the present application. The foregoing storage medium includes various media that can store program code, such as a mobile storage device, a ROM, a RAM, a magnetic disk or an optical disc.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement that can be easily conceived by those skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (19)

  1. An image processing method, the method comprising:
    obtaining a first image, and performing grid division on the first image to obtain a plurality of mesh control surfaces;
    determining a target object in the first image; and
    performing deformation processing on at least part of a limb region corresponding to the target object based on at least some mesh control surfaces of the plurality of mesh control surfaces, to generate a second image.
  2. The method according to claim 1, wherein determining the target object in the first image comprises:
    obtaining limb detection information of the target object in the first image, the limb detection information comprising limb key point information and/or limb contour point information;
    the limb key point information comprises coordinate information of limb key points;
    the limb contour point information comprises coordinate information of limb contour points.
  3. The method according to claim 1 or 2, wherein performing deformation processing on at least part of the limb region corresponding to the target object based on at least some mesh control surfaces of the plurality of mesh control surfaces comprises:
    determining at least part of a limb region of the target object to be deformed, and obtaining first limb detection information of the at least part of the limb region; and
    determining a first group of mesh control surfaces corresponding to the first limb detection information, and performing deformation processing on the first group of mesh control surfaces.
  4. The method according to claim 3, wherein determining the first group of mesh control surfaces corresponding to the first limb detection information and performing deformation processing on the first group of mesh control surfaces comprises:
    determining the corresponding first group of mesh control surfaces based on first limb key point information and/or first limb contour point information comprised in the first limb detection information, the first group of mesh control surfaces comprising at least one mesh control surface; and
    performing deformation processing on the at least one mesh control surface, to compress or stretch at least part of the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
  5. The method according to claim 3 or 4, wherein the mesh control surface is a first-type mesh control surface; and
    determining the first group of mesh control surfaces corresponding to the first limb detection information and performing deformation processing on the first group of mesh control surfaces comprises:
    determining at least one first-type mesh control surface corresponding to the first limb detection information, and performing deformation processing on the at least one first-type mesh control surface based on a first deformation parameter, to compress or stretch the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
  6. The method according to claim 5, wherein the first-type mesh control surface comprises a plurality of first-type mesh control points; and
    performing deformation processing on the at least one first-type mesh control surface based on the first deformation parameter comprises:
    moving, based on the first deformation parameter, at least some first-type mesh control points of the plurality of first-type mesh control points comprised in the first-type mesh control surface, to perform deformation processing on the first-type mesh control surface;
    wherein movement of any one of the plurality of first-type mesh control points deforms the first-type mesh control surface.
  7. The method according to claim 3 or 4, wherein the mesh control surface is a second-type mesh control surface; and
    determining the first group of mesh control surfaces corresponding to the first limb detection information and performing deformation processing on the first group of mesh control surfaces comprises:
    determining at least one second-type mesh control surface corresponding to the first limb detection information, and performing deformation processing on the at least one second-type mesh control surface based on a second deformation parameter, to compress or stretch part of the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
  8. The method according to claim 7, wherein the second-type mesh control surface comprises a plurality of second-type mesh control points; and
    performing deformation processing on the at least one second-type mesh control surface based on the second deformation parameter comprises:
    moving, based on the second deformation parameter, at least some second-type mesh control points of the plurality of second-type mesh control points comprised in the second-type mesh control surface, to perform deformation processing on the second-type mesh control surface;
    wherein movement of any one of the plurality of second-type mesh control points deforms the region of the second-type mesh control surface corresponding to that mesh control point.
  9. An image processing apparatus, the apparatus comprising an obtaining unit, a grid division unit and an image processing unit, wherein
    the obtaining unit is configured to obtain a first image;
    the grid division unit is configured to perform grid division on the first image obtained by the obtaining unit, to obtain a plurality of mesh control surfaces; and
    the image processing unit is configured to determine a target object in the first image obtained by the obtaining unit, and to perform deformation processing on at least part of a limb region corresponding to the target object based on at least some mesh control surfaces of the plurality of mesh control surfaces, to generate a second image.
  10. The apparatus according to claim 9, wherein the image processing unit is configured to obtain limb detection information of the target object in the first image; the limb detection information comprises limb key point information and/or limb contour point information; the limb key point information comprises coordinate information of limb key points; the limb contour point information comprises coordinate information of limb contour points.
  11. The apparatus according to claim 9 or 10, wherein the image processing unit is configured to determine at least part of a limb region of the target object to be deformed, obtain first limb detection information of the at least part of the limb region, determine a first group of mesh control surfaces corresponding to the first limb detection information, and perform deformation processing on the first group of mesh control surfaces.
  12. The apparatus according to claim 11, wherein the image processing unit is configured to determine the corresponding first group of mesh control surfaces based on first limb key point information and/or first limb contour point information comprised in the first limb detection information, the first group of mesh control surfaces comprising at least one mesh control surface; and to perform deformation processing on the at least one mesh control surface, to compress or stretch at least part of the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
  13. The apparatus according to claim 11 or 12, wherein the mesh control surface is a first-type mesh control surface; and
    the image processing unit is configured to determine at least one first-type mesh control surface corresponding to the first limb detection information, and to perform deformation processing on the at least one first-type mesh control surface based on a first deformation parameter, to compress or stretch the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
  14. The apparatus according to claim 13, wherein the first-type mesh control surface comprises a plurality of first-type mesh control points; and
    the image processing unit is configured to move, based on the first deformation parameter, at least some of the plurality of first-type mesh control points comprised in the first-type mesh control surface, to perform deformation processing on the first-type mesh control surface; wherein movement of any one of the plurality of first-type mesh control points deforms the first-type mesh control surface.
  15. The apparatus according to claim 11 or 12, wherein the mesh control surface is a second-type mesh control surface; and
    the image processing unit is configured to determine at least one second-type mesh control surface corresponding to the first limb detection information, and to perform deformation processing on the at least one second-type mesh control surface based on a second deformation parameter, to compress or stretch part of the limb region corresponding to the target object, and to compress or stretch at least part of the background region outside the target object.
  16. The apparatus according to claim 15, wherein the second-type mesh control surface comprises a plurality of second-type mesh control points; and
    the image processing unit is configured to move, based on the second deformation parameter, at least some of the plurality of second-type mesh control points comprised in the second-type mesh control surface, to perform deformation processing on the second-type mesh control surface; wherein movement of any one of the plurality of second-type mesh control points deforms the region of the second-type mesh control surface corresponding to that mesh control point.
  17. A computer-readable storage medium having computer instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 8.
  18. An image processing apparatus, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the steps of the image processing method according to any one of claims 1 to 8.
  19. A computer program, comprising computer instructions, which, when run in a processor of a device, implement the method according to any one of claims 1 to 8.
PCT/CN2019/092353 2018-07-25 2019-06-21 Image processing method and apparatus, and computer storage medium WO2020019915A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020207030087A KR20200133778A (ko) 2018-07-25 2019-06-21 이미지 처리 방법, 장치 및 컴퓨터 저장 매체
JP2021506036A JP7138769B2 (ja) 2018-07-25 2019-06-21 画像処理方法、装置及びコンピュータ記憶媒体
SG11202010404WA SG11202010404WA (en) 2018-07-25 2019-06-21 Image processing method and apparatus, and computer storage medium
US17/117,703 US20210097268A1 (en) 2018-07-25 2020-12-10 Image processing method and apparatus, and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810829498.0A CN110766607A (zh) 2018-07-25 2018-07-25 一种图像处理方法、装置和计算机存储介质
CN201810829498.0 2018-07-25

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/117,703 Continuation US20210097268A1 (en) 2018-07-25 2020-12-10 Image processing method and apparatus, and computer storage medium

Publications (1)

Publication Number Publication Date
WO2020019915A1 true WO2020019915A1 (zh) 2020-01-30

Family

ID=69181302

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/092353 WO2020019915A1 (zh) 2018-07-25 2019-06-21 一种图像处理方法、装置和计算机存储介质

Country Status (6)

Country Link
US (1) US20210097268A1 (zh)
JP (1) JP7138769B2 (zh)
KR (1) KR20200133778A (zh)
CN (1) CN110766607A (zh)
SG (1) SG11202010404WA (zh)
WO (1) WO2020019915A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651931A (zh) * 2020-12-15 2021-04-13 浙江大华技术股份有限公司 建筑物变形监测方法、装置和计算机设备
US11896769B2 (en) 2020-06-17 2024-02-13 Affirm Medical Technologies Ii, Llc Universal respiratory detector

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111145084B (zh) * 2019-12-25 2023-06-16 北京市商汤科技开发有限公司 图像处理方法及装置、图像处理设备及存储介质
CN114913549B (zh) * 2022-05-25 2023-07-07 北京百度网讯科技有限公司 图像处理方法、装置、设备及介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102496140A (zh) * 2011-12-06 2012-06-13 中国科学院自动化研究所 一种基于多层嵌套笼体的实时交互式图像变形方法
CN102541488A (zh) * 2010-12-09 2012-07-04 深圳华强游戏软件有限公司 一种实现投影屏幕的无缝对齐的图像处理方法及系统
CN104537608A (zh) * 2014-12-31 2015-04-22 深圳市中兴移动通信有限公司 一种图像处理的方法及其装置
CN105989576A (zh) * 2015-03-18 2016-10-05 卡西欧计算机株式会社 校正图像的装置及其方法
US20170330375A1 (en) * 2015-02-04 2017-11-16 Huawei Technologies Co., Ltd. Data Processing Method and Apparatus
CN107590708A (zh) * 2016-07-07 2018-01-16 梁如愿 一种生成用户特定体形模型的方法和装置

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3463125B2 (ja) * 1993-12-24 2003-11-05 カシオ計算機株式会社 画像変形方法およびその装置
JP2010176588A (ja) * 2009-01-30 2010-08-12 Sony Ericsson Mobilecommunications Japan Inc 端末装置、画像処理方法及びプログラム
JP5240795B2 (ja) * 2010-04-30 2013-07-17 オムロン株式会社 画像変形装置、電子機器、画像変形方法、および画像変形プログラム
JP2011259053A (ja) 2010-06-07 2011-12-22 Olympus Imaging Corp 画像処理装置および画像処理方法
CN104978707A (zh) * 2014-04-03 2015-10-14 陈鹏飞 基于轮廓线的图像变形技术
US9576385B2 (en) * 2015-04-02 2017-02-21 Sbitany Group LLC System and method for virtual modification of body parts
US10140764B2 (en) * 2016-11-10 2018-11-27 Adobe Systems Incorporated Generating efficient, stylized mesh deformations using a plurality of input meshes
CN107592708A (zh) 2017-10-25 2018-01-16 成都塞普奇科技有限公司 一种led用电源电路

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102541488A (zh) * 2010-12-09 2012-07-04 深圳华强游戏软件有限公司 一种实现投影屏幕的无缝对齐的图像处理方法及系统
CN102496140A (zh) * 2011-12-06 2012-06-13 中国科学院自动化研究所 一种基于多层嵌套笼体的实时交互式图像变形方法
CN104537608A (zh) * 2014-12-31 2015-04-22 深圳市中兴移动通信有限公司 一种图像处理的方法及其装置
US20170330375A1 (en) * 2015-02-04 2017-11-16 Huawei Technologies Co., Ltd. Data Processing Method and Apparatus
CN105989576A (zh) * 2015-03-18 2016-10-05 卡西欧计算机株式会社 校正图像的装置及其方法
CN107590708A (zh) * 2016-07-07 2018-01-16 梁如愿 一种生成用户特定体形模型的方法和装置

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11896769B2 (en) 2020-06-17 2024-02-13 Affirm Medical Technologies Ii, Llc Universal respiratory detector
CN112651931A (zh) * 2020-12-15 2021-04-13 浙江大华技术股份有限公司 建筑物变形监测方法、装置和计算机设备
CN112651931B (zh) * 2020-12-15 2024-04-26 浙江大华技术股份有限公司 建筑物变形监测方法、装置和计算机设备

Also Published As

Publication number Publication date
JP2021518964A (ja) 2021-08-05
KR20200133778A (ko) 2020-11-30
SG11202010404WA (en) 2020-11-27
US20210097268A1 (en) 2021-04-01
JP7138769B2 (ja) 2022-09-16
CN110766607A (zh) 2020-02-07

Similar Documents

Publication Publication Date Title
WO2020019915A1 (zh) 一种图像处理方法、装置和计算机存储介质
WO2019227917A1 (zh) 一种图像处理方法、装置和计算机存储介质
US11244449B2 (en) Image processing methods and apparatuses
JP2018200690A (ja) 情報処理方法及び情報処理装置
WO2020057667A1 (zh) 一种图像处理方法、装置和计算机存储介质
US11501407B2 (en) Method and apparatus for image processing, and computer storage medium
WO2021208151A1 (zh) 一种模型压缩方法、图像处理方法以及装置
CN109584327B (zh) 人脸老化模拟方法、装置以及设备
CN108830200A (zh) 一种图像处理方法、装置和计算机存储介质
CN108830784A (zh) 一种图像处理方法、装置和计算机存储介质
CN110060348B (zh) 人脸图像整形方法及装置
JP7475287B2 (ja) ポイントクラウドデータの処理方法、装置、電子機器、記憶媒体及びコンピュータプログラム
JP2011107877A5 (zh)
CN108765274A (zh) 一种图像处理方法、装置和计算机存储介质
WO2022033513A1 (zh) 目标分割方法、装置、计算机可读存储介质及计算机设备
US11769310B2 (en) Combining three-dimensional morphable models
CN110060287B (zh) 人脸图像鼻部整形方法及装置
CN116824090A (zh) 一种曲面重建方法及装置
CN110766603B (zh) 一种图像处理方法、装置和计算机存储介质
CN111145204B (zh) 一种边数可设定的对轮廓曲线的多边形简化方法
CN110111240A (zh) 一种基于强结构的图像处理方法、装置和存储介质
CN114638923A (zh) 一种特征对齐方法及装置
CN118134977A (zh) 基于nurbs的医疗图像体数据配准方法、系统及计算机介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19839856

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20207030087

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021506036

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19839856

Country of ref document: EP

Kind code of ref document: A1