WO2021218118A1 - Image Processing Method and Device - Google Patents

Image Processing Method and Device

Info

Publication number
WO2021218118A1
WO2021218118A1 (PCT/CN2020/129503)
Authority
WO
WIPO (PCT)
Prior art keywords
image
key points
target
key
point
Prior art date
Application number
PCT/CN2020/129503
Other languages
English (en)
French (fr)
Inventor
闫鑫
Original Assignee
北京达佳互联信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司
Priority to JP2022549510A (published as JP2023514342A)
Publication of WO2021218118A1
Priority to US17/822,493 (published as US20220405879A1)

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 3/00 Geometric image transformations in the plane of the image
            • G06T 3/18 Image warping, e.g. rearranging pixels individually
            • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
          • G06T 5/00 Image enhancement or restoration
            • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
          • G06T 7/00 Image analysis
            • G06T 7/20 Analysis of motion
              • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
            • G06T 7/70 Determining position or orientation of objects or cameras
              • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30196 Human being; Person
                • G06T 2207/30201 Face
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 10/00 Arrangements for image or video recognition or understanding
            • G06V 10/20 Image preprocessing
              • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
          • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
              • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
                • G06V 40/168 Feature extraction; Face representation
                  • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
            • H04N 23/60 Control of cameras or camera modules
              • H04N 23/61 Control of cameras or camera modules based on recognised objects
                • H04N 23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
              • H04N 23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
            • H04N 23/80 Camera processing pipelines; Components thereof

Definitions

  • the present disclosure relates to the field of computer technology, and in particular to an image processing method and device.
  • the embodiments of the present disclosure provide an image processing method and device, which can optimize the image processing effect.
  • the technical solution is as follows:
  • an image processing method including:
  • the second image is obtained by adjusting the positions of pixels in the first partial image and the second partial image; the adjustment is based on the center point of the region and the first adjustment parameter, the first partial image is the image corresponding to the region, and the second partial image is the image in the target area other than the first partial image.
  • the acquiring the second image includes:
  • the pixels in the second partial image are scattered and filled into the second partial image to obtain the second image.
  • the dispersing and filling the pixels in the second partial image into the second partial image to obtain the second image includes:
  • the second image is determined, and the second image is obtained by moving pixels in the second partial image by the first moving distance in the first moving direction.
  • the dispersing and filling the pixels in the second partial image into the second partial image to obtain the second image includes:
  • the second image is determined, and the second image is obtained by moving pixels in the second partial image by the second moving distance in the second moving direction.
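The two filling cases above (moving pixels of the second partial image toward the region's center point when the first partial image is reduced, and away from it when it is enlarged) can be sketched as follows. This is a minimal illustration of the claimed movement rule, not the patented implementation; the function name and the interpretation of the first adjustment parameter as a fraction of each pixel's distance to the center are assumptions.

```python
import numpy as np

def displace_ring_pixels(points, center, adjust, shrink=True):
    """Move pixel coordinates of the second partial image (the ring around
    the target part) toward the center point (shrink) or away from it
    (enlarge).

    points : (N, 2) array of pixel coordinates in the second partial image
    center : (2,) center point of the target-part region
    adjust : first adjustment parameter; here taken as a fraction of each
             point's distance to the center (an assumption)
    """
    points = np.asarray(points, dtype=float)
    center = np.asarray(center, dtype=float)
    vec = center - points                      # direction toward the center
    dist = np.linalg.norm(vec, axis=1, keepdims=True)
    unit = np.divide(vec, dist, out=np.zeros_like(vec), where=dist > 0)
    move = adjust * dist                       # first/second moving distance
    sign = 1.0 if shrink else -1.0             # toward vs. away from the center
    return points + sign * move * unit
```

In a real pipeline the displaced coordinates would drive a resampling (inverse mapping) of the ring region, so that no holes or overlaps appear between the adjusted target part and its surroundings.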
  • the determining the target area of the first image includes:
  • a second key point is determined.
  • the second key point, the first key point, and the target center point are on the same straight line, and the first distance is greater than the second distance, where the first distance is the distance between the second key point and the target center point, and the second distance is the distance between the first key point and the target center point;
  • the target area is determined, and the target area is determined based on a plurality of second key points.
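The construction above (each second key point collinear with a first key point and the target center point, but farther from the center) amounts to scaling the first key points outward about their center. A minimal sketch, assuming the target center point is the mean of the first key points and using an arbitrary expansion factor:

```python
import numpy as np

def expand_key_points(first_key_points, scale=1.5):
    """Expand the target part's key points outward to bound the target area.

    Each returned second key point lies on the line through the target
    center point and a first key point, at a greater distance from the
    center (guaranteed by scale > 1). Taking the center as the mean of
    the first key points and the value of `scale` are assumptions.
    """
    pts = np.asarray(first_key_points, dtype=float)
    center = pts.mean(axis=0)                  # target center point
    # Collinear with (center, p) and farther from the center than p.
    return center + scale * (pts - center)
```

The polygon through the second key points would then delimit the target area within which pixels are allowed to move.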
  • the determining the target center point includes any one of the following:
  • the central point of a part of the first key points is determined as the target central point, and the part of the first key points is located in the central area of the area.
  • the determining multiple first key points in the first image includes:
  • the plurality of third key points are key points of the target part, and the third image is an image of the previous frame of the first image;
  • the plurality of fourth key points are key points of the target part, and the fourth key points are determined by a key point determination model
  • the multiple first key points are determined, and the multiple first key points are determined based on the multiple third key points and the multiple fourth key points.
  • the determining the plurality of first key points includes:
  • For each fourth key point, determine a first target key point among the plurality of third key points, where the first target key point and the fourth key point have the same pixel value;
  • the determining the plurality of first key points includes any one of the following:
  • a second target key point among the plurality of third key points is determined, where the second target key point is a key point corresponding to the occluded target part; and the plurality of first key points are acquired, the plurality of first key points being composed based on the second target key point and the plurality of fourth key points;
  • the plurality of third key points are used as the plurality of first key points.
  • the method further includes:
  • the method further includes:
  • an image processing device including:
  • the first determining module is configured to determine a plurality of first key points in the first image, and the plurality of first key points are key points of the target part;
  • a second determining module configured to determine a target area of the first image, the target area being obtained based on expanding the areas corresponding to the plurality of first key points;
  • an image acquisition module configured to acquire a second image obtained by adjusting the positions of pixels in the first partial image and the second partial image, where the adjustment is based on the center point of the region and the first adjustment parameter, the first partial image is the image corresponding to the region, and the second partial image is the image in the target area excluding the first partial image.
  • the image acquisition module includes:
  • a shape adjustment unit configured to adjust the shape of the first partial image based on the center point of the region and the first adjustment parameter
  • the filling unit is configured to dispersely fill pixels in the second partial image into the second partial image to obtain the second image.
  • the filling unit is configured to determine a first movement direction in response to the first partial image being reduced, where the first movement direction is close to the center point; based on the first adjustment Parameters, determining the first moving distance; determining the second image, the second image is obtained by moving the pixels in the second partial image by the first moving distance in the first moving direction.
  • the filling unit is configured to determine a second movement direction in response to the first partial image being enlarged, the second movement direction being away from the center point; based on the first adjustment Parameter, determining a second moving distance; determining the second image, the second image is obtained by moving pixels in the second partial image by the second moving distance in the second moving direction.
  • the second determining module includes:
  • a first determining unit configured to determine a target center point, the target center point being obtained based on the plurality of first key points
  • the second determining unit is configured to determine a second key point for each first key point, where the second key point, the first key point, and the target center point are on the same straight line, and the first distance is greater than the second distance, where the first distance is the distance between the second key point and the target center point, and the second distance is the distance between the first key point and the target center point;
  • the third determining unit is configured to determine the target area, and the target area is determined based on a plurality of second key points.
  • the first determining unit is configured to determine a center point of the plurality of first key points as the target center point
  • the first determining unit is configured to determine a center point of a part of the first key point as the target center point, and the part of the first key point is located in a central area of the area.
  • the first determining module includes:
  • the fourth determining unit is configured to determine a plurality of third key points in a third image, where the plurality of third key points are key points of the target part, and the third image is the previous frame image of the first image;
  • the fifth determining unit is configured to determine a plurality of fourth key points in the first image, where the plurality of fourth key points are key points of the target part, and the fourth key points are determined by a key point determination model;
  • the sixth determining unit is configured to determine the plurality of first key points, and the plurality of first key points are determined based on the plurality of third key points and the plurality of fourth key points.
  • the sixth determining unit is configured to: for each fourth key point, determine a first target key point among the plurality of third key points, where the first target key point and the fourth key point have the same pixel value; determine the average position of a first position and a second position, where the first position is the position of the first target key point and the second position is the position of the fourth key point; and obtain the first key point, the first key point being obtained by rendering the pixel value of the fourth key point at the average position.
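The averaging described above, which stabilizes key points between consecutive frames, can be sketched as follows. Matching by a key-point id stands in for the "same pixel value" correspondence in the claim, and the dict representation is an assumption for illustration:

```python
def smooth_key_points(prev_points, curr_points):
    """Stabilize key points across frames by position averaging.

    prev_points : third key points from the previous frame, as a dict
                  mapping key-point id -> (x, y)
    curr_points : fourth key points of the current frame (model output),
                  same id scheme
    Returns the first key points: each located at the average of the
    matched previous and current positions, which damps frame-to-frame
    jitter of the detected target part.
    """
    smoothed = {}
    for kp_id, (cx, cy) in curr_points.items():
        px, py = prev_points.get(kp_id, (cx, cy))  # fall back to current
        smoothed[kp_id] = ((px + cx) / 2.0, (py + cy) / 2.0)
    return smoothed
```

The pixel value used at each averaged position comes from the current frame's key point, per the claim.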
  • the sixth determining unit is configured to determine a second target key point of the plurality of third key points in response to the target part being occluded, and the second target key point is Key points corresponding to the occluded target part; acquiring the plurality of first key points, and the plurality of first key points are composed based on the second target key point and the plurality of fourth key points;
  • the sixth determining unit is configured to, in response to the target part being blocked, use the plurality of third key points as the plurality of first key points.
  • the device further includes:
  • a third determining module configured to determine a second adjustment parameter in response to the target part being occluded, where the second adjustment parameter is a parameter for adjusting the third image;
  • the fourth determining module is configured to determine a first adjustment parameter, where the first adjustment parameter is obtained by adjusting the second adjustment parameter based on a preset amplitude.
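One plausible reading of the two modules above: while the target part is occluded, the current frame's (first) adjustment parameter is derived by stepping the previous frame's (second) adjustment parameter by a preset amplitude, so the adjustment fades out smoothly instead of vanishing abruptly. A sketch under that assumption (the step size and the floor at zero are illustrative):

```python
def fade_adjustment(second_adjust, amplitude=0.1):
    """Derive the first adjustment parameter from the second one.

    second_adjust : adjustment parameter used for the previous (third) image
    amplitude     : preset amplitude by which the parameter is reduced
                    each occluded frame (assumed value)
    """
    return max(0.0, second_adjust - amplitude)
```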
  • the device further includes:
  • a fifth determining module configured to determine the number of frames, where the number of frames is the number of consecutive frames of the image where the target part is blocked;
  • the image processing module is further configured to stop performing image processing on the next frame of image in response to the number of frames reaching the target number of frames.
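The frame-counting logic above (stop processing once the target part has been occluded for a target number of consecutive frames) can be sketched as:

```python
def should_stop_processing(occluded_history, target_frames=5):
    """Decide whether to stop adjusting the next frame.

    occluded_history : booleans, one per frame in order, True when the
                       target part was occluded in that frame
    target_frames    : target number of consecutive occluded frames
                       (the value 5 is an assumed example)
    """
    consecutive = 0
    for occluded in reversed(occluded_history):
        if not occluded:
            break
        consecutive += 1
    return consecutive >= target_frames
```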
  • an electronic device including: one or more processors,
  • volatile or non-volatile memory for storing instructions executable by the one or more processors;
  • the one or more processors are configured to perform the following steps:
  • the second image is obtained by adjusting the positions of pixels in the first partial image and the second partial image, the adjustment is based on the center point of the region and the first adjustment parameter, the first partial image Is an image corresponding to the area, and the second partial image is an image other than the first partial image in the target area.
  • the one or more processors are configured to perform the following steps:
  • the pixel points in the second partial image are scattered and filled into the second partial image to obtain the second image.
  • the one or more processors are configured to perform the following steps:
  • the one or more processors are configured to perform the following steps:
  • the second image is determined, and the second image is obtained by moving pixels in the second partial image by the second moving distance in the second moving direction.
  • the one or more processors are configured to perform the following steps:
  • a second key point is determined.
  • the second key point, the first key point, and the target center point are on the same straight line, and the first distance is greater than the second distance, where the first distance is the distance between the second key point and the target center point, and the second distance is the distance between the first key point and the target center point;
  • the target area is determined, and the target area is determined based on a plurality of second key points.
  • the one or more processors are configured to perform at least one of the following steps:
  • the central point of a part of the first key points is determined as the target central point, and the part of the first key points is located in the central area of the area.
  • the one or more processors are configured to perform the following steps:
  • the plurality of third key points are key points of the target part, and the third image is an image of the previous frame of the first image;
  • the plurality of fourth key points are key points of the target part, and the fourth key points are determined by a key point determination model
  • the multiple first key points are determined, and the multiple first key points are determined based on the multiple third key points and the multiple fourth key points.
  • the one or more processors are configured to perform the following steps:
  • For each fourth key point, determine a first target key point among the plurality of third key points, where the first target key point and the fourth key point have the same pixel value;
  • the one or more processors are configured to perform at least one of the following steps:
  • a second target key point among the plurality of third key points is determined, where the second target key point is a key point corresponding to the occluded target part; and the plurality of first key points are acquired, the plurality of first key points being composed based on the second target key point and the plurality of fourth key points;
  • the plurality of third key points are used as the plurality of first key points.
  • the one or more processors are configured to perform the following steps:
  • a first adjustment parameter is determined, where the first adjustment parameter is obtained by adjusting the second adjustment parameter based on a preset amplitude.
  • the one or more processors are configured to perform the following steps:
  • a computer-readable storage medium having instructions stored thereon; when the instructions are executed by a processor of an electronic device, the electronic device is caused to perform the following steps:
  • the second image is obtained by adjusting the positions of pixels in the first partial image and the second partial image, the adjustment is based on the center point of the region and the first adjustment parameter, the first partial image Is an image corresponding to the area, and the second partial image is an image other than the first partial image in the target area.
  • a computer program product; when instructions in the computer program product are executed by a processor of an electronic device, the electronic device is caused to execute the following steps:
  • the second image is obtained by adjusting the positions of pixels in the first partial image and the second partial image, the adjustment is based on the center point of the region and the first adjustment parameter, the first partial image Is an image corresponding to the area, and the second partial image is an image other than the first partial image in the target area.
  • the target area is obtained by expanding the area where the target part is located in the first image, so that the change to the target part in the adjusted second image gradually transitions into the other areas of the first image. This prevents the adjustment of the target part from distorting pixels in other areas of the image, thereby optimizing the image-processing effect.
  • Fig. 1 is a schematic diagram showing an image processing method according to an exemplary embodiment
  • Fig. 2 is a block diagram showing a terminal according to an exemplary embodiment
  • Fig. 3 is a block diagram showing a server according to an exemplary embodiment
  • Fig. 4 is a block diagram showing an image processing device according to an exemplary embodiment
  • Fig. 5 is a flowchart showing an image processing method according to an exemplary embodiment
  • Fig. 6 is a flowchart showing an image processing method according to an exemplary embodiment
  • Fig. 7 is a schematic diagram showing key points of a facial area according to an exemplary embodiment
  • Fig. 8 is a schematic diagram showing a target area according to an exemplary embodiment.
  • the user information involved in this disclosure is information authorized by the user or fully authorized by all parties.
  • the present disclosure provides an image processing method.
  • the electronic device completes the image processing of the first image by adjusting the target part in the first image.
  • the implementation environment of the embodiment of the present disclosure includes a user and an electronic device.
  • the user triggers an image processing operation
  • the electronic device receives the image processing operation, and performs image processing on the first image according to the image processing operation.
  • the first image is a captured still image, or the first image is an image in a video stream. In the embodiments of the present disclosure, this is not specifically limited.
  • the electronic device determines the first image from the video stream.
  • the video stream is a video stream corresponding to a long video, or the video stream is a video stream corresponding to a short video.
  • In response to the first image being a static image, the electronic device needs to collect the static image first, or the electronic device receives the static image sent by another electronic device.
  • the electronic device has an image capture function.
  • the electronic device collects static images.
  • the process of the electronic device collecting the static image is: the electronic device displays the captured image in the viewfinder, and in response to receiving the user's confirmation operation, the electronic device determines that the image in the viewfinder is the first image based on the confirmation operation.
  • the electronic device receives a static image sent by another electronic device, and determines the received static image as the first image.
  • the process of collecting the static image by other electronic devices is similar to the process of collecting the static image by the electronic device, and will not be repeated here.
  • In response to the first image being an image in a video stream, the electronic device needs to collect the video stream first, or the electronic device needs to receive the video stream sent by another electronic device.
  • the electronic device has an image capture function.
  • the electronic device receives the shooting start instruction input by the user and starts to collect the video stream.
  • In response to receiving the end-shooting instruction input by the user, the electronic device stops collecting the video stream, determines the video stream between the start-shooting instruction and the end-shooting instruction, and determines any frame of that video stream as the first image.
  • the electronic device receives a video stream sent by another electronic device, and determines the first image from the received video stream.
  • the process of collecting video streams by other electronic devices is similar to the process of collecting video streams by electronic devices, and will not be repeated here.
  • the electronic device performs image processing on the first image, and directly outputs the second image after image processing.
  • the electronic device first collects the first image and outputs the first image; in response to receiving the image processing instruction, it performs image processing on the first image to obtain the second image, which is not specifically limited in the embodiment of the present disclosure.
  • the image processing instruction carries the target part of the current image processing and the first adjustment parameter
  • the electronic device determines the target part of the current image processing and the first adjustment parameter based on the image processing instruction.
  • the electronic device sets the target part and the first adjustment parameter for image processing in advance, and in response to receiving the image adjustment instruction, directly performs image processing on the first image based on the target part and the first adjustment parameter.
  • the target part is the facial features or facial contours in the facial area, for example, the target part is eyes, eyebrows, bridge of the nose, mouth, or cheeks. Or, the target part is another body part, for example, waist, legs, etc.
  • the electronic device is a terminal, for example, the electronic device is a camera, a mobile phone, a tablet computer, or a wearable device.
  • an image processing application is installed in the terminal, and image processing is performed through the image processing application.
  • the image processing application is a camera application, a beauty camera application, a video shooting application, and the like.
  • the electronic device is a server for image processing.
  • the electronic device receives the first image to be processed sent by the other electronic device, performs image processing on the first image to obtain the second image, and returns the obtained second image to the other electronic device.
  • the server is a single server, a server cluster composed of multiple servers, or a cloud server.
  • an electronic device is also provided, and the electronic device includes: one or more processors,
  • volatile or non-volatile memory for storing instructions executable by the one or more processors;
  • one or more processors are configured to perform the following steps:
  • the target area of the first image is obtained based on the expansion of the areas corresponding to the multiple first key points;
  • the second image is obtained by adjusting the positions of the pixels in the first partial image and the second partial image; the adjustment is based on the center point of the region and the first adjustment parameter, the first partial image is the image corresponding to the region, and the second partial image is an image other than the first partial image in the target area.
  • one or more processors are configured to perform the following steps:
  • one or more processors are configured to perform the following steps:
  • the second image is determined, and the second image is obtained by moving the pixels in the second partial image by the second moving distance in the second moving direction.
  • the target center point is obtained based on multiple first key points
  • For each first key point, determine the second key point.
  • the second key point, the first key point, and the target center point are on the same straight line, and the first distance is greater than the second distance, where the first distance is the distance between the second key point and the target center point, and the second distance is the distance between the first key point and the target center point;
  • the target area is determined, and the target area is determined based on a plurality of second key points.
  • one or more processors are configured to perform at least one of the following steps:
  • the center point of part of the first key point is determined as the target center point, and the part of the first key point is located in the central area of the area.
  • one or more processors are configured to perform the following steps:
  • Multiple first key points are determined, and multiple first key points are determined based on multiple third key points and multiple fourth key points.
  • For each fourth key point, determine the first target key point among the plurality of third key points, where the first target key point and the fourth key point have the same pixel value;
  • one or more processors are configured to perform at least one of the following steps:
  • the multiple third key points are used as multiple first key points.
  • one or more processors are configured to perform the following steps:
  • the first adjustment parameter is determined, and the first adjustment parameter is obtained by adjusting the second adjustment parameter based on the preset amplitude.
  • one or more processors are configured to perform the following steps:
  • the target area is obtained by expanding the area where the target part is located in the first image, so that the change to the target part in the adjusted second image gradually transitions into the other areas of the first image. This prevents the adjustment of the target part from distorting pixels in other areas of the image, thereby optimizing the image-processing effect.
  • the terminal 200 includes: one or more processors 201 and a volatile or non-volatile memory 202.
  • the processor 201 includes one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • the processor 201 adopts at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 201 includes a main processor and a co-processor.
  • the main processor is a processor used to process data in an awake state, and is also called a CPU (Central Processing Unit);
  • the coprocessor is a low-power processor used to process data in the standby state.
  • the processor 201 is integrated with a GPU (Graphics Processing Unit), and the GPU is used for rendering and drawing content that needs to be displayed on the display screen.
  • the processor 201 further includes an AI (Artificial Intelligence) processor, and the AI processor is used to process computing operations related to machine learning.
  • the terminal 200 may optionally further include: a peripheral device interface 203 and at least one peripheral device.
  • the processor 201, the memory 202, and the peripheral device interface 203 are connected by a bus or signal line.
  • Each peripheral device is connected to the peripheral device interface 203 through a bus, a signal line or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 204, a touch display screen 205, a camera component 206, an audio circuit 207, a positioning component 208, and a power supply 209.
  • the peripheral device interface 203 can be used to connect at least one peripheral device to the processor 201 and the memory 202.
  • the at least one peripheral device is an I/O (Input/Output) related peripheral device.
  • the processor 201, the memory 202, and the peripheral device interface 203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 201, the memory 202, and the peripheral device interface 203 are implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 204 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 204 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the radio frequency circuit 204 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a user identity module card, and so on.
  • the radio frequency circuit 204 communicates with other electronic devices through at least one wireless communication protocol.
  • the wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity, wireless fidelity) networks.
  • the radio frequency circuit 204 also includes NFC (Near Field Communication) related circuits, which is not limited in the present disclosure.
  • the display screen 205 is used to display a UI (User Interface, user interface).
  • the UI includes graphics, text, icons, videos, and any combination of them.
  • the display screen 205 also has the ability to collect touch signals on or above the surface of the display screen 205.
  • the touch signal is input to the processor 201 as a control signal for processing.
  • the display screen 205 is also used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
  • one display screen 205 is provided on the front panel of the terminal 200; in other embodiments, there are at least two display screens 205, which are respectively provided on different surfaces of the terminal 200 or in a folding design;
  • the display screen 205 is a flexible display screen, which is arranged on the curved surface or the folding surface of the terminal 200.
  • the display screen 205 is also configured as a non-rectangular irregular pattern, that is, a special-shaped screen.
  • the display screen 205 is made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
  • the camera assembly 206 is used to capture images or videos.
  • the camera assembly 206 includes a front camera and a rear camera.
  • the front camera is arranged on the front panel of the terminal 200
  • the rear camera is arranged on the back of the terminal 200.
  • the camera assembly 206 also includes a flash.
  • the flash is a single-color temperature flash or a dual-color temperature flash. Dual color temperature flash refers to a combination of warm light flash and cold light flash used for light compensation under different color temperatures.
  • the audio circuit 207 includes a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 201 for processing, or input to the radio frequency circuit 204 to implement voice communication.
  • the microphone is an array microphone or an omnidirectional acquisition microphone.
  • the speaker is used to convert the electrical signal from the processor 201 or the radio frequency circuit 204 into sound waves.
  • the speakers are traditional thin-film speakers, or piezoelectric ceramic speakers.
  • in response to the speaker being a piezoelectric ceramic speaker, the audio circuit 207 can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement.
  • the audio circuit 207 also includes a headphone jack.
  • the positioning component 208 is used to locate the current geographic location of the terminal 200 to implement navigation or LBS (Location Based Service, location-based service).
  • the positioning component 208 is a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
  • the power supply 209 is used to supply power to various components in the terminal 200.
  • the power source 209 is alternating current, direct current, disposable batteries, or rechargeable batteries.
  • the rechargeable battery supports wired charging or wireless charging.
  • the rechargeable battery is also used to support fast charging technology.
  • the terminal 200 further includes one or more sensors 210.
  • the one or more sensors 210 include, but are not limited to: an acceleration sensor 211, a gyroscope sensor 212, a pressure sensor 213, a fingerprint sensor 214, an optical sensor 215, and a proximity sensor 216.
  • the acceleration sensor 211 detects the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 200. For example, the acceleration sensor 211 is used to detect the components of gravitational acceleration on three coordinate axes.
  • the processor 201 controls the touch screen 205 to display the user interface in a horizontal view or a vertical view according to the gravity acceleration signal collected by the acceleration sensor 211.
  • the acceleration sensor 211 is also used for the collection of game or user motion data.
  • the gyroscope sensor 212 detects the body direction and rotation angle of the terminal 200, and the gyroscope sensor 212 and the acceleration sensor 211 cooperate to collect the user's 3D actions on the terminal 200.
  • the processor 201 implements the following functions according to the data collected by the gyroscope sensor 212: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 213 is arranged on the side frame of the terminal 200 and/or the lower layer of the touch screen 205.
  • the processor 201 performs left and right hand recognition or quick operation according to the holding signal collected by the pressure sensor 213.
  • the processor 201 controls the operability controls on the UI interface according to the user's pressure operation on the touch display screen 205.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 214 is used to collect the user's fingerprint.
  • the processor 201 identifies the user's identity based on the fingerprint collected by the fingerprint sensor 214, or the fingerprint sensor 214 identifies the user's identity based on the collected fingerprint. In response to the user's identity being recognized as a trusted identity, the processor 201 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings.
  • the fingerprint sensor 214 is provided on the front, back, or side of the terminal 200. In response to a physical button or manufacturer logo being provided on the terminal 200, the fingerprint sensor 214 is integrated with the physical button or the manufacturer logo.
  • the optical sensor 215 is used to collect the ambient light intensity.
  • the processor 201 controls the display brightness of the touch screen 205 according to the ambient light intensity collected by the optical sensor 215. Specifically, in response to the high ambient light intensity, the display brightness of the touch screen 205 is increased; in response to the low ambient light intensity, the display brightness of the touch screen 205 is decreased.
  • the processor 201 also dynamically adjusts the shooting parameters of the camera assembly 206 according to the ambient light intensity collected by the optical sensor 215.
  • the proximity sensor 216, also called a distance sensor, is usually provided on the front panel of the terminal 200.
  • the proximity sensor 216 is used to collect the distance between the user and the front of the terminal 200.
  • in response to the proximity sensor 216 detecting that the distance between the user and the front of the terminal 200 is gradually decreasing, the processor 201 controls the touch display screen 205 to switch from the on-screen state to the off-screen state; in response to the proximity sensor 216 detecting that the distance between the user and the front of the terminal 200 is gradually increasing, the processor 201 controls the touch display screen 205 to switch from the off-screen state to the on-screen state.
  • FIG. 2 does not constitute a limitation on the terminal 200, and can include more or less components than shown in the figure, or combine some components, or adopt different component arrangements.
  • the electronic device is provided as a server.
  • FIG. 3 is a schematic diagram showing the structure of a server according to an exemplary embodiment.
  • the server 300 may vary greatly due to different configurations or performance, and includes one or more processors (Central Processing Units, CPU) 301 and one or more memories 302, where at least one instruction is stored in the memory 302, and the at least one instruction is loaded and executed by the processor 301 to implement the methods provided in the foregoing method embodiments.
  • the server 300 also has components such as a wired or wireless network interface, a keyboard, and an input/output interface, and the server also includes other components for implementing device functions, which will not be repeated here.
  • a computer-readable storage medium is also provided, and the computer-readable storage medium stores instructions.
  • when the instructions are executed by a processor of an electronic device, the electronic device is caused to perform the following steps:
  • a second image which is obtained by adjusting the positions of the pixels in the first partial image and the second partial image.
  • the adjustment is based on the center point of the region and the first adjustment parameter.
  • the first partial image corresponds to the region
  • the second partial image is an image other than the first partial image in the target area.
  • the computer-readable storage medium is a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • the target area is obtained by expanding the area where the target part is located in the first image, so that the change of the target part in the adjusted second image gradually affects other areas in the first image. This prevents the adjustment of the target part from abruptly affecting pixels in other areas of the image and causing image distortion, thereby optimizing the image processing effect.
  • the present disclosure also provides a computer program product.
  • when the instructions in the computer program product are executed by the processor of the electronic device, the electronic device is caused to execute the following steps:
  • a second image which is obtained by adjusting the positions of the pixels in the first partial image and the second partial image.
  • the adjustment is based on the center point of the region and the first adjustment parameter.
  • the first partial image corresponds to the region
  • the second partial image is an image other than the first partial image in the target area.
  • the target area is obtained by expanding the area where the target part is located in the first image, so that the change of the target part in the adjusted second image gradually affects other areas in the first image. This prevents the adjustment of the target part from abruptly affecting pixels in other areas of the image and causing image distortion, thereby optimizing the image processing effect.
  • Fig. 4 is a block diagram showing an image processing device according to an exemplary embodiment. Referring to Figure 4, the device includes:
  • the first determining module 410 is configured to determine a plurality of first key points in the first image, and the plurality of first key points are key points of the target part;
  • the second determining module 420 is configured to determine a target area of the first image, the target area being obtained based on the expansion of the areas corresponding to the plurality of first key points;
  • the image acquisition module 430 is configured to acquire a second image, which is obtained by adjusting the positions of pixels in the first partial image and the second partial image, and the adjustment is based on the center point of the region and the first adjustment parameter.
  • the first partial image is an image corresponding to the area
  • the second partial image is an image other than the first partial image in the target area.
  • the image acquisition module 430 includes:
  • a shape adjustment unit configured to adjust the shape of the first partial image based on the center point of the region and the first adjustment parameter
  • the filling unit is configured to dispersedly fill the pixels in the second partial image into the target area to obtain the second image.
  • the filling unit is configured to: in response to the first partial image being reduced, determine a first movement direction, where the first movement direction is toward the center point; determine a first moving distance based on the first adjustment parameter; and determine the second image, which is obtained by moving the pixels in the second partial image by the first moving distance in the first movement direction.
  • the filling unit is configured to: in response to the first partial image being enlarged, determine a second movement direction, where the second movement direction is away from the center point; determine a second moving distance based on the first adjustment parameter; and determine the second image, which is obtained by moving the pixels in the second partial image by the second moving distance in the second movement direction.
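The two filling-unit branches above both amount to moving each pixel of the second partial image along the straight line through the region's center point: toward it when the first partial image is reduced, away from it when it is enlarged. A minimal Python sketch, where the function name is ours and we assume the first adjustment parameter has already been converted into a moving distance in pixels:

```python
import math

def move_pixel_radially(point, center, move_distance, toward_center):
    """Move a 2-D pixel position along the line through `center`.

    toward_center=True  -> first movement direction (first partial image reduced)
    toward_center=False -> second movement direction (first partial image enlarged)
    """
    dx, dy = point[0] - center[0], point[1] - center[1]
    norm = math.hypot(dx, dy)
    if norm == 0:
        return point                      # the center point itself does not move
    ux, uy = dx / norm, dy / norm         # unit direction away from the center
    sign = -1.0 if toward_center else 1.0
    return (point[0] + sign * move_distance * ux,
            point[1] + sign * move_distance * uy)
```

Applied over every pixel of the second partial image, this yields the second image described above.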
  • the second determining module 420 includes:
  • the first determining unit is configured to determine a target center point, the target center point being obtained based on the plurality of first key points;
  • the second determining unit is configured to determine a second key point for each first key point, where the second key point, the first key point, and the target center point are on the same straight line, and the first distance is greater than the second distance; the first distance is the distance between the second key point and the target center point, and the second distance is the distance between the first key point and the target center point;
  • the first determining unit is configured to determine a center point of the plurality of first key points as the target center point
  • the first determining unit is configured to determine a center point of a part of the first key point as the target center point, and the part of the first key point is located in a central area of the area.
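The second determining unit's expansion can be sketched as placing each second key point on the ray from the target center point through the corresponding first key point, at a larger distance. The expansion ratio below is an illustrative stand-in for whatever factor an implementation derives from the first adjustment parameter:

```python
def expand_key_points(first_key_points, target_center, expansion_ratio=1.5):
    """For each first key point, compute a second key point collinear with the
    target center point and farther from it (first distance > second distance)."""
    cx, cy = target_center
    return [
        (cx + expansion_ratio * (x - cx), cy + expansion_ratio * (y - cy))
        for (x, y) in first_key_points
    ]
```

The polygon enclosed by the returned second key points is then usable as the target area.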
  • the first determining module 410 includes:
  • the fourth determining unit is configured to determine a plurality of third key points in the third image, the plurality of third key points are key points of the target part, and the third image is an image of the previous frame of the first image ;
  • the fifth determining unit is configured to determine a plurality of fourth key points in the first image, the plurality of fourth key points are key points of the target part, and the fourth key points are determined by a key point determination model;
  • the sixth determining unit is configured to determine the plurality of first key points, and the plurality of first key points are determined based on the plurality of third key points and the plurality of fourth key points.
  • the sixth determining unit is configured to: for each fourth key point, determine a first target key point among the plurality of third key points, where the first target key point and the fourth key point have the same pixel value; determine an average position of a first position and a second position, where the first position is the position of the first target key point and the second position is the position of the fourth key point; and obtain the first key point, which is obtained by rendering the pixel value of the fourth key point at the average position.
  • the sixth determining unit is configured to determine a second target key point among the plurality of third key points in response to the target part being occluded, where the second target key point is the key point corresponding to the occluded target part; and acquire the plurality of first key points, which are composed of the second target key point and the plurality of fourth key points.
  • the sixth determining unit is configured to, in response to the target part being blocked, use the plurality of third key points as the plurality of first key points.
  • the device further includes:
  • the third determining module is configured to determine a second adjustment parameter in response to the target part being occluded, where the second adjustment parameter is a parameter for adjusting the third image;
  • the fourth determining module is configured to determine a first adjustment parameter, where the first adjustment parameter is obtained by adjusting the second adjustment parameter based on a preset amplitude.
  • the device further includes:
  • the fifth determining module is configured to determine the number of frames, where the number of frames is the number of consecutive frames of the image where the target part is blocked;
  • the image processing module is also configured to stop performing image processing on the next frame of image in response to the number of frames reaching the target number of frames.
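The fifth determining module and the stop condition above can be sketched as a small counter over consecutive occluded frames; the threshold value here is illustrative, not a value from the patent:

```python
class OcclusionGate:
    """Stop image processing once the target part has been occluded in
    `target_frames` consecutive frames."""

    def __init__(self, target_frames=10):
        self.target_frames = target_frames
        self.occluded_count = 0

    def should_process(self, part_occluded):
        if part_occluded:
            self.occluded_count += 1
        else:
            self.occluded_count = 0       # any unoccluded frame resets the count
        return self.occluded_count < self.target_frames
```

Once `should_process` returns `False`, image processing on subsequent frames is skipped until the target part reappears.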
  • When the image processing device provided in the above embodiment performs image processing, only the division of the above functional modules is used as an example for illustration. In practical applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the electronic device is divided into different functional modules to complete all or part of the functions described above.
  • the image processing device provided in the foregoing embodiment and the image processing method embodiment belong to the same concept, and the specific implementation process is detailed in the method embodiment, which will not be repeated here.
  • the target area is obtained by expanding the area where the target part is located in the first image, so that the change of the target part in the adjusted second image gradually affects other areas in the first image. This prevents the adjustment of the target part from abruptly affecting pixels in other areas of the image and causing image distortion, thereby optimizing the image processing effect.
  • In the related art, in response to the user's desire to beautify a certain part of a face image, the electronic device generally obtains multiple key points corresponding to the part in the face image, and then adjusts the positions of these key points in the face image to realize the beautification of the part. For example, if the user wants to enlarge the eyes in the face image, the electronic device moves multiple key points corresponding to the eyes outward with the eyes as the center, so as to enlarge the eyes. For another example, if the user wants to thin the eyebrows in the face image, the electronic device moves multiple key points corresponding to the eyebrows inward with the eyebrows as the center, so as to realize eyebrow-thinning processing.
  • the multiple key points corresponding to the eyes will move outwards with the eye as the center.
  • the moved key points will occupy the positions of the pixels around the eyes, so that the pixels around the eyes will pile up.
  • the key points corresponding to the eyebrows move inward with the eyebrows as the center, so that pixels around the eyebrows will be missing after the eyebrow-thinning processing. It can be seen that the image processing method in the related art causes image distortion due to abrupt changes in the positions of image pixels, and the beautification effect on the face image is poor.
  • Fig. 5 is a flowchart showing an image processing method according to an exemplary embodiment. Referring to Fig. 5, the image processing method includes the following steps:
  • acquiring the second image includes:
  • the pixels in the second partial image are dispersedly filled into the target area to obtain the second image.
  • dispersedly filling the pixels in the second partial image into the target area to obtain the second image includes:
  • the second image is determined, and the second image is obtained by moving the pixels in the second partial image by the first moving distance in the first moving direction.
  • dispersedly filling the pixels in the second partial image into the target area to obtain the second image includes:
  • the second image is determined, and the second image is obtained by moving the pixels in the second partial image by the second moving distance in the second moving direction.
  • the determining the target area of the first image includes:
  • For each first key point, a second key point is determined.
  • the second key point, the first key point, and the target center point are on the same straight line, and the first distance is greater than the second distance, where the first distance is the distance between the second key point and the target center point, and the second distance is the distance between the first key point and the target center point;
  • the target area is determined, and the target area is determined based on a plurality of second key points.
  • determining the target center point includes any one of the following:
  • the center point of a part of the first key point is determined as the target center point, and the part of the first key point is located in the central area of the area.
  • the determining multiple first key points in the first image includes:
  • the plurality of third key points are key points of the target part, and the third image is an image of a previous frame of the first image;
  • the multiple first key points are determined, and the multiple first key points are determined based on the multiple third key points and the multiple fourth key points.
  • the determining the plurality of first key points includes:
  • For each fourth key point, a first target key point among the plurality of third key points is determined, where the first target key point and the fourth key point have the same pixel value;
  • the first key point is acquired, and the first key point is obtained by rendering the pixel value of the fourth key point to the average position.
  • determining the plurality of first key points includes any one of the following:
  • the plurality of third key points are used as the plurality of first key points.
  • the method further includes:
  • a first adjustment parameter is determined, and the first adjustment parameter is obtained by adjusting the second adjustment parameter based on a preset amplitude.
  • the method further includes:
  • the target area is obtained by expanding the area where the target part is located in the first image, so that the change of the target part in the adjusted second image gradually affects other areas in the first image. This prevents the adjustment of the target part from abruptly affecting pixels in other areas of the image and causing image distortion, thereby optimizing the image processing effect.
  • Fig. 6 is a flowchart showing an image processing method according to an exemplary embodiment. Referring to Fig. 6, the image processing method includes the following steps:
  • the electronic device determines multiple first key points in the first image.
  • the multiple first key points are key points of the target part.
  • the target part is the facial features or facial contours in the facial region in the first image.
  • the target part is eyes, eyebrows, bridge of nose, mouth or cheeks.
  • the target part is another body part, for example, waist, legs, etc.
  • the target part is a target part selected by the user, or the target part is a predetermined target part.
  • the electronic device receives the user's selection operation, and determines the target part selected by the user based on the selection operation.
  • the electronic device sets the target part for shape adjustment in advance. In this step, the electronic device directly calls the previously set target part. In the embodiments of the present disclosure, this is not specifically limited.
  • the target part is a part in the first image. Or, the target part is a plurality of parts in the first image. In the embodiments of the present disclosure, this is not specifically limited.
  • the electronic device determines multiple first key points of the target part.
  • FIG. 7 is a schematic diagram showing key points of a facial area according to an exemplary embodiment. If the target part is the eyebrows, the multiple first key points are the 10 key points numbered 19-28 in FIG. 7.
  • the electronic device only determines the multiple first key points of the target part based on the current first image. Or, the electronic device determines the multiple first key points corresponding to the target part through the previous frame of the first image. In some embodiments, the electronic device may directly determine multiple first key points corresponding to the target part from the first image. In this implementation manner, the electronic device directly determines multiple first key points corresponding to the target part from the first image, thereby simplifying the processing flow of determining multiple first key points and improving the efficiency of image processing.
  • the electronic device determines a plurality of first key points corresponding to the target part based on a plurality of third key points, where the plurality of third key points are the target part in the previous frame of the first image The key point.
  • the electronic device determines a plurality of third key points in the third image.
  • the plurality of third key points are key points of the target part, and the third image is the previous frame image of the first image.
  • the electronic device stores multiple key points of the third image. In this step, the electronic device directly determines the multiple third key points from the multiple key points according to the target part. In this implementation, the electronic device processes the acquired images in advance and stores the correspondence between each image and its pixels, so that the multiple third key points can be determined directly according to the target part, which simplifies the process of acquiring the multiple third key points and improves processing efficiency.
  • the electronic device determines the plurality of third key points through a first determination model, wherein the electronic device inputs the third image into the first determination model to obtain all key points of the third image, and determines the plurality of third key points from all the key points. Alternatively, the electronic device inputs the third image into a second determination model, and the second determination model outputs the plurality of third key points.
  • the first determination model and the second determination model are any neural network model.
  • the electronic device performs model training as required, and the first determination model and the second determination model are obtained through training by adjusting model parameters.
  • the number of the key points can be set as required.
  • the number of the key points is not specifically limited.
  • the number of the key points is 100, 101, 105, and so on. See Fig. 7, which shows 101 key points of the face area.
  • multiple third key points are determined through the model, thereby improving the accuracy of determining multiple third key points.
  • the electronic device determines a plurality of fourth key points in the first image.
  • the multiple fourth key points are key points of the target part, and the fourth key points are determined by a key point determination model.
  • This step is similar to the process of determining multiple third key points by the electronic device in step (A1), and will not be repeated here.
  • the electronic device determines the plurality of first key points.
  • the plurality of first key points are determined based on the plurality of third key points and the plurality of fourth key points.
  • the electronic device renders the pixel values of the multiple fourth key points at the average position to obtain multiple first key points.
  • the process is implemented through the following steps (a1)-(a3), including:
  • the electronic device determines the first target key point among the plurality of third key points.
  • the first target key point and the fourth key point have the same pixel value.
  • the electronic device first selects any fourth key point, and then determines the first target key point having the same pixel value as the fourth key point from the plurality of third key points.
  • the electronic device determines the average position of the first position and the second position.
  • the first position is the position of the first target key point
  • the second position is the position of the fourth key point.
  • the electronic device establishes the same coordinate system in the third image and the first image, and determines the coordinate positions of the third key point and the fourth key point with the same pixel value in this coordinate system to obtain the first position and the second position, which are averaged to obtain the average position.
  • the electronic device establishes different coordinate systems in the third image and the first image, respectively determines the coordinate positions of the third key point and the fourth key point in their respective coordinate systems, and then obtains the coordinate positions of the third key point and the fourth key point in the same coordinate system through the mapping relationship between the coordinate systems.
  • the electronic device first determines the first positions of the multiple third key points, and then determines the second positions of the multiple fourth key points. Alternatively, the electronic device first determines the second positions of the multiple fourth key points, and then determines the first positions of the multiple third key points. Or, the electronic device simultaneously determines the first positions of the multiple third key points and the second positions of the multiple fourth key points. In the embodiments of the present disclosure, the order in which the electronic device determines the first position and the second position is not specifically limited.
  • the first key point is obtained by rendering the pixel value of the fourth key point to the average position.
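Steps (a1)-(a3) amount to averaging the positions of matched third and fourth key points, with the first key point keeping the fourth key point's pixel value at the averaged position. A minimal sketch, matching key points by list index as a stand-in for the pixel-value matching described above:

```python
def smooth_key_points(third_key_points, fourth_key_points):
    """Average previous-frame (third) and current-frame (fourth) key point
    positions to obtain the first key point positions, reducing jitter
    between consecutive frames."""
    return [
        ((x3 + x4) / 2.0, (y3 + y4) / 2.0)
        for (x3, y3), (x4, y4) in zip(third_key_points, fourth_key_points)
    ]
```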
  • the electronic device can also average the positions of other pixels in the third image and the first image according to the correspondence between the third key points and the fourth key points, thereby adjusting all the pixels in the first image.
  • the electronic device also performs a weighted summation of the pixel value of the third key point and the pixel value of the fourth key point, and uses the weighted summation pixel value as the pixel value of the multiple first key points. Perform rendering.
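The weighted summation of pixel values mentioned above can be sketched per channel; the weights below are illustrative, not values from the patent:

```python
def blend_pixel_values(value_prev, value_curr, weight_prev=0.4, weight_curr=0.6):
    """Weighted sum of a third key point's and a fourth key point's pixel
    values; the result is what gets rendered at the first key point.
    Works per channel for RGB triples."""
    return tuple(
        weight_prev * p + weight_curr * c
        for p, c in zip(value_prev, value_curr)
    )
```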
  • In some cases, the target part in the image may be blocked. If the target part in the first image is not occluded, the electronic device directly uses the multiple fourth key points as the multiple first key points. If the target part in the first image is occluded, so that key points are missing from the fourth key points acquired by the electronic device, the electronic device determines the multiple first key points with the help of the target part in the previous frame of image. Correspondingly, after the electronic device acquires the first image, it recognizes the key points in the first image. In response to the electronic device detecting that the target part in the first image is blocked, the electronic device acquires the previously acquired third image, where the third image is an image with complete key points.
  • the electronic device selects, from the plurality of third key points, the key points corresponding to the missing fourth key points.
  • the process is: in response to the target part being occluded, the electronic device determines the second target key points among the plurality of third key points, the second target key points being the key points corresponding to the occluded target part; the multiple first key points are then acquired, and are composed of the second target key points and the multiple fourth key points.
  • the electronic device directly uses the plurality of third key points as the first key points, and the process is: in response to the target part being occluded, the electronic device uses the plurality of third key points as the plurality of first key points.
  • the electronic device determines a target area of the first image, where the target area is obtained based on the expansion of areas corresponding to the plurality of first key points.
  • the electronic device determines the area corresponding to the plurality of first key points in the first image, and then expands the area to obtain the target area.
  • the electronic device determines the area enclosed by the plurality of first key points.
  • the electronic device sequentially connects adjacent first key points, and uses the image area enclosed by the plurality of first key points as the region.
  • the electronic device expands the area based on the area and the multiple first key points to obtain the target area.
  • the target center point is obtained based on the plurality of first key points.
  • the electronic device determines the center point of the plurality of first key points as the target center point. For example, please continue to refer to FIG. 7 and take the target part as the eyebrows as an example.
  • the multiple first key points corresponding to the eyebrows are 10 key points 19-28, and the electronic device determines the center points corresponding to these 10 key points.
  • the electronic device first selects a part of the first key point in the central area from the plurality of first key points, and then determines the center point of the part of the first key point.
  • the electronic device determines the center point of a part of the first key point as the target center point, and the part of the first key point is located in the central area of the area.
  • for example, referring to FIG. 7 and taking the target part as the eyebrows, the multiple first key points corresponding to the eyebrows are the 10 key points 19-28. The electronic device first selects, from the key points 19-28, the key points located in the central area, and determines their center point as the target center point.
  • the electronic device determines a second key point, where the second key point, the first key point and the target center point are on the same straight line, and a first distance is greater than a second distance. The first distance is the distance between the second key point and the target center point, and the second distance is the distance between the first key point and the target center point.
  • the electronic device, with the target center point as the endpoint, draws a ray through each first key point. For example, continuing the example in step (1) above, the average position of the first key points 21, 22, 26, and 27 is taken as the target center point, and a ray is drawn toward each of the first key points 19-28, yielding 10 rays. Then, the electronic device determines a second key point on each ray to obtain the multiple second key points. The distance between a second key point and the target center point is greater than the distance between the first key point on the same ray and the target center point.
  • the electronic device intercepts a line segment on each obtained ray with the target center point as one endpoint, and the other endpoint of the line segment is the second key point.
  • the length of the line segment is greater than the length from the target center point to the first key point on the ray.
  • the length of the line segment is a preset length determined according to an empirical value. In the embodiment of the present disclosure, the length of the line segment is not specifically limited.
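The ray construction above amounts to placing each second key point on the line from the target center point through a first key point, at a greater distance. A minimal sketch, assuming a fixed scale factor rather than the disclosure's empirical segment length (the function name and `scale` parameter are illustrative assumptions):

```python
import numpy as np

def expand_key_points(first_pts, center, scale=1.5):
    """Place a second key point on the ray from `center` through each
    first key point, at `scale` times the original distance. Any
    scale > 1 guarantees the second key point is farther from the
    center than the first key point on the same ray."""
    pts = np.asarray(first_pts, dtype=float)
    c = np.asarray(center, dtype=float)
    return c + (pts - c) * scale
```

With 10 eyebrow key points as input, this yields the 10 outer key points that bound the target area.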
  • the process of determining the multiple second key points is similar to the process in which the electronic device determines the multiple third key points in step (A1) of step S601, and will not be repeated here.
  • the electronic device determines the target area.
  • the target area is determined based on a plurality of second key points.
  • the electronic device is connected to a plurality of second key points in sequence, and an image area surrounded by the plurality of second key points is used as the target area.
  • the electronic device uses a mesh algorithm to determine the mesh area corresponding to the plurality of second key points, and uses the mesh area as the target area. The process is: the electronic device uses the plurality of second key points as endpoints, and performs grid expansion on the target part corresponding to the plurality of first key points to obtain a grid area. The electronic device combines the obtained grid area with the area corresponding to the first key points to form the target area. Referring to FIG. 8, the electronic device uses the multiple second key points as endpoints and connects them to the multiple first key points to achieve grid expansion, obtaining a grid area composed of triangles; this grid area together with the area corresponding to the first key points constitutes the target area.
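The grid expansion above can be sketched as index generation. Assuming the first key points are stored at indices 0..n-1 and their second (outer) counterparts at n..2n-1 of one shared vertex array, a hypothetical helper could triangulate the ring between the two contours:

```python
def build_ring_mesh(n):
    """Return index triangles connecting n inner key points (0..n-1)
    to their n outer key points (n..2n-1): each quad between
    consecutive key-point pairs is split into two triangles, as in
    the triangular grid expansion of the target area."""
    tris = []
    for i in range(n):
        j = (i + 1) % n              # wrap around the closed contour
        tris.append((i, j, n + i))   # inner_i, inner_j, outer_i
        tris.append((n + i, n + j, j))  # outer_i, outer_j, inner_j
    return tris
```

Each triangle can then be rendered or warped independently, which is what makes the later per-patch pixel adjustment possible.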
  • the electronic device adjusts the shape of the first partial image based on the center point of the region and the first adjustment parameter.
  • the first partial image is an image corresponding to the region.
  • the first adjustment parameter is a parameter used to adjust the first partial image.
  • the first adjustment parameter is a system default integer parameter, or the first adjustment parameter is a parameter generated based on user settings, or the first adjustment parameter is a parameter determined based on the second adjustment parameter of the third image.
  • the first adjustment parameter includes at least an adjustment method and an adjustment intensity, as well as a color parameter, a brightness parameter, and the like.
  • the electronic device adjusts the shape of the target part based on the adjustment method in the first adjustment parameter, and the shape adjustment is to perform zoom-in adjustment, zoom-out adjustment, etc., to the target part.
  • the first adjustment parameter is an adjustment parameter input by the user and received by the electronic device.
  • the first adjustment parameter is an adjustment parameter set based on different target parts in the electronic device. For example, if the target part is the eyes, the adjustment parameter is to enlarge the eyes; if the target part is the eyebrows, the adjustment parameter is to narrow the eyebrows.
  • the adjustment of the eyes is a magnification adjustment, and the adjustment of the eyebrows is a reduction adjustment.
  • the electronic device adjusts the shape of the first partial image, so as to realize the enlargement or reduction of the target part.
  • the electronic device determines the relationship between pixels in the first partial image according to a grid algorithm, and adjusts the positions of the pixels in the first partial image.
  • the electronic device adjusts the target part through a liquefaction algorithm.
  • the process is: the electronic device determines the center position corresponding to the target part, and draws a circle centered on that position so that the first partial image corresponding to the target part lies within the circle; the adjustment of the first partial image corresponding to the target part is achieved by changing the size of the circle.
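A liquify-style adjustment of this kind can be sketched as radially scaling pixel offsets inside the circle, with the effect fading to zero at the boundary. This is a simplified illustration, not the disclosure's exact algorithm; the falloff curve and `strength` parameter are assumptions.

```python
import numpy as np

def liquify(points, center, radius, strength):
    """Scale each point's offset from `center`. Points inside the
    circle move toward (strength < 1) or away from (strength > 1)
    the center; points on or beyond the boundary are untouched."""
    pts = np.asarray(points, dtype=float)
    c = np.asarray(center, dtype=float)
    offset = pts - c
    dist = np.linalg.norm(offset, axis=-1, keepdims=True)
    # Full effect at the center, linearly fading to none at the boundary.
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)
    return c + offset * (1.0 + (strength - 1.0) * falloff)
```

Because the falloff reaches zero at the circle boundary, the deformation blends smoothly into the surrounding image, which is the stated goal of the target-area expansion.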
  • the electronic device determines a first adjustment parameter for shape adjustment, and performs shape adjustment on the first partial image based on the first adjustment parameter.
  • the process of the electronic device performing shape adjustment on the first partial image based on the first adjustment parameter is implemented through the following steps (1)-(2), including:
  • the electronic device determines the first adjustment parameter.
  • the electronic device determines the current adjustment parameter as the first adjustment parameter. In some embodiments, when the target part is not blocked, the electronic device determines the current adjustment parameter as the first adjustment parameter. When the target part is blocked, the electronic device determines the first adjustment parameter based on the second adjustment parameter of the previous frame of image.
  • the process of determining the first adjustment parameter is: in response to the target part being occluded, determining a second adjustment parameter, the second adjustment parameter being a parameter for adjusting the third image; and determining the first adjustment parameter, the first adjustment parameter being obtained by adjusting the second adjustment parameter by a preset amplitude.
  • the second adjustment parameter is an adjustment parameter used by the electronic device when adjusting the third image.
  • the second adjustment parameter is a system default adjustment parameter, or an adjustment parameter generated based on user settings.
  • this makes the image adjustment process smoother and prevents sudden changes when the image is adjusted.
  • the target part in the image may be occluded.
  • in response to the target part being occluded, the electronic device performs fault-tolerant processing when adjusting the target part.
  • the target number of frames serves as the fault-tolerance duration.
  • in the case of missing key points, the adjustment of the previous frame can be continued and the target part of the current frame can still be adjusted, while the adjustment amplitude is gradually weakened; the first adjustment parameter is restored to the original adjustment parameter when the first key points are no longer missing.
  • the electronic device determines the number of frames, which is the number of consecutive frames of the image in which the target part is blocked; in response to the number of frames reaching the target frame number, it stops performing image processing on the next frame of image.
  • the first adjustment parameter can be gradually restored to the second adjustment parameter. For example, in response to the current first image being an image with missing fourth key points, the electronic device increments the count of consecutive frames with missing key points: if the current count is n, then in response to multiple fourth key points being missing in the first image, the electronic device updates the count to n+1. In response to the current first image not being an image with missing fourth key points, the electronic device clears the count of consecutive frames with missing key points, and then increases the current adjustment parameter by the preset amplitude until the adjustment parameter is restored to the second adjustment parameter.
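The occlusion fault-tolerance logic above — count consecutive occluded frames, weaken the adjustment while occluded, stop after a target frame number, and gradually restore the parameter once key points reappear — can be sketched as a small controller. The class name, multiplicative decay, and default values are illustrative assumptions, not taken from the disclosure.

```python
class AdjustmentController:
    """Track consecutive occluded frames and manage the adjustment
    parameter: decay it while occluded, restore it stepwise once key
    points reappear, and signal a stop after `max_frames` occlusions."""

    def __init__(self, base_param, decay=0.9, max_frames=5):
        self.base_param = base_param   # the second adjustment parameter
        self.param = base_param        # the current first adjustment parameter
        self.decay = decay             # preset amplitude (multiplicative here)
        self.max_frames = max_frames   # target frame number
        self.occluded_frames = 0

    def step(self, occluded):
        if occluded:
            self.occluded_frames += 1
            if self.occluded_frames >= self.max_frames:
                return None            # stop processing subsequent frames
            self.param *= self.decay   # gradually weaken the adjustment
        else:
            self.occluded_frames = 0
            # Gradually restore toward the original parameter.
            self.param = min(self.base_param, self.param / self.decay)
        return self.param
```

Calling `step(occluded)` once per frame yields the parameter to use for that frame, or `None` when processing should stop.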
  • the electronic device adjusts each pixel in the first partial image based on the center point and the first adjustment parameter.
  • the electronic device keeps the position of the center point unchanged, and adjusts the position of the pixel point in the first partial image based on the first adjustment parameter to realize the shape adjustment of the first partial image.
  • the electronic device adjusts the position of each pixel in the first partial image based on the first adjustment parameter and the center point, so as to adjust the target part while preventing areas other than the target part from being adjusted, improving the accuracy of the adjustment.
  • in step S604, the electronic device disperses and fills the pixels in the second partial image into the second partial image to obtain a second image.
  • the second partial image is an image other than the first partial image in the target area.
  • the electronic device adjusts the target part, so that the position of the pixel in the first partial image is changed, causing a deformed area in the target area.
  • the electronic device adjusts the positions of the pixels in the target area following the first partial image, to achieve a smooth transition of the adjustment of the target part into areas other than the first partial image and to prevent sudden changes in the image.
  • the following steps (A1)-(A3) are used to achieve diffusion and filling of pixels, including:
  • the electronic device determines a first movement direction, the first movement direction being close to the center point.
  • when the first partial image is reduced, the moving direction of the pixels in the second partial image is consistent with the moving direction during the shrinking of the first partial image, so the moving direction of the pixels in the second partial image is determined to be the direction toward the center point.
  • the electronic device determines the first movement distance based on the first adjustment parameter.
  • the electronic device determines, based on the first adjustment parameter, the distorted area that may be produced by reducing the first partial image, and uses the distorted area to determine the first movement distance that the pixels in the second partial image need to move in order to fill it.
  • the electronic device determines a second image, and the second image is obtained by moving pixels in the second partial image by the first moving distance in the first moving direction.
  • the process of adjusting the first partial image and the second partial image by the electronic device is performed synchronously, or the first partial image is adjusted first, and then the second partial image is adjusted.
  • if the electronic device adjusts the first partial image and the second partial image synchronously, the first movement distance is determined directly based on the first adjustment parameter through a predetermined movement distance algorithm.
  • the electronic device moves the pixels in the second partial image and the pixels in the first partial image to implement image adjustment of the first image.
  • the electronic device uses the triangular patch corresponding to each of the plurality of first key points, adjusts the positions of the pixels in the triangular patches corresponding to the second partial image based on the movement distance and direction of the first key points, and adjusts the pixels within each triangular patch based on the moving distance and moving direction of the corresponding first key point.
  • the moving distance and moving direction of each pixel are the same or different. If they are the same, the electronic device directly averages the moving distance and moving direction based on the number of pixels; if they are different, the electronic device determines the moving distance and moving direction for each pixel separately.
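The dispersal-and-fill step — moving each pixel of the second partial image by a fixed distance along the line to the center point, toward it after a reduction and away from it after an enlargement — can be sketched as below. This is a minimal illustration; in practice the distance would come from the movement distance algorithm rather than a constant, and the function name is an assumption.

```python
import numpy as np

def fill_pixels(pixels, center, distance, shrink=True):
    """Move each pixel of the second partial image by `distance`
    along its line to `center`: toward the center when the first
    partial image was reduced (shrink=True), away from it when the
    first partial image was enlarged (shrink=False)."""
    pts = np.asarray(pixels, dtype=float)
    c = np.asarray(center, dtype=float)
    offset = pts - c
    norm = np.linalg.norm(offset, axis=-1, keepdims=True)
    # Unit direction away from the center; pixels at the center stay put.
    unit = np.divide(offset, norm, out=np.zeros_like(offset), where=norm > 0)
    sign = -1.0 if shrink else 1.0
    return pts + sign * distance * unit
```

Applying this to the second partial image closes the gap (or absorbs the overlap) left by resizing the first partial image, so the transition to the rest of the frame stays smooth.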
  • steps (B1)-(B3) are used to achieve diffusion and filling of pixels, including:
  • the electronic device determines a second moving direction, the second moving direction being away from the center point.
  • when the first partial image is enlarged, the moving direction of the pixels in the second partial image is the same as the moving direction during the enlargement of the first partial image, so the moving direction of the pixels in the second partial image is determined to be the direction away from the center point.
  • the electronic device determines the second moving distance based on the first adjustment parameter.
  • This step is similar to step (A2) in S605, and will not be repeated here.
  • the electronic device determines the second image, and the second image is obtained by moving pixels in the second partial image by the second moving distance in the second moving direction.
  • This step is similar to step (A3) in S605, and will not be repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The present disclosure relates to an image processing method and apparatus, and belongs to the field of computer technology. The method includes: determining a plurality of first key points in a first image; determining a target region of the first image, where the target region is obtained by expanding the region corresponding to the plurality of first key points; and adjusting the target part based on the region and the target region to obtain a second image. With the above scheme, changes to the target part in the adjusted second image can gradually propagate to other regions of the first image, preventing the adjustment of the target part from affecting pixels in other parts of the facial region and causing image distortion, thereby optimizing the image processing effect.

Description

Image processing method and apparatus
This disclosure claims priority to Chinese patent application No. 202010364388.9, filed on April 30, 2020 and entitled "Image processing method, apparatus and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computer technology, and in particular to an image processing method and apparatus.
Background
At present, users have increasingly high demands for beautifying the faces in captured images; for example, users want larger eyes, thinner eyebrows, or a higher nose bridge. Therefore, the captured image needs to be processed so that the face in the image can meet the user's needs.
Summary
The embodiments of the present disclosure provide an image processing method and apparatus capable of optimizing the image processing effect. The technical solutions are as follows:
According to one aspect of the embodiments of the present disclosure, an image processing method is provided, the method including:
determining a plurality of first key points in a first image, the plurality of first key points being key points of a target part;
determining a target region of the first image, the target region being obtained by expanding a region corresponding to the plurality of first key points; and
acquiring a second image, the second image being obtained by adjusting positions of pixels in a first partial image and a second partial image, the adjustment being based on a center point of the region and a first adjustment parameter, the first partial image being the image corresponding to the region, and the second partial image being the image in the target region other than the first partial image.
In some embodiments, acquiring the second image includes:
adjusting the shape of the first partial image based on the center point of the region and the first adjustment parameter; and
dispersing and filling the pixels of the second partial image into the second partial image to obtain the second image.
In some embodiments, dispersing and filling the pixels of the second partial image into the second partial image to obtain the second image includes:
in response to the first partial image being reduced, determining a first movement direction, the first movement direction being toward the center point;
determining a first movement distance based on the first adjustment parameter; and
determining the second image, the second image being obtained by moving the pixels in the second partial image by the first movement distance in the first movement direction.
In some embodiments, dispersing and filling the pixels of the second partial image into the second partial image to obtain the second image includes:
in response to the first partial image being enlarged, determining a second movement direction, the second movement direction being away from the center point;
determining a second movement distance based on the first adjustment parameter; and
determining the second image, the second image being obtained by moving the pixels in the second partial image by the second movement distance in the second movement direction.
In some embodiments, determining the target region of the first image includes:
determining a target center point, the target center point being obtained based on the plurality of first key points;
for each first key point, determining a second key point, where the second key point, the first key point and the target center point are on the same straight line, and a first distance is greater than a second distance, the first distance being the distance between the second key point and the target center point, and the second distance being the distance between the first key point and the target center point; and
determining the target region, the target region being determined based on the plurality of second key points.
In some embodiments, determining the target center point includes any one of the following:
determining the center point of the plurality of first key points as the target center point; or
determining the center point of a subset of the first key points as the target center point, the subset of first key points being located in the central area of the region.
In some embodiments, determining the plurality of first key points in the first image includes:
determining a plurality of third key points in a third image, the plurality of third key points being key points of the target part, and the third image being the previous frame of the first image;
determining a plurality of fourth key points in the first image, the plurality of fourth key points being key points of the target part, the fourth key points being determined by a key point determination model; and
determining the plurality of first key points, the plurality of first key points being determined based on the plurality of third key points and the plurality of fourth key points.
In some embodiments, determining the plurality of first key points includes:
for each fourth key point, determining a first target key point among the plurality of third key points, the first target key point having the same pixel value as the fourth key point;
determining the average position of a first position and a second position, the first position being the position of the first target key point, and the second position being the position of the fourth key point; and
acquiring the first key point, the first key point being obtained by rendering the pixel value of the fourth key point to the average position.
In some embodiments, determining the plurality of first key points includes any one of the following:
in response to the target part being occluded, determining second target key points among the plurality of third key points, the second target key points being the key points corresponding to the occluded target part; and acquiring the plurality of first key points, the plurality of first key points being composed of the second target key points and the plurality of fourth key points; or
in response to the target part being occluded, using the plurality of third key points as the plurality of first key points.
In some embodiments, the method further includes:
in response to the target part being occluded, determining a second adjustment parameter, the second adjustment parameter being a parameter for adjusting the third image; and
determining the first adjustment parameter, the first adjustment parameter being obtained by adjusting the second adjustment parameter by a preset amplitude.
In some embodiments, the method further includes:
determining a frame count, the frame count being the number of consecutive frames in which the target part is occluded; and
in response to the frame count reaching a target frame number, stopping image processing of the next frame.
According to another aspect of the embodiments of the present disclosure, an image processing apparatus is provided, the apparatus including:
a first determination module configured to determine a plurality of first key points in a first image, the plurality of first key points being key points of a target part;
a second determination module configured to determine a target region of the first image, the target region being obtained by expanding a region corresponding to the plurality of first key points; and
an image acquisition module configured to acquire a second image, the second image being obtained by adjusting positions of pixels in a first partial image and a second partial image, the adjustment being based on a center point of the region and a first adjustment parameter, the first partial image being the image corresponding to the region, and the second partial image being the image in the target region other than the first partial image.
In some embodiments, the image acquisition module includes:
a shape adjustment unit configured to adjust the shape of the first partial image based on the center point of the region and the first adjustment parameter; and
a filling unit configured to disperse and fill the pixels of the second partial image into the second partial image to obtain the second image.
In some embodiments, the filling unit is configured to: in response to the first partial image being reduced, determine a first movement direction, the first movement direction being toward the center point; determine a first movement distance based on the first adjustment parameter; and determine the second image, the second image being obtained by moving the pixels in the second partial image by the first movement distance in the first movement direction.
In some embodiments, the filling unit is configured to: in response to the first partial image being enlarged, determine a second movement direction, the second movement direction being away from the center point; determine a second movement distance based on the first adjustment parameter; and determine the second image, the second image being obtained by moving the pixels in the second partial image by the second movement distance in the second movement direction.
In some embodiments, the second determination module includes:
a first determination unit configured to determine a target center point, the target center point being obtained based on the plurality of first key points;
a second determination unit configured to, for each first key point, determine a second key point, where the second key point, the first key point and the target center point are on the same straight line, and a first distance is greater than a second distance, the first distance being the distance between the second key point and the target center point, and the second distance being the distance between the first key point and the target center point; and
a third determination unit configured to determine the target region, the target region being determined based on the plurality of second key points.
In some embodiments, the first determination unit is configured to determine the center point of the plurality of first key points as the target center point;
or the first determination unit is configured to determine the center point of a subset of the first key points as the target center point, the subset of first key points being located in the central area of the region.
In some embodiments, the first determination module includes:
a fourth determination unit configured to determine a plurality of third key points in a third image, the plurality of third key points being key points of the target part, and the third image being the previous frame of the first image;
a fifth determination unit configured to determine a plurality of fourth key points in the first image, the plurality of fourth key points being key points of the target part, the fourth key points being determined by a key point determination model; and
a sixth determination unit configured to determine the plurality of first key points, the plurality of first key points being determined based on the plurality of third key points and the plurality of fourth key points.
In some embodiments, the sixth determination unit is configured to: for each fourth key point, determine a first target key point among the plurality of third key points, the first target key point having the same pixel value as the fourth key point; determine the average position of a first position and a second position, the first position being the position of the first target key point, and the second position being the position of the fourth key point; and acquire the first key point, the first key point being obtained by rendering the pixel value of the fourth key point to the average position.
In some embodiments, the sixth determination unit is configured to: in response to the target part being occluded, determine second target key points among the plurality of third key points, the second target key points being the key points corresponding to the occluded target part; and acquire the plurality of first key points, the plurality of first key points being composed of the second target key points and the plurality of fourth key points;
or the sixth determination unit is configured to, in response to the target part being occluded, use the plurality of third key points as the plurality of first key points.
In some embodiments, the apparatus further includes:
a third determination module configured to, in response to the target part being occluded, determine a second adjustment parameter, the second adjustment parameter being a parameter for adjusting the third image; and
a fourth determination module configured to determine the first adjustment parameter, the first adjustment parameter being obtained by adjusting the second adjustment parameter by a preset amplitude.
In some embodiments, the apparatus further includes:
a fifth determination module configured to determine a frame count, the frame count being the number of consecutive frames in which the target part is occluded; and
an image processing module further configured to, in response to the frame count reaching a target frame number, stop image processing of the next frame.
According to another aspect of the embodiments of the present disclosure, an electronic device is provided, the electronic device including: one or more processors, and
a volatile or non-volatile memory for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the following steps:
determining a plurality of first key points in a first image, the plurality of first key points being key points of a target part;
determining a target region of the first image, the target region being obtained by expanding a region corresponding to the plurality of first key points; and
acquiring a second image, the second image being obtained by adjusting positions of pixels in a first partial image and a second partial image, the adjustment being based on a center point of the region and a first adjustment parameter, the first partial image being the image corresponding to the region, and the second partial image being the image in the target region other than the first partial image.
In some embodiments, the one or more processors are configured to perform the following steps:
adjusting the shape of the first partial image based on the center point of the region and the first adjustment parameter; and
dispersing and filling the pixels of the second partial image into the second partial image to obtain the second image.
In some embodiments, the one or more processors are configured to perform the following steps:
in response to the first partial image being reduced, determining a first movement direction, the first movement direction being toward the center point;
determining a first movement distance based on the first adjustment parameter; and
determining the second image, the second image being obtained by moving the pixels in the second partial image by the first movement distance in the first movement direction.
In some embodiments, the one or more processors are configured to perform the following steps:
in response to the first partial image being enlarged, determining a second movement direction, the second movement direction being away from the center point;
determining a second movement distance based on the first adjustment parameter; and
determining the second image, the second image being obtained by moving the pixels in the second partial image by the second movement distance in the second movement direction.
In some embodiments, the one or more processors are configured to perform the following steps:
determining a target center point, the target center point being obtained based on the plurality of first key points;
for each first key point, determining a second key point, where the second key point, the first key point and the target center point are on the same straight line, and a first distance is greater than a second distance, the first distance being the distance between the second key point and the target center point, and the second distance being the distance between the first key point and the target center point; and
determining the target region, the target region being determined based on the plurality of second key points.
In some embodiments, the one or more processors are configured to perform at least one of the following steps:
determining the center point of the plurality of first key points as the target center point;
determining the center point of a subset of the first key points as the target center point, the subset of first key points being located in the central area of the region.
In some embodiments, the one or more processors are configured to perform the following steps:
determining a plurality of third key points in a third image, the plurality of third key points being key points of the target part, and the third image being the previous frame of the first image;
determining a plurality of fourth key points in the first image, the plurality of fourth key points being key points of the target part, the fourth key points being determined by a key point determination model; and
determining the plurality of first key points, the plurality of first key points being determined based on the plurality of third key points and the plurality of fourth key points.
In some embodiments, the one or more processors are configured to perform the following steps:
for each fourth key point, determining a first target key point among the plurality of third key points, the first target key point having the same pixel value as the fourth key point;
determining the average position of a first position and a second position, the first position being the position of the first target key point, and the second position being the position of the fourth key point; and
acquiring the first key point, the first key point being obtained by rendering the pixel value of the fourth key point to the average position.
In some embodiments, the one or more processors are configured to perform at least one of the following steps:
in response to the target part being occluded, determining second target key points among the plurality of third key points, the second target key points being the key points corresponding to the occluded target part; and acquiring the plurality of first key points, the plurality of first key points being composed of the second target key points and the plurality of fourth key points;
in response to the target part being occluded, using the plurality of third key points as the plurality of first key points.
In some embodiments, the one or more processors are configured to perform the following steps:
in response to the target part being occluded, determining a second adjustment parameter, the second adjustment parameter being a parameter for adjusting the third image; and
determining the first adjustment parameter, the first adjustment parameter being obtained by adjusting the second adjustment parameter by a preset amplitude.
In some embodiments, the one or more processors are configured to perform the following steps:
determining a frame count, the frame count being the number of consecutive frames in which the target part is occluded; and
in response to the frame count reaching a target frame number, stopping image processing of the next frame.
According to another aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the following steps:
determining a plurality of first key points in a first image, the plurality of first key points being key points of a target part;
determining a target region of the first image, the target region being obtained by expanding a region corresponding to the plurality of first key points; and
acquiring a second image, the second image being obtained by adjusting positions of pixels in a first partial image and a second partial image, the adjustment being based on a center point of the region and a first adjustment parameter, the first partial image being the image corresponding to the region, and the second partial image being the image in the target region other than the first partial image.
According to another aspect of the embodiments of the present disclosure, a computer program product is provided, wherein when instructions in the computer program product are executed by a processor of an electronic device, the electronic device is enabled to perform the following steps:
determining a plurality of first key points in a first image, the plurality of first key points being key points of a target part;
determining a target region of the first image, the target region being obtained by expanding a region corresponding to the plurality of first key points; and
acquiring a second image, the second image being obtained by adjusting positions of pixels in a first partial image and a second partial image, the adjustment being based on a center point of the region and a first adjustment parameter, the first partial image being the image corresponding to the region, and the second partial image being the image in the target region other than the first partial image.
In the embodiments of the present disclosure, the target region is obtained by expanding the region in which the target part of the first image is located, so that the change of the target part in the adjusted second image can gradually affect other regions of the first image. This prevents adjustment of the target part from affecting pixels in other regions of the image and causing image distortion, thereby optimizing the image processing effect.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present disclosure more clearly, the drawings required in the description of the embodiments are briefly introduced below. Apparently, the drawings described below are only some embodiments of the present disclosure, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an image processing method according to an exemplary embodiment;
Fig. 2 is a block diagram of a terminal according to an exemplary embodiment;
Fig. 3 is a block diagram of a server according to an exemplary embodiment;
Fig. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment;
Fig. 5 is a flowchart of an image processing method according to an exemplary embodiment;
Fig. 6 is a flowchart of an image processing method according to an exemplary embodiment;
Fig. 7 is a schematic diagram of key points of a facial region according to an exemplary embodiment;
Fig. 8 is a schematic diagram of a target region according to an exemplary embodiment.
Detailed Description
To help those of ordinary skill in the art better understand the technical solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present disclosure are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present disclosure described herein can be implemented in orders other than those illustrated or described herein. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The user information involved in the present disclosure is information authorized by the user or fully authorized by all parties.
The present disclosure provides an image processing method in which an electronic device completes image processing of a first image by adjusting a target part in the first image. Referring to Fig. 1, the implementation environment of the embodiments of the present disclosure includes a user and an electronic device. The user triggers an image processing operation, and the electronic device receives the operation and performs image processing on the first image accordingly.
The first image is a captured static image, or an image in a video stream; this is not specifically limited in the embodiments of the present disclosure. In response to the first image being an image in a video stream, the electronic device determines the first image from the video stream. The video stream is a video stream corresponding to a long video, or a video stream corresponding to a short video.
In response to the first image being a static image, the electronic device needs to capture the static image first, or receives a static image sent by another electronic device. In some embodiments, the electronic device has an image capture function and accordingly captures the static image. The process is: the electronic device displays the captured picture in a viewfinder frame, and in response to receiving a confirmation operation from the user, determines the picture in the viewfinder frame as the first image based on the confirmation operation. In some embodiments, the electronic device receives a static image sent by another electronic device and determines the received first image as the static image. The process by which the other electronic device captures the static image is similar to that of the electronic device, and is not repeated here.
In response to the first image being an image in a video stream, the electronic device needs to capture the video stream first, or receives a video stream sent by another electronic device. In some embodiments, the electronic device has an image capture function. Accordingly, the electronic device receives a start-shooting instruction input by the user and starts capturing the video stream; in response to receiving an end-shooting instruction input by the user, it stops capturing, determines the video stream between the start-shooting instruction and the end-shooting instruction, and determines any frame of that video stream as the first image. In some embodiments, the electronic device receives a video stream sent by another electronic device and determines the first image from the received video stream. The process by which the other electronic device captures the video stream is similar to that of the electronic device, and is not repeated here.
In addition, the electronic device performs image processing on the first image while capturing it and directly outputs the processed second image; or the electronic device first captures and outputs the first image, and performs image processing on the first image to obtain the second image only in response to receiving an image processing instruction. This is not specifically limited in the embodiments of the present disclosure.
In some embodiments, the image processing instruction carries the target part and the first adjustment parameter of the current image processing, and the electronic device determines the target part and the first adjustment parameter based on the instruction. In some embodiments, the electronic device sets the target part and the first adjustment parameter of the image processing in advance, and in response to receiving an image adjustment instruction, performs image processing on the first image directly based on the target part and the first adjustment parameter.
In addition, the target part is a facial feature or the facial contour of the facial region; for example, the target part is the eyes, eyebrows, nose bridge, mouth, or cheeks. Alternatively, the target part is another body part, for example, the waist or legs.
In some embodiments, the electronic device is a terminal; for example, a camera, a mobile phone, a tablet computer, or a wearable device. An image processing application is installed in the terminal, and image processing is performed through the application. The image processing application is a camera application, a beauty camera application, a video shooting application, or the like.
In some embodiments, the electronic device is a server for image processing. Accordingly, the electronic device receives the first image to be processed sent by another electronic device, performs image processing on the first image to obtain the second image, and returns the second image to the other electronic device. The server is a single server, a server cluster composed of multiple servers, a cloud server, or the like.
In an exemplary embodiment, an electronic device is further provided, the electronic device including: one or more processors, and
a volatile or non-volatile memory for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the following steps:
determining a plurality of first key points in a first image, the plurality of first key points being key points of a target part;
determining a target region of the first image, the target region being obtained by expanding a region corresponding to the plurality of first key points; and
acquiring a second image, the second image being obtained by adjusting positions of pixels in a first partial image and a second partial image, the adjustment being based on the center point of the region and a first adjustment parameter, the first partial image being the image corresponding to the region, and the second partial image being the image in the target region other than the first partial image.
In some embodiments, the one or more processors are configured to perform the following steps:
adjusting the shape of the first partial image based on the center point of the region and the first adjustment parameter; and
dispersing and filling the pixels of the second partial image into the second partial image to obtain the second image.
In some embodiments, the one or more processors are configured to perform the following steps:
in response to the first partial image being reduced, determining a first movement direction, the first movement direction being toward the center point;
determining a first movement distance based on the first adjustment parameter; and
determining the second image, the second image being obtained by moving the pixels in the second partial image by the first movement distance in the first movement direction.
In some embodiments, the one or more processors are configured to perform the following steps:
in response to the first partial image being enlarged, determining a second movement direction, the second movement direction being away from the center point;
determining a second movement distance based on the first adjustment parameter; and
determining the second image, the second image being obtained by moving the pixels in the second partial image by the second movement distance in the second movement direction.
In some embodiments, the one or more processors are configured to perform the following steps:
determining a target center point, the target center point being obtained based on the plurality of first key points;
for each first key point, determining a second key point, where the second key point, the first key point and the target center point are on the same straight line, and a first distance is greater than a second distance, the first distance being the distance between the second key point and the target center point, and the second distance being the distance between the first key point and the target center point; and
determining the target region, the target region being determined based on the plurality of second key points.
In some embodiments, the one or more processors are configured to perform at least one of the following steps:
determining the center point of the plurality of first key points as the target center point;
determining the center point of a subset of the first key points as the target center point, the subset of first key points being located in the central area of the region.
In some embodiments, the one or more processors are configured to perform the following steps:
determining a plurality of third key points in a third image, the plurality of third key points being key points of the target part, and the third image being the previous frame of the first image;
determining a plurality of fourth key points in the first image, the plurality of fourth key points being key points of the target part, the fourth key points being determined by a key point determination model; and
determining the plurality of first key points, the plurality of first key points being determined based on the plurality of third key points and the plurality of fourth key points.
In some embodiments, the one or more processors are configured to perform the following steps:
for each fourth key point, determining a first target key point among the plurality of third key points, the first target key point having the same pixel value as the fourth key point;
determining the average position of a first position and a second position, the first position being the position of the first target key point, and the second position being the position of the fourth key point; and
acquiring the first key point, the first key point being obtained by rendering the pixel value of the fourth key point to the average position.
In some embodiments, the one or more processors are configured to perform at least one of the following steps:
in response to the target part being occluded, determining second target key points among the plurality of third key points, the second target key points being the key points corresponding to the occluded target part; and acquiring the plurality of first key points, the plurality of first key points being composed of the second target key points and the plurality of fourth key points;
in response to the target part being occluded, using the plurality of third key points as the plurality of first key points.
In some embodiments, the one or more processors are configured to perform the following steps:
in response to the target part being occluded, determining a second adjustment parameter, the second adjustment parameter being a parameter for adjusting the third image; and
determining the first adjustment parameter, the first adjustment parameter being obtained by adjusting the second adjustment parameter by a preset amplitude.
In some embodiments, the one or more processors are configured to perform the following steps:
determining a frame count, the frame count being the number of consecutive frames in which the target part is occluded; and
in response to the frame count reaching a target frame number, stopping image processing of the next frame.
In the embodiments of the present disclosure, the target region is obtained by expanding the region in which the target part of the first image is located, so that the change of the target part in the adjusted second image can gradually affect other regions of the first image. This prevents adjustment of the target part from affecting pixels in other regions of the image and causing image distortion, thereby optimizing the image processing effect.
In some embodiments, the electronic device is provided as a terminal. Fig. 2 is a block diagram of a terminal according to an exemplary embodiment. In some embodiments, the terminal 200 is: a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer. The terminal 200 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
Generally, the terminal 200 includes: one or more processors 201 and a volatile or non-volatile memory 202.
In some embodiments, the processor 201 includes one or more processing cores, such as a 4-core or 8-core processor. The processor 201 is implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). In some embodiments, the processor 201 includes a main processor and a coprocessor. The main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 201 is integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 201 further includes an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 202 includes one or more computer-readable storage media, which are non-transitory. The memory 202 also includes high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 202 is used to store at least one instruction, which is executed by the processor 201 to implement the image processing method provided by the method embodiments of the present disclosure.
In some embodiments, the terminal 200 optionally further includes: a peripheral device interface 203 and at least one peripheral device. The processor 201, the memory 202, and the peripheral device interface 203 are connected by a bus or signal lines. Each peripheral device is connected to the peripheral device interface 203 by a bus, signal line, or circuit board. Specifically, the peripheral devices include at least one of: a radio frequency circuit 204, a touch display screen 205, a camera assembly 206, an audio circuit 207, a positioning assembly 208, and a power supply 209.
The peripheral device interface 203 can be used to connect at least one I/O (Input/Output) related peripheral device to the processor 201 and the memory 202. In some embodiments, the processor 201, the memory 202, and the peripheral device interface 203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 201, the memory 202, and the peripheral device interface 203 are implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 204 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 204 communicates with communication networks and other communication devices through electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 204 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 204 communicates with other electronic devices through at least one wireless communication protocol, including but not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 204 also includes circuits related to NFC (Near Field Communication), which is not limited in the present disclosure.
The display screen 205 is used to display a UI (User Interface). The UI includes graphics, text, icons, video, and any combination thereof. In response to the display screen 205 being a touch display screen, the display screen 205 also has the ability to collect touch signals on or above its surface. The touch signal is input to the processor 201 as a control signal for processing. In this case, the display screen 205 is also used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there is one display screen 205, disposed on the front panel of the terminal 200; in other embodiments, there are at least two display screens 205, disposed on different surfaces of the terminal 200 or in a folded design; in still other embodiments, the display screen 205 is a flexible display screen disposed on a curved or folded surface of the terminal 200. The display screen 205 can even be set as a non-rectangular irregular shape, that is, a special-shaped screen. The display screen 205 is made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 206 is used to capture images or video. In some embodiments, the camera assembly 206 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal 200, and the rear camera is disposed on its back. In some embodiments, there are at least two rear cameras, each being any of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize the background blur function by fusing the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting by fusing the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 206 also includes a flash. The flash is a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, used for light compensation under different color temperatures.
The audio circuit 207 includes a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment and convert them into electrical signals, which are input to the processor 201 for processing or to the radio frequency circuit 204 for voice communication. For stereo collection or noise reduction, there are multiple microphones, disposed at different parts of the terminal 200. In some embodiments, the microphone is an array microphone or an omnidirectional microphone. The speaker is used to convert electrical signals from the processor 201 or the radio frequency circuit 204 into sound waves. The speaker is a traditional film speaker or a piezoelectric ceramic speaker. In response to the speaker being a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 207 also includes a headphone jack.
The positioning assembly 208 is used to locate the current geographic position of the terminal 200 to implement navigation or LBS (Location Based Service). The positioning assembly 208 is based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 209 is used to supply power to the components of the terminal 200. The power supply 209 is alternating current, direct current, a disposable battery, or a rechargeable battery. In response to the power supply 209 including a rechargeable battery, the rechargeable battery supports wired or wireless charging, and also supports fast-charging technology.
In some embodiments, the terminal 200 also includes one or more sensors 210, including but not limited to: an acceleration sensor 211, a gyroscope sensor 212, a pressure sensor 213, a fingerprint sensor 214, an optical sensor 215, and a proximity sensor 216.
The acceleration sensor 211 detects the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 200. For example, the acceleration sensor 211 is used to detect the components of gravitational acceleration on the three coordinate axes. The processor 201 controls the touch display screen 205 to display the user interface in landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 211. The acceleration sensor 211 is also used for collecting motion data of games or users.
The gyroscope sensor 212 detects the body direction and rotation angle of the terminal 200, and cooperates with the acceleration sensor 211 to collect the user's 3D actions on the terminal 200. Based on the data collected by the gyroscope sensor 212, the processor 201 implements functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 213 is disposed on the side frame of the terminal 200 and/or under the touch display screen 205. In response to the pressure sensor 213 being disposed on the side frame, it detects the user's grip signal on the terminal 200, and the processor 201 performs left-right hand recognition or quick operations according to the grip signal collected by the pressure sensor 213. In response to the pressure sensor 213 being disposed under the touch display screen 205, the processor 201 controls the operable controls on the UI according to the user's pressure operation on the touch display screen 205. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 214 is used to collect the user's fingerprint; the processor 201 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 214, or the fingerprint sensor 214 identifies the user's identity according to the collected fingerprint. If the user's identity is identified as trusted, the processor 201 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, payment, changing settings, and the like. The fingerprint sensor 214 is disposed on the front, back, or side of the terminal 200. In response to the terminal 200 being provided with a physical button or a manufacturer's logo, the fingerprint sensor 214 is integrated with the physical button or logo.
The optical sensor 215 is used to collect ambient light intensity. In one embodiment, the processor 201 controls the display brightness of the touch display screen 205 according to the ambient light intensity collected by the optical sensor 215. Specifically, in response to high ambient light intensity, the display brightness of the touch display screen 205 is increased; in response to low ambient light intensity, it is decreased. In another embodiment, the processor 201 also dynamically adjusts the shooting parameters of the camera assembly 206 according to the ambient light intensity collected by the optical sensor 215.
The proximity sensor 216, also called a distance sensor, is usually disposed on the front panel of the terminal 200 and is used to collect the distance between the user and the front of the terminal 200. In one embodiment, in response to the proximity sensor 216 detecting that the distance between the user and the front of the terminal 200 gradually decreases, the processor 201 controls the touch display screen 205 to switch from the screen-on state to the screen-off state; in response to the proximity sensor 216 detecting that this distance gradually increases, the processor 201 controls the touch display screen 205 to switch from the screen-off state to the screen-on state.
Those skilled in the art can understand that the structure shown in Fig. 2 does not constitute a limitation on the terminal 200, which can include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
In some embodiments, the electronic device is provided as a server. Fig. 3 is a schematic structural diagram of a server according to an exemplary embodiment. The server 300 may vary greatly depending on configuration or performance, and includes one or more processors (Central Processing Units, CPU) 301 and one or more memories 302, where at least one instruction is stored in the memory 302, and the at least one instruction is loaded and executed by the processor 301 to implement the methods provided by the above method embodiments. Of course, the server 300 also has components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and further includes other components for implementing device functions, which are not described here.
In an exemplary embodiment, a computer-readable storage medium is further provided, storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the following steps:
determining a plurality of first key points in a first image, the plurality of first key points being key points of a target part;
determining a target region of the first image, the target region being obtained by expanding a region corresponding to the plurality of first key points; and
acquiring a second image, the second image being obtained by adjusting positions of pixels in a first partial image and a second partial image, the adjustment being based on the center point of the region and a first adjustment parameter, the first partial image being the image corresponding to the region, and the second partial image being the image in the target region other than the first partial image.
The computer-readable storage medium is a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In the embodiments of the present disclosure, the target region is obtained by expanding the region in which the target part of the first image is located, so that the change of the target part in the adjusted second image can gradually affect other regions of the first image. This prevents adjustment of the target part from affecting pixels in other regions of the image and causing image distortion, thereby optimizing the image processing effect.
The present disclosure also provides a computer program product. When instructions in the computer program product are executed by a processor of an electronic device, the electronic device is enabled to perform the following steps:
determining a plurality of first key points in a first image, the plurality of first key points being key points of a target part;
determining a target region of the first image, the target region being obtained by expanding a region corresponding to the plurality of first key points; and
acquiring a second image, the second image being obtained by adjusting positions of pixels in a first partial image and a second partial image, the adjustment being based on the center point of the region and a first adjustment parameter, the first partial image being the image corresponding to the region, and the second partial image being the image in the target region other than the first partial image.
In the embodiments of the present disclosure, the target region is obtained by expanding the region in which the target part of the first image is located, so that the change of the target part in the adjusted second image can gradually affect other regions of the first image. This prevents adjustment of the target part from affecting pixels in other regions of the image and causing image distortion, thereby optimizing the image processing effect.
图4是根据一示例性实施例示出的一种图像处理装置的框图。参见图4,该装置包括:
第一确定模块410,被配置为确定第一图像中的多个第一关键点,该多个第一关键点为目标部位的关键点;
第二确定模块420,被配置为确定该第一图像的目标区域,该目标区域基于对该多个第一关键点对应的区域进行扩展得到;
图像获取模块430,被配置为获取第二图像,该第二图像通过调整第一局部图像和第二局部图像中像素点的位置得到,该调整基于该区域的中心点和第一调整参数,该第一局部图像为该区域对应的图像,该第二局部图像为该目标区域中除该第一局部图像外的图像。
在一些实施例中,该图像获取模块430包括:
形状调整单元,被配置为基于该区域的中心点和该第一调整参数,调整该第一局部图像的形状;
填充单元,被配置为将该第二局部图像中的像素点分散填充到该第二局部图像中,得到该第二图像。
在一些实施例中,该填充单元,被配置为响应于该第一局部图像被缩小,确定第一移动方向,该第一移动方向为靠近该中心点;基于该第一调整参数,确定第一移动距离;确定该第二图像,该第二图像为将该第二局部图像中的像素点,向该第一移动方向移动该第一移动距离得到。
在一些实施例中,该填充单元,被配置为响应于该第一局部图像被放大,确定第二移动方向,该第二移动方向为远离该中心点;基于该第一调整参数,确定第二移动距离;确定该第二图像,该第二图像为将该第二局部图像中的像素点,向该第二移动方向移动该第二移动距离得到。
在一些实施例中,该第二确定模块420包括:
第一确定单元,被配置为确定目标中心点,该目标中心点基于该多个第一关键点得到;
第二确定单元,被配置为对于每个第一关键点,确定第二关键点,该第二关键点、该第一关键点和该目标中心点在同一直线上,且第一距离大于第二距离,该第一距离为该第二关键点与该目标中心点之间的距离,该第二距离为该第一关键点与该目标中心点之间的距离;
第三确定单元,被配置为确定该目标区域,该目标区域基于多个第二关键点确定。
在一些实施例中,该第一确定单元,被配置为将该多个第一关键点的中心点确定为该目标中心点;
该第一确定单元,被配置为将部分第一关键点的中心点确定为该目标中心点,该部分第一关键点位于该区域的中心区域内。
在一些实施例中,该第一确定模块410包括:
第四确定单元,被配置为确定第三图像中的多个第三关键点,该多个第三关键点为该目标部位的关键点,该第三图像为该第一图像的前一帧图像;
第五确定单元,被配置为确定第一图像中的多个第四关键点,该多个第四关键点为该目标部位的关键点,且,该第四关键点通过关键点确定模型确定;
第六确定单元,被配置为确定该多个第一关键点,该多个第一关键点基于该多个第三关键点和该多个第四关键点确定。
在一些实施例中,该第六确定单元,被配置为对于每个第四关键点,确定该多个第三关键点中的第一目标关键点,该第一目标关键点与该第四关键点具有相同像素值;确定第一位置和第二位置的平均位置,该第一位置为该第一目标关键点的位置,该第二位置为该第四关键点的位置;获取该第一关键点,该第一关键点为通过将该第四关键点的像素值渲染到该平均位置得到。
在一些实施例中,该第六确定单元,被配置为响应于该目标部位被遮挡,确定该多个第三关键点中的第二目标关键点,该第二目标关键点为被遮挡的目标部位对应的关键点;获取该多个第一关键点,该多个第一关键点基于该第二目标关键点和该多个第四关键点组成;
该第六确定单元,被配置为响应于该目标部位被遮挡,将该多个第三关键点作为该多个第一关键点。
在一些实施例中,该装置还包括:
第三确定模块,被配置为响应于该目标部位被遮挡,确定第二调整参数,该第二调整参数为用于调整该第三图像的参数;
第四确定模块,被配置为确定第一调整参数,该第一调整参数基于预设幅度调整该第二调整参数得到。
在一些实施例中,该装置还包括:
第五确定模块,被配置为确定帧数,该帧数为该目标部位被遮挡的图像的连续帧数;
图像处理模块,还被配置为响应于该帧数达到目标帧数,停止对下一帧图像进行图像处理。
需要说明的是:上述实施例提供的图像处理装置在进行图像处理时,仅以上述各功能模块的划分进行举例说明,实际应用中,能够根据需要而将上述功能分配由不同的功能模块完成,即将电子设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的图像处理装置与图像处理方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
在本公开实施例中,通过对第一图像中目标部位所在的区域进行扩展,得到目标区域,使得调整得到的第二图像中目标部位的变化可以逐渐对第一图像中的其他区域产生影响,防止了调整目标部位时,对图像中其他区域的像素点造成影响,出现图像畸形,优化了图像处理效果。
相关技术中,响应于用户想要对图像中的某个部位进行美型,电子设备一般通过获取人脸图像中该部位对应的多个关键点;通过调整该部位对应的多个关键点在人脸图像中的位置,实现对该部位的美型处理。例如,如果用户想要放大人脸图像中的眼睛,则通过电子设备将眼睛对应的多个关键点,以眼睛为中心向外移动,从而实现放大眼睛。再如,如果用户想要对人脸图像中的眉毛进行细眉,则通过电子设备将眉毛对应的多个关键点,以眉毛为中心向内移动,从而实现细眉处理。
上述相关技术中,如果对眼睛进行放大处理,则眼睛对应的多个关键点以眼睛为中心向外移动,移动后的关键点会占据眼睛周围的像素点的位置,这样眼睛周围的像素点就会堆积起来。而如果对眉毛进行细眉处理,眉毛对应的多个关键点以眉毛为中心向内移动,这样细眉处理后的眉毛周围就会缺失像素点。由此可见,相关技术中的图像处理方式,会由于图像像素点位置的突变,导致图像畸形,人脸图像的美型效果差。
图5是根据一示例性实施例示出的一种图像处理方法的流程图,参见图5,该图像处理方法包括以下步骤:
S501、确定第一图像中的多个第一关键点,该多个第一关键点为目标部位的关键点。
S502、确定该第一图像的目标区域,该目标区域基于对该多个第一关键点对应的区域进行扩展得到。
S503、获取第二图像,该第二图像通过调整第一局部图像和第二局部图像中像素点的位置得到,该调整基于该区域的中心点和第一调整参数,该第一局部图像为该区域对应的图像,该第二局部图像为该目标区域中除该第一局部图像外的图像。
在一些实施例中,该获取第二图像,包括:
基于该区域的中心点和该第一调整参数,调整该第一局部图像的形状;
将该第二局部图像中的像素点分散填充到该第二局部图像中,得到该第二图像。
在一些实施例中,该将该第二局部图像中的像素点分散填充到该第二局部图像中,得到该第二图像,包括:
响应于该第一局部图像被缩小,确定第一移动方向,该第一移动方向为靠近该中心点;
基于该第一调整参数,确定第一移动距离;
确定该第二图像,该第二图像为将该第二局部图像中的像素点,向该第一移动方向移动该第一移动距离得到。
在一些实施例中,该将该第二局部图像中的像素点分散填充到该第二局部图像中,得到该第二图像,包括:
响应于该第一局部图像被放大,确定第二移动方向,该第二移动方向为远离该中心点;
基于该第一调整参数,确定第二移动距离;
确定该第二图像,该第二图像为将该第二局部图像中的像素点,向该第二移动方向移动该第二移动距离得到。
在一些实施例中,该确定该第一图像的目标区域,包括:
确定目标中心点,该目标中心点基于该多个第一关键点得到;
对于每个第一关键点,确定第二关键点,该第二关键点、该第一关键点和该目标中心点在同一直线上,且第一距离大于第二距离,该第一距离为该第二关键点与该目标中心点之间的距离,该第二距离为该第一关键点与该目标中心点之间的距离;
确定该目标区域,该目标区域基于多个第二关键点确定。
在一些实施例中,该确定目标中心点,包括下述任一项:
将该多个第一关键点的中心点确定为该目标中心点;
将部分第一关键点的中心点确定为该目标中心点,该部分第一关键点位于该区域的中心区域内。
在一些实施例中,该确定第一图像中的多个第一关键点,包括:
确定第三图像中的多个第三关键点,该多个第三关键点为该目标部位的关键点,该第三图像为该第一图像的前一帧图像;
确定第一图像中的多个第四关键点,该多个第四关键点为该目标部位的关键点,且,该第四关键点通过关键点确定模型确定;
确定该多个第一关键点,该多个第一关键点基于该多个第三关键点和该多个第四关键点确定。
在一些实施例中,该确定该多个第一关键点,包括:
对于每个第四关键点,确定该多个第三关键点中的第一目标关键点,该第一目标关键点与该第四关键点具有相同像素值;
确定第一位置和第二位置的平均位置,该第一位置为该第一目标关键点的位置,该第二位置为该第四关键点的位置;
获取该第一关键点,该第一关键点为通过将该第四关键点的像素值渲染到该平均位置得到。
在一些实施例中,该确定该多个第一关键点,包括下述任一项:
响应于该目标部位被遮挡,确定该多个第三关键点中的第二目标关键点,该第二目标关键点为被遮挡的目标部位对应的关键点;获取该多个第一关键点,该多个第一关键点基于该第二目标关键点和该多个第四关键点组成;
响应于该目标部位被遮挡,将该多个第三关键点作为该多个第一关键点。
在一些实施例中,该方法还包括:
响应于该目标部位被遮挡,确定第二调整参数,该第二调整参数为用于调整该第三图像的参数;
确定第一调整参数,该第一调整参数基于预设幅度调整该第二调整参数得到。
在一些实施例中,该方法还包括:
确定帧数,该帧数为该目标部位被遮挡的图像的连续帧数;
响应于该帧数达到目标帧数,停止对下一帧图像进行图像处理。
在本公开实施例中,通过对第一图像中目标部位所在的区域进行扩展,得到目标区域,使得调整得到的第二图像中目标部位的变化可以逐渐对第一图像中的其他区域产生影响,防止了调整目标部位时,对图像中其他区域的像素点造成影响,出现图像畸形,优化了图像处理效果。
图6是根据一示例性实施例示出的一种图像处理方法的流程图,参见图6,该图像处理方法包括以下步骤:
S601、电子设备确定第一图像中的多个第一关键点。
其中,该多个第一关键点为目标部位的关键点。目标部位为第一图像中面部区域中的五官或面部轮廓,例如,该目标部位为眼睛、眉毛、鼻梁、嘴巴或脸颊等。或者,该目标部位为其他身体部位,例如,腰部、腿部等。
其中,该目标部位为用户选择的目标部位,或者,该目标部位为事先确定的目标部位。在一些实施例中,电子设备接收用户的选择操作,基于该选择操作确定用户选择的目标部位。在一些实施例中,电子设备事先设置进行形状调整的目标部位,在本步骤中,电子设备直接调用事先设置的目标部位。在本公开实施例中,对此不作具体限定。需要说明的一点是,该目标部位为第一图像中的一个部位。或者,该目标部位为第一图像中的多个部位。在本公开实施例中,对此也不作具体限定。
在本步骤中,电子设备确定目标部位的多个第一关键点。例如,参见图7,图7是根据一示例性实施例示出的一种面部区域的关键点的示意图。如果目标部位为眉毛,则该多个第一关键点为图7中标号为19-28的10个关键点。
另外,电子设备仅基于当前的第一图像确定该目标部位的多个第一关键点。或者,电子设备通过第一图像的前一帧图像确定目标部位对应的多个第一关键点。在一些实施例中,电子设备可以直接从第一图像中确定目标部位对应的多个第一关键点。在本实现方式中,电子设备直接从第一图像中确定目标部位对应的多个第一关键点,从而简化了确定多个第一关键点的处理流程,提高了图像处理的效率。
在一些实施例中,电子设备基于多个第三关键点,确定该目标部位对应的多个第一关键点,其中,该多个第三关键点为第一图像的前一帧图像中目标部位的关键点。该过程通过以下步骤(A1)-(A3)实现,包括:
(A1)电子设备确定第三图像中的多个第三关键点。
其中,该多个第三关键点为该目标部位的关键点,该第三图像为该第一图像的前一帧图像。
在一些实施例中,电子设备中存储有第三图像的多个关键点,在本步骤中,电子设备直接根据目标部位,从多个关键点中确定多个第三关键点。在本实现方式中,电子设备中事先对获取到的图像进行处理,存储每个图像和关键点的对应关系,从而能够直接根据目标部位确定多个第三关键点,简化了获取多个第三关键点的流程,提高了处理效率。
在一些实施例中,电子设备通过第一确定模型确定该多个第三关键点。其中,电子设备将该第三图像输入至第一确定模型中,得到该第三图像的所有关键点;从该所有关键点中确定多个第三关键点。或者,电子设备将该第三图像输入至第二确定模型中,该第二确定模型输出该多个第三关键点。
其中,该第一确定模型和第二确定模型为任一神经网络模型。相应的,在本步骤之前,电子设备根据需要进行模型训练,通过调整模型参数,训练得到该第一确定模型和第二确定模型。
另外,该关键点的数量根据需要进行设置,在本公开实施例中,对该关键点的数量不作具体限定,例如,该关键点的数量为100、101或105等。参见图7,图7示出了面部区域的101个关键点。
在本实施例中,通过模型确定多个第三关键点,从而提高了确定多个第三关键点的准确率。
(A2)电子设备确定第一图像中的多个第四关键点。
其中,该多个第四关键点为该目标部位的关键点,且,该第四关键点通过关键点确定模型确定。
本步骤与步骤(A1)中,电子设备确定多个第三关键点的过程相似,在此不再赘述。
(A3)电子设备确定该多个第一关键点。
其中,该多个第一关键点基于该多个第三关键点和该多个第四关键点确定。
在本步骤中,电子设备将多个第四关键点的像素值渲染在平均位置处,得到多个第一关键点,该过程通过以下步骤(a1)-(a3)实现,包括:
(a1)对于每个第四关键点,电子设备确定该多个第三关键点中的第一目标关键点。
其中,该第一目标关键点与该第四关键点具有相同像素值。
在一些实施例中,电子设备先选择任一第四关键点,然后从多个第三关键点中确定与该第四关键点具有相同像素值的第一目标关键点。
(a2)电子设备确定第一位置和第二位置的平均位置。
其中,该第一位置为该第一目标关键点的位置,该第二位置为该第四关键点的位置。在一些实施例中,电子设备在第三图像和第一图像中建立相同的坐标系,在同一坐标系中,分别确定具有相同像素值的第三关键点和第四关键点的坐标位置,得到第一位置和第二位置,对该第一位置和第二位置进行平均,得到平均位置。
在一些实施例中,电子设备分别在第三图像和第一图像中建立不同的坐标系,分别确定第三关键点和第四关键点在各自坐标系下的坐标位置,然后通过坐标系间的映射关系,得到同一坐标系下第三关键点和第四关键点的坐标位置。
需要说明的一点是,电子设备先确定多个第三关键点的第一位置,再确定多个第四关键点的第二位置。或者,电子设备先确定多个第四关键点的第二位置,再确定多个第三关键点的第一位置。或者,电子设备同时确定多个第三关键点的第一位置和多个第四关键点的第二位置。在本公开实施例中,对电子设备确定第一位置和第二位置的顺序不作具体限定。
(a3)电子设备获取第一关键点。
其中,该第一关键点为通过将该第四关键点的像素值渲染到该平均位置得到。
需要说明的一点是,电子设备还能根据第三关键点和第四关键点的对应关系,将第三图像和第一图像中其他像素点的位置也进行平均,从而对第一图像中的所有像素点进行调整。
需要说明的另一点是,电子设备还将第三关键点的像素值和第四关键点的像素值也进行加权求和,将加权求和的像素值作为多个第一关键点的像素值,进行渲染。
在本实施例中,通过将像素值相同的多个第三关键点和多个第四关键点的位置进行平均,得到多个第一关键点,从而使第一图像中的多个第一关键点的位置可以平滑变化,防止采集到的第一图像中的第一关键点的位置发生突变,保证了在采集动态动画时,第一图像中的画面也可以平滑和稳定。
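上述对相邻两帧中对应关键点位置取平均的平滑过程,可以用如下示意性的Python草图表示。其中以关键点编号表示专利中"具有相同像素值"的对应关系,函数名与数据结构均为示例性假设,并非本公开限定的具体实现:

```python
def smooth_keypoints(prev_points, curr_points):
    """对第三关键点(前一帧)与第四关键点(当前帧)中相互对应的
    关键点位置取平均,得到第一关键点的位置。
    prev_points / curr_points: {关键点编号: (x, y)}。
    """
    smoothed = {}
    for idx, (cx, cy) in curr_points.items():
        if idx in prev_points:
            px, py = prev_points[idx]
            # 第一位置与第二位置的平均位置
            smoothed[idx] = ((px + cx) / 2.0, (py + cy) / 2.0)
        else:
            # 前一帧中无对应关键点时,直接沿用当前帧的位置
            smoothed[idx] = (cx, cy)
    return smoothed
```

这种逐帧取平均的方式使关键点位置的变化被限制在两帧位置之间,从而抑制了位置突变。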
另外,在采集图像的过程中,可能会出现图像被遮挡的情况。如果第一图像中没有出现目标部位被遮挡的情况,则电子设备直接将多个第四关键点作为多个第一关键点。如果第一图像中的目标部位被遮挡,电子设备获取到的第四关键点会缺失关键点,则电子设备通过前一帧图像中目标部位的关键点确定多个第一关键点。相应的,电子设备在采集到第一图像后,对第一图像中的关键点进行识别,响应于电子设备检测到第一图像中目标部位被遮挡,电子设备获取之前采集的第三图像,该第三图像为具有完整关键点的图像。
在一些实施例中,电子设备从多个第三关键点中选择缺失的第四关键点,相应的,该过程为:响应于目标部位被遮挡,电子设备确定该多个第三关键点中的第二目标关键点,该第二目标关键点为被遮挡的目标部位对应的关键点;获取该多个第一关键点,该多个第一关键点基于该第二目标关键点和该多个第四关键点组成。
在一些实施例中,电子设备将多个第三关键点直接作为第一关键点,该过程为:响应于该目标部位被遮挡,电子设备将该多个第三关键点作为该多个第一关键点。
S602、电子设备确定该第一图像的目标区域,该目标区域基于对该多个第一关键点对应的区域进行扩展得到。
在本步骤中,电子设备在该第一图像中确定该多个第一关键点对应的区域,再对该区域进行扩展,得到该目标区域。该过程通过以下步骤(1)-(2)实现,包括:
(1)电子设备确定该多个第一关键点围成的区域。
其中,电子设备依次连接相邻的第一关键点,将多个第一关键点包围的图像区域作为该区域。
(2)电子设备基于该区域和该多个第一关键点,对该区域进行扩展,得到该目标区域。
该过程通过以下步骤(2-1)-(2-3)实现,包括:
(2-1)电子设备确定目标中心点。
其中,该目标中心点基于该多个第一关键点得到。
在一些实施例中,电子设备将该多个第一关键点的中心点确定为该目标中心点。例如,请继续参见图7,以目标部位为眉毛为例进行说明,眉毛对应的多个第一关键点为19-28这10个关键点,电子设备确定这10个关键点对应的中心点。
在一些实施例中,电子设备先从多个第一关键点中选出中心区域的部分第一关键点,再确定该部分第一关键点的中心点。相应的,电子设备将部分第一关键点的中心点确定为该目标中心点,该部分第一关键点位于该区域的中心区域内。例如,请继续参见图7,以目标部位为眉毛为例进行说明,眉毛对应的多个第一关键点为19-28这10个关键点,电子设备从关键点19-28中,先选择出中间位置的四个关键点,例如关键点21、22、26和27,确定这四个关键点的平均点作为中心点。
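上述两种确定目标中心点的方式(对全部第一关键点取平均,或仅对中心区域的部分第一关键点取平均)可以用如下示意性的Python草图表示,函数接口为示例性假设:

```python
def target_center(points, subset=None):
    """由多个第一关键点求目标中心点。
    points: {关键点编号: (x, y)};subset 为参与求平均的编号列表,
    传入位于中心区域的部分关键点编号即为第二种取法,None 表示全部。
    """
    ids = list(points) if subset is None else subset
    n = len(ids)
    return (sum(points[i][0] for i in ids) / n,
            sum(points[i][1] for i in ids) / n)
```

例如以关键点21、22、26、27为subset调用,即得到图7举例中由四个中间关键点平均得到的目标中心点。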
(2-2)对于每个第一关键点,电子设备确定第二关键点,该第二关键点、该第一关键点和该目标中心点在同一直线上,且第一距离大于第二距离,该第一距离为该第二关键点与该目标中心点之间的距离,该第二距离为该第一关键点与该目标中心点之间的距离。
在本步骤中,电子设备以该目标中心点为端点,分别向每个第一关键点做射线。例如,继续以上述步骤(2-1)中的举例为例进行说明,以第一关键点21、22、26和27的平均点作为目标中心点,分别向第一关键点19-28的方向做射线,得到10条射线。然后,电子设备从每条射线上确定第二关键点,得到该多个第二关键点,该第二关键点和该目标中心点的距离大于该第二关键点所在的射线上的第一关键点与该目标中心点的距离。
在本步骤中,电子设备以目标中心点为端点在得到的射线上截取线段,该线段的另一个端点即为该第二关键点。其中,该线段的长度大于该射线上目标中心点到第一关键点的长度。其中,该线段的长度为根据经验值确定的预设长度,在本公开实施例中,对该线段的长度不作具体限定。
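在射线上截取第二关键点,等价于将第一关键点沿"目标中心点到第一关键点"的方向按大于1的比例外推。以下Python草图给出一种在上述假设下的示例实现,其中scale为示意性经验值(对应根据经验值确定的预设长度),并非本公开限定的取值:

```python
def extend_keypoint(center, point, scale=1.5):
    """在以目标中心点 center 为端点、经过第一关键点 point 的射线上
    取第二关键点:scale > 1 保证第一距离(第二关键点到中心点)
    大于第二距离(第一关键点到中心点)。
    """
    cx, cy = center
    px, py = point
    return (cx + (px - cx) * scale, cy + (py - cy) * scale)
```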
需要说明的一点是,该确定多个第二关键点的过程与步骤S601的步骤(A1)中,电子设备确定多个第三关键点的过程相似,在此不再赘述。
(2-3)电子设备确定该目标区域。
其中,该目标区域基于多个第二关键点确定。
在一些实施例中,电子设备依次连接多个第二关键点,将多个第二关键点包围的图像区域作为该目标区域。在一些实施例中,电子设备通过网格(mesh)算法,确定该多个第二关键点对应的网格区域,将该网格区域作为该目标区域。该过程为:电子设备将该多个第二关键点作为端点,对该多个第一关键点对应的目标部位进行网格扩展,得到网格区域。电子设备将得到的网格区域和第一关键点对应的区域组成目标区域。参见图8,电子设备将多个第二关键点作为端点,分别与多个第一关键点连接实现网格扩展,得到三角面片形式的网格区域,将多个三角面片和第一关键点对应的区域组成该目标区域。
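将多个第一关键点(内圈)与对应的多个第二关键点(外圈)连接成三角面片的网格扩展,可以用如下示意性的Python草图表示。这里采用的"每相邻一对内外关键点生成两个三角形"只是一种示例性网格化方式,并非本公开限定的mesh算法:

```python
def ring_triangles(n):
    """在 n 个第一关键点(内圈,索引 0..n-1)与对应的 n 个第二关键点
    (外圈,索引 n..2n-1)之间做网格扩展,返回三角面片的顶点索引。
    """
    tris = []
    for i in range(n):
        j = (i + 1) % n
        tris.append((i, j, n + i))      # 内-内-外
        tris.append((j, n + j, n + i))  # 内-外-外
    return tris
```

例如10个第一关键点与10个第二关键点之间可得到20个三角面片,这些三角面片与第一关键点围成的区域共同组成目标区域。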
S603、电子设备基于该区域的中心点和该第一调整参数,调整该第一局部图像的形状。
其中,该第一局部图像为该区域对应的图像。该第一调整参数为用于调整第一局部图像的参数。其中,第一调整参数为系统默认的调整参数,或者,第一调整参数为基于用户设置生成的参数,或者,第一调整参数为基于第三图像的第二调整参数确定的参数。该第一调整参数至少包括调整方式、调整力度,还包括色彩参数、亮度参数等。电子设备基于第一调整参数中的调整方式,对目标部位进行形状调整,该形状调整为对该目标部位进行放大调整、缩小调整等。其中,该第一调整参数为电子设备接收到的用户输入的调整参数。或者,该第一调整参数为电子设备中,基于不同的目标部位设置的调整参数。例如,如果目标部位为眼睛,该调整参数为放大眼睛;如果该目标部位为眉毛,该调整参数为变细眉毛。相应的,对眼睛的调整为放大调整,对眉毛的调整为缩小调整。
电子设备对第一局部图像的形状进行调整,从而实现放大或缩小该目标部位。在一些实施例中,电子设备根据网格算法,确定第一局部图像中各像素点的关系,对第一局部图像中的像素点进行调整。
在一些实施例中,电子设备通过液化算法对目标部位进行调整,该过程为:电子设备确定目标部位对应的中心位置,以该中心位置为圆心做圆,使得该目标部位对应的第一局部图像在该圆内,通过更改该圆的大小实现对该目标部位对应的第一局部图像的调整。
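这种以圆心为中心、在圆内进行缩放的液化式变形,可以用如下简化的Python草图表示。下面的径向位移公式(位移在圆边界处衰减为0)仅为一种常见的示意性实现,并非本公开限定的具体液化算法:

```python
import math

def liquify(px, py, cx, cy, radius, strength):
    """以 (cx, cy) 为圆心的圆内像素沿径向移动:
    strength > 0 时远离圆心(放大),strength < 0 时靠近圆心(缩小);
    位移随到圆心的距离增大而衰减,圆外像素保持不动。
    """
    dx, dy = px - cx, py - cy
    d = math.hypot(dx, dy)
    if d == 0.0 or d >= radius:
        return (px, py)
    falloff = 1.0 - d / radius  # 越靠近圆边界,位移越小,保证圆内外平滑衔接
    return (px + dx * strength * falloff, py + dy * strength * falloff)
```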
其中,电子设备确定进行形状调整的第一调整参数,基于该第一调整参数对第一局部图像进行形状调整。相应的,电子设备基于该第一调整参数,对该第一局部图像进行形状调整的过程通过以下步骤(1)-(2)实现,包括:
(1)电子设备确定第一调整参数。
在一些实施例中,电子设备将当前的调整参数确定为第一调整参数。在一些实施例中,在目标部位不被遮挡的情况下,电子设备将当前调整参数确定为第一调整参数。在目标部位被遮挡的情况下,电子设备基于前一帧图像的第二调整参数,确定第一调整参数。相应的,该确定第一调整参数的过程为:响应于目标部位被遮挡,确定第二调整参数,该第二调整参数为用于调整该第三图像的参数;确定第一调整参数,所述第一调整参数基于预设幅度调整所述第二调整参数得到。
其中,第二调整参数为电子设备在对第三图像进行调整时使用的调整参数。该第二调整参数为系统默认的调整参数,或者,为基于用户设置生成的调整参数。
在本实现方式中,通过逐渐改变对图像的第一调整参数,使得对图像调整的过程可以更平滑,防止对图像进行调整时产生突变。
例如,在采集图像的过程中,可能会出现图像被遮挡的情况,在这种情况下,响应于电子设备对目标部位的调整,电子设备进行容错处理。通过选取目标数量的图像作为顺延时长,从而在缺失关键帧的情况下,能延续前一帧,对当前帧的目标部位进行调整。且逐渐减弱调整幅度,在第一关键点不再缺失的情况下,还将第一调整参数再恢复为原来的调整参数。
相应的,电子设备确定帧数,该帧数为该目标部位被遮挡的图像的连续帧数;响应于所述帧数达到目标帧数,停止对下一帧图像进行图像处理。
而在电子设备记录帧数的过程中,如果在帧数达到目标帧数之前,检测到图像中目标部位不再被遮挡,还能逐渐将第一调整参数再恢复到第二调整参数。例如,响应于当前第一图像为缺失第四关键点的图像,电子设备将连续缺失关键点的图像的帧数加一。当前图像的帧数为n,响应于第一图像缺失多个第四关键点,电子设备更新当前帧数为n+1。响应于当前第一图像不是缺失第四关键点的图像,电子设备将关键点连续缺失的帧数清零。电子设备再基于预设幅度增大当前的调整参数,直到调整参数恢复为第二调整参数。
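上述遮挡容错逻辑(逐帧减弱调整参数、达到目标帧数后停止处理、遮挡消失后逐帧恢复)可以用如下示意性的Python草图表示,类名、接口与数值均为示例性假设:

```python
class AdjustController:
    """遮挡容错的调整参数控制草图。"""

    def __init__(self, param, step, max_occluded_frames):
        self.base = param                       # 第二调整参数(原调整力度)
        self.param = param                      # 当前帧使用的第一调整参数
        self.step = step                        # 预设幅度
        self.max_frames = max_occluded_frames   # 目标帧数
        self.occluded = 0                       # 连续遮挡帧数

    def update(self, is_occluded):
        if is_occluded:
            self.occluded += 1
            if self.occluded >= self.max_frames:
                return None                     # 停止对下一帧图像进行图像处理
            self.param = max(0, self.param - self.step)  # 逐渐减弱调整幅度
        else:
            self.occluded = 0                   # 帧数清零
            self.param = min(self.base, self.param + self.step)  # 逐渐恢复
        return self.param
```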
需要说明的一点是,电子设备先对第一图像中目标部位进行调整,再确定连续缺失第四关键点的帧数。或者,电子设备先确定连续缺失第四关键点的帧数,再对第一图像中目标部位进行调整。或者,电子设备同时对第一图像中目标部位进行调整和确定连续缺失第四关键点的帧数。在本公开实施例中,对此不作具体限定。其中,该目标帧数根据需要进行设置,在本公开实施例中,对该目标帧数不作具体限定。例如,该目标帧数为50、60、或80等。
(2)电子设备基于中心点和该第一调整参数,对该第一局部图像内的每个像素点进行调整。
在本步骤中,电子设备保持中心点的位置不发生改变,基于第一调整参数,对第一局部图像中的像素点的位置进行调整,实现对该第一局部图像进行形状调整。
在本实现方式中,电子设备基于第一调整参数和中心点对第一局部图像中的每个像素点的位置进行调整,从而实现对目标部位进行调整,防止了对目标部位以外的区域进行调整,提高了调整的精度。
S604、电子设备将该第二局部图像中的像素点分散填充到该第二局部图像中,得到第二图像。
其中,该第二局部图像为该目标区域中除该第一局部图像外的图像。电子设备对目标部位进行调整,使得第一局部图像中的像素点的位置产生变化,造成目标区域中出现畸形区域。在本步骤中,电子设备将目标区域中的像素点的位置也跟随第一局部图像进行调整,实现将目标部位的调整平滑过渡到第一局部图像以外的其他区域,防止图像出现突变。
其中,对第一局部图像进行调整的过程有两种,一种是对第一局部图像进行放大调整,另一种是对第一局部图像进行缩小调整。相应的,在本步骤中,电子设备将该第一图像中该目标区域内的第二局部图像的像素点分散填充到第二局部图像中,得到第二图像的方式也包括两种。
第一种实现方式,在进行缩小调整的情况下,通过以下步骤(A1)-(A3)实现对像素点的分散填充,包括:
(A1)响应于第一局部图像被缩小,电子设备确定第一移动方向,该第一移动方向为靠近该中心点。
在本步骤中,对第一局部图像进行缩小调整,则第二局部图像中像素点的移动方向和第一局部图像缩小的过程中移动方向一致,从而确定该第二局部图像中像素点的移动方向为靠近该中心点的方向。
(A2)电子设备基于该第一调整参数,确定第一移动距离。
在本步骤中,电子设备基于第一调整参数,确定对第一局部图像进行缩小调整时可能会产生的畸变区域,再根据该畸变区域确定第二局部图像中的像素点在填充该畸变区域的情况下所需要移动的第一移动距离。
(A3)电子设备确定第二图像,该第二图像为将该第二局部图像中的像素点,向该第一移动方向移动该第一移动距离得到。
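将第二局部图像中的像素点沿靠近(缩小时)或远离(放大时)中心点的方向移动给定距离,可以用如下示意性的Python草图表示,函数接口为示例性假设,并非本公开限定的实现:

```python
import math

def move_pixels(pixels, center, distance, shrink):
    """将第二局部图像中的像素点沿径向移动给定的移动距离:
    shrink=True 时向靠近中心点的第一移动方向移动(缩小调整),
    shrink=False 时向远离中心点的第二移动方向移动(放大调整)。
    """
    cx, cy = center
    sign = -1.0 if shrink else 1.0
    moved = []
    for x, y in pixels:
        dx, dy = x - cx, y - cy
        d = math.hypot(dx, dy)
        if d == 0.0:
            moved.append((x, y))  # 与中心点重合的像素不移动
            continue
        moved.append((x + sign * distance * dx / d,
                      y + sign * distance * dy / d))
    return moved
```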
需要说明的一点是,电子设备对第一局部图像和对第二局部图像进行调整的过程为同步进行的,或者先对第一局部图像进行调整,再对第二局部图像进行调整。其中,电子设备同步对第一局部图像和第二局部图像进行调整的情况下,直接基于第一调整参数,通过事先确定的移动距离算法,确定第一移动距离。
在一些实施例中,电子设备将第二局部图像中的像素点和第一局部图像中的像素点进行移动,实现对第一图像的图像调整。在一些实施例中,电子设备通过多个第一关键点中每个关键点对应的三角面片,基于该第一关键点的移动距离和移动方向,对第二局部图像对应的三角面片中的像素点的位置进行调整。
需要说明的一点是,对于任一三角面片中的像素点,每个像素点的移动距离和移动方向相同或者不同。如果每个像素点的移动距离和移动方向相同,电子设备直接基于像素点的数量对移动距离和移动方向进行平均;如果每个像素点的移动距离和移动方向不同,电子设备确定不同像素点的移动距离和移动方向。
第二种实现方式,在进行放大调整的情况下,通过以下步骤(B1)-(B3)实现对像素点的分散填充,包括:
(B1)响应于该第一局部图像被放大,电子设备确定第二移动方向,该第二移动方向为远离该中心点。
在本步骤中,对第一局部图像进行放大调整,则第二局部图像中像素点的移动方向和第一局部图像放大的过程中移动方向一致,从而确定该第二局部图像中像素点的移动方向为远离该中心点的方向。
(B2)电子设备基于该第一调整参数,确定第二移动距离。
本步骤与第一种实现方式中的步骤(A2)相似,在此不再赘述。
(B3)电子设备确定所述第二图像,所述第二图像为将该第二局部图像中的像素点,向该第二移动方向移动该第二移动距离得到。
本步骤与第一种实现方式中的步骤(A3)相似,在此不再赘述。
上述所有可选技术方案,能够采用任意结合形成本公开的可选实施例,在此不再一一赘述。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本公开的其它实施方案。本公开旨在涵盖本公开的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本公开的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本公开的真正范围和精神由下面的权利要求指出。
应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且能够在不脱离其范围进行各种修改和改变。本公开的范围仅由所附的权利要求来限制。

Claims (34)

  1. 一种图像处理方法,其特征在于,所述方法包括:
    确定第一图像中的多个第一关键点,所述多个第一关键点为目标部位的关键点;
    确定所述第一图像的目标区域,所述目标区域基于对所述多个第一关键点对应的区域进行扩展得到;
    获取第二图像,所述第二图像通过调整第一局部图像和第二局部图像中像素点的位置得到,所述调整基于所述区域的中心点和第一调整参数,所述第一局部图像为所述区域对应的图像,所述第二局部图像为所述目标区域中除所述第一局部图像外的图像。
  2. 根据权利要求1所述的方法,其特征在于,所述获取第二图像,包括:
    基于所述区域的中心点和所述第一调整参数,调整所述第一局部图像的形状;
    将所述第二局部图像中的像素点分散填充到所述第二局部图像中,得到所述第二图像。
  3. 根据权利要求2所述的方法,其特征在于,所述将所述第二局部图像中的像素点分散填充到所述第二局部图像中,得到所述第二图像,包括:
    响应于所述第一局部图像被缩小,确定第一移动方向,所述第一移动方向为靠近所述中心点;
    基于所述第一调整参数,确定第一移动距离;
    确定所述第二图像,所述第二图像为将所述第二局部图像中的像素点,向所述第一移动方向移动所述第一移动距离得到。
  4. 根据权利要求2所述的方法,其特征在于,所述将所述第二局部图像中的像素点分散填充到所述第二局部图像中,得到所述第二图像,包括:
    响应于所述第一局部图像被放大,确定第二移动方向,所述第二移动方向为远离所述中心点;
    基于所述第一调整参数,确定第二移动距离;
    确定所述第二图像,所述第二图像为将所述第二局部图像中的像素点,向所述第二移动方向移动所述第二移动距离得到。
  5. 根据权利要求1所述的方法,其特征在于,所述确定所述第一图像的目标区域,包括:
    确定目标中心点,所述目标中心点基于所述多个第一关键点得到;
    对于每个第一关键点,确定第二关键点,所述第二关键点、所述第一关键点和所述目标中心点在同一直线上,且第一距离大于第二距离,所述第一距离为所述第二关键点与所述目标中心点之间的距离,所述第二距离为所述第一关键点与所述目标中心点之间的距离;
    确定所述目标区域,所述目标区域基于多个第二关键点确定。
  6. 根据权利要求5所述的方法,其特征在于,所述确定目标中心点,包括下述任一项:
    将所述多个第一关键点的中心点确定为所述目标中心点;
    将部分第一关键点的中心点确定为所述目标中心点,所述部分第一关键点位于所述区域的中心区域内。
  7. 根据权利要求1所述的方法,其特征在于,所述确定第一图像中的多个第一关键点,包括:
    确定第三图像中的多个第三关键点,所述多个第三关键点为所述目标部位的关键点,所述第三图像为所述第一图像的前一帧图像;
    确定第一图像中的多个第四关键点,所述多个第四关键点为所述目标部位的关键点,且,所述第四关键点通过关键点确定模型确定;
    确定所述多个第一关键点,所述多个第一关键点基于所述多个第三关键点和所述多个第四关键点确定。
  8. 根据权利要求7所述的方法,其特征在于,所述确定所述多个第一关键点,包括:
    对于每个第四关键点,确定所述多个第三关键点中的第一目标关键点,所述第一目标关键点与所述第四关键点具有相同像素值;
    确定第一位置和第二位置的平均位置,所述第一位置为所述第一目标关键点的位置,所述第二位置为所述第四关键点的位置;
    获取所述第一关键点,所述第一关键点为通过将所述第四关键点的像素值渲染到所述平均位置得到。
  9. 根据权利要求7所述的方法,其特征在于,所述确定所述多个第一关键点,包括下述任一项:
    响应于所述目标部位被遮挡,确定所述多个第三关键点中的第二目标关键点,所述第二目标关键点为被遮挡的目标部位对应的关键点;获取所述多个第一关键点,所述多个第一关键点基于所述第二目标关键点和所述多个第四关键点组成;
    响应于所述目标部位被遮挡,将所述多个第三关键点作为所述多个第一关键点。
  10. 根据权利要求7所述的方法,其特征在于,所述方法还包括:
    响应于所述目标部位被遮挡,确定第二调整参数,所述第二调整参数为用于调整所述第三图像的参数;
    确定第一调整参数,所述第一调整参数基于预设幅度调整所述第二调整参数得到。
  11. 根据权利要求10所述的方法,其特征在于,所述方法还包括:
    确定帧数,所述帧数为所述目标部位被遮挡的图像的连续帧数;
    响应于所述帧数达到目标帧数,停止对下一帧图像进行图像处理。
  12. 一种图像处理装置,其特征在于,所述装置包括:
    第一确定模块,被配置为确定第一图像中的多个第一关键点,所述多个第一关键点为目标部位的关键点;
    第二确定模块,被配置为确定所述第一图像的目标区域,所述目标区域基于对所述多个第一关键点对应的区域进行扩展得到;
    图像获取模块,被配置为获取第二图像,所述第二图像通过调整第一局部图像和第二局部图像中像素点的位置得到,所述调整基于所述区域的中心点和第一调整参数,所述第一局部图像为所述区域对应的图像,所述第二局部图像为所述目标区域中除所述第一局部图像外的图像。
  13. 根据权利要求12所述的装置,其特征在于,所述图像获取模块包括:
    形状调整单元,被配置为基于所述区域的中心点和所述第一调整参数,调整所述第一局部图像的形状;
    填充单元,被配置为将所述第二局部图像中的像素点分散填充到所述第二局部图像中,得到所述第二图像。
  14. 根据权利要求13所述的装置,其特征在于,所述填充单元,被配置为响应于所述第一局部图像被缩小,确定第一移动方向,所述第一移动方向为靠近所述中心点;基于所述第一调整参数,确定第一移动距离;确定所述第二图像,所述第二图像为将所述第二局部图像中的像素点,向所述第一移动方向移动所述第一移动距离得到。
  15. 根据权利要求13所述的装置,其特征在于,所述填充单元,被配置为响应于所述第一局部图像被放大,确定第二移动方向,所述第二移动方向为远离所述中心点;基于所述第一调整参数,确定第二移动距离;确定所述第二图像,所述第二图像为将所述第二局部图像中的像素点,向所述第二移动方向移动所述第二移动距离得到。
  16. 根据权利要求12所述的装置,其特征在于,所述第二确定模块包括:
    第一确定单元,被配置为确定目标中心点,所述目标中心点基于所述多个第一关键点得到;
    第二确定单元,被配置为对于每个第一关键点,确定第二关键点,所述第二关键点、所述第一关键点和所述目标中心点在同一直线上,且第一距离大于第二距离,所述第一距离为所述第二关键点与所述目标中心点之间的距离,所述第二距离为所述第一关键点与所述目标中心点之间的距离;
    第三确定单元,被配置为确定所述目标区域,所述目标区域基于多个第二关键点确定。
  17. 根据权利要求16所述的装置,其特征在于,所述第一确定单元,被配置为将所述多个第一关键点的中心点确定为所述目标中心点;
    所述第一确定单元,被配置为将部分第一关键点的中心点确定为所述目标中心点,所述部分第一关键点位于所述区域的中心区域内。
  18. 根据权利要求12所述的装置,其特征在于,所述第一确定模块包括:
    第四确定单元,被配置为确定第三图像中的多个第三关键点,所述多个第三关键点为所述目标部位的关键点,所述第三图像为所述第一图像的前一帧图像;
    第五确定单元,被配置为确定第一图像中的多个第四关键点,所述多个第四关键点为所述目标部位的关键点,且,所述第四关键点通过关键点确定模型确定;
    第六确定单元,被配置为确定所述多个第一关键点,所述多个第一关键点基于所述多个第三关键点和所述多个第四关键点确定。
  19. 根据权利要求18所述的装置,其特征在于,所述第六确定单元,被配置为对于每个第四关键点,确定所述多个第三关键点中的第一目标关键点,所述第一目标关键点与所述第四关键点具有相同像素值;确定第一位置和第二位置的平均位置,所述第一位置为所述第一目标关键点的位置,所述第二位置为所述第四关键点的位置;获取所述第一关键点,所述第一关键点为通过将所述第四关键点的像素值渲染到所述平均位置得到。
  20. 根据权利要求18所述的装置,其特征在于,所述第六确定单元,被配置为响应于所述目标部位被遮挡,确定所述多个第三关键点中的第二目标关键点,所述第二目标关键点为被遮挡的目标部位对应的关键点;获取所述多个第一关键点,所述多个第一关键点基于所述第二目标关键点和所述多个第四关键点组成;
    所述第六确定单元,被配置为响应于所述目标部位被遮挡,将所述多个第三关键点作为所述多个第一关键点。
  21. 根据权利要求18所述的装置,其特征在于,所述装置还包括:
    第三确定模块,被配置为响应于所述目标部位被遮挡,确定第二调整参数,所述第二调整参数为用于调整所述第三图像的参数;
    第四确定模块,被配置为确定第一调整参数,所述第一调整参数基于预设幅度调整所述第二调整参数得到。
  22. 根据权利要求21所述的装置,其特征在于,所述装置还包括:
    第五确定模块,被配置为确定帧数,所述帧数为所述目标部位被遮挡的图像的连续帧数;
    图像处理模块,还被配置为响应于所述帧数达到目标帧数,停止对下一帧图像进行图像处理。
  23. 一种电子设备,其特征在于,所述电子设备包括:一个或多个处理器,
    用于存储所述一个或多个处理器可执行指令的易失性或非易失性存储器;
    其中,所述一个或多个处理器被配置为执行以下步骤:
    确定第一图像中的多个第一关键点,所述多个第一关键点为目标部位的关键点;
    确定所述第一图像的目标区域,所述目标区域基于对所述多个第一关键点对应的区域进行扩展得到;
    获取第二图像,所述第二图像通过调整第一局部图像和第二局部图像中像素点的位置得到,所述调整基于所述区域的中心点和第一调整参数,所述第一局部图像为所述区域对应的图像,所述第二局部图像为所述目标区域中除所述第一局部图像外的图像。
  24. 根据权利要求23所述的电子设备,其特征在于,所述一个或多个处理器被配置为执行以下步骤:
    基于所述区域的中心点和所述第一调整参数,调整所述第一局部图像的形状;
    将所述第二局部图像中的像素点分散填充到所述第二局部图像中,得到所述第二图像。
  25. 根据权利要求24所述的电子设备,其特征在于,所述一个或多个处理器被配置为执行以下步骤:
    响应于所述第一局部图像被缩小,确定第一移动方向,所述第一移动方向为靠近所述中心点;
    基于所述第一调整参数,确定第一移动距离;
    确定所述第二图像,所述第二图像为将所述第二局部图像中的像素点,向所述第一移动方向移动所述第一移动距离得到。
  26. 根据权利要求24所述的电子设备,其特征在于,所述一个或多个处理器被配置为执行以下步骤:
    响应于所述第一局部图像被放大,确定第二移动方向,所述第二移动方向为远离所述中心点;
    基于所述第一调整参数,确定第二移动距离;
    确定所述第二图像,所述第二图像为将所述第二局部图像中的像素点,向所述第二移动方向移动所述第二移动距离得到。
  27. 根据权利要求23所述的电子设备,其特征在于,所述一个或多个处理器被配置为执行以下步骤:
    确定目标中心点,所述目标中心点基于所述多个第一关键点得到;
    对于每个第一关键点,确定第二关键点,所述第二关键点、所述第一关键点和所述目标中心点在同一直线上,且第一距离大于第二距离,所述第一距离为所述第二关键点与所述目标中心点之间的距离,所述第二距离为所述第一关键点与所述目标中心点之间的距离;
    确定所述目标区域,所述目标区域基于多个第二关键点确定。
  28. 根据权利要求27所述的电子设备,其特征在于,所述一个或多个处理器被配置为执行以下至少一个步骤:
    将所述多个第一关键点的中心点确定为所述目标中心点;
    将部分第一关键点的中心点确定为所述目标中心点,所述部分第一关键点位于所述区域的中心区域内。
  29. 根据权利要求23所述的电子设备,其特征在于,所述一个或多个处理器被配置为执行以下步骤:
    确定第三图像中的多个第三关键点,所述多个第三关键点为所述目标部位的关键点,所述第三图像为所述第一图像的前一帧图像;
    确定第一图像中的多个第四关键点,所述多个第四关键点为所述目标部位的关键点,且,所述第四关键点通过关键点确定模型确定;
    确定所述多个第一关键点,所述多个第一关键点基于所述多个第三关键点和所述多个第四关键点确定。
  30. 根据权利要求29所述的电子设备,其特征在于,所述一个或多个处理器被配置为执行以下步骤:
    对于每个第四关键点,确定所述多个第三关键点中的第一目标关键点,所述第一目标关键点与所述第四关键点具有相同像素值;
    确定第一位置和第二位置的平均位置,所述第一位置为所述第一目标关键点的位置,所述第二位置为所述第四关键点的位置;
    获取所述第一关键点,所述第一关键点为通过将所述第四关键点的像素值渲染到所述平均位置得到。
  31. 根据权利要求29所述的电子设备,其特征在于,所述一个或多个处理器被配置为执行以下至少一个步骤:
    响应于所述目标部位被遮挡,确定所述多个第三关键点中的第二目标关键点,所述第二目标关键点为被遮挡的目标部位对应的关键点;获取所述多个第一关键点,所述多个第一关键点基于所述第二目标关键点和所述多个第四关键点组成;
    响应于所述目标部位被遮挡,将所述多个第三关键点作为所述多个第一关键点。
  32. 根据权利要求29所述的电子设备,其特征在于,所述一个或多个处理器被配置为执行以下步骤:
    响应于所述目标部位被遮挡,确定第二调整参数,所述第二调整参数为用于调整所述第三图像的参数;
    确定第一调整参数,所述第一调整参数基于预设幅度调整所述第二调整参数得到。
  33. 根据权利要求32所述的电子设备,其特征在于,所述一个或多个处理器被配置为执行以下步骤:
    确定帧数,所述帧数为所述目标部位被遮挡的图像的连续帧数;
    响应于所述帧数达到目标帧数,停止对下一帧图像进行图像处理。
  34. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有指令,所述指令被电子设备的处理器执行时,使所述电子设备能够执行以下步骤:
    确定第一图像中的多个第一关键点,所述多个第一关键点为目标部位的关键点;
    确定所述第一图像的目标区域,所述目标区域基于对所述多个第一关键点对应的区域进行扩展得到;
    获取第二图像,所述第二图像通过调整第一局部图像和第二局部图像中像素点的位置得到,所述调整基于所述区域的中心点和第一调整参数,所述第一局部图像为所述区域对应的图像,所述第二局部图像为所述目标区域中除所述第一局部图像外的图像。
PCT/CN2020/129503 2020-04-30 2020-11-17 图像处理方法及装置 WO2021218118A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022549510A JP2023514342A (ja) 2020-04-30 2020-11-17 画像処理方法および画像処理装置
US17/822,493 US20220405879A1 (en) 2020-04-30 2022-08-26 Method for processing images and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010364388.9A CN113596314B (zh) 2020-04-30 2020-04-30 图像处理方法、装置及电子设备
CN202010364388.9 2020-04-30

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/822,493 Continuation US20220405879A1 (en) 2020-04-30 2022-08-26 Method for processing images and electronic device

Publications (1)

Publication Number Publication Date
WO2021218118A1 true WO2021218118A1 (zh) 2021-11-04

Family

ID=78237279

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/129503 WO2021218118A1 (zh) 2020-04-30 2020-11-17 图像处理方法及装置

Country Status (4)

Country Link
US (1) US20220405879A1 (zh)
JP (1) JP2023514342A (zh)
CN (1) CN113596314B (zh)
WO (1) WO2021218118A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787878A (zh) * 2016-02-25 2016-07-20 杭州格像科技有限公司 一种美颜处理方法及装置
CN108389155A (zh) * 2018-03-20 2018-08-10 北京奇虎科技有限公司 图像处理方法、装置及电子设备
CN108564531A (zh) * 2018-05-08 2018-09-21 麒麟合盛网络技术股份有限公司 一种图像处理方法及装置
CN109376671A (zh) * 2018-10-30 2019-02-22 北京市商汤科技开发有限公司 图像处理方法、电子设备及计算机可读介质
CN109584151A (zh) * 2018-11-30 2019-04-05 腾讯科技(深圳)有限公司 人脸美化方法、装置、终端及存储介质
CN109584153A (zh) * 2018-12-06 2019-04-05 北京旷视科技有限公司 修饰眼部的方法、装置和系统

Also Published As

Publication number Publication date
CN113596314A (zh) 2021-11-02
US20220405879A1 (en) 2022-12-22
CN113596314B (zh) 2022-11-11
JP2023514342A (ja) 2023-04-05


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20933277; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2022549510; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20.03.2023))
122 Ep: pct application non-entry in european phase (Ref document number: 20933277; Country of ref document: EP; Kind code of ref document: A1)