WO2021218118A1 - Image processing method and device - Google Patents
Image processing method and device
- Publication number: WO2021218118A1
- Application: PCT/CN2020/129503
- Authority: WIPO (PCT)
Classifications
- G06T3/18—Image warping, e.g. rearranging pixels individually
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
- H04N23/80—Camera processing pipelines; Components thereof
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- the present disclosure relates to the field of computer technology, and in particular to an image processing method and device.
- the embodiments of the present disclosure provide an image processing method and device, which can optimize the image processing effect.
- the technical solution is as follows:
- an image processing method including:
- the second image is obtained by adjusting the positions of pixels in the first partial image and the second partial image, where the adjustment is based on the center point of the region and the first adjustment parameter, the first partial image is the image corresponding to the region, and the second partial image is the image in the target area other than the first partial image.
- the acquiring the second image includes:
- the pixels in the second partial image are scattered and filled into the second partial image to obtain the second image.
- the dispersing and filling the pixels in the second partial image into the second partial image to obtain the second image includes:
- the second image is determined, and the second image is obtained by moving pixels in the second partial image by the first moving distance in the first moving direction.
- the dispersing and filling the pixels in the second partial image into the second partial image to obtain the second image includes:
- the second image is determined, and the second image is obtained by moving pixels in the second partial image by the second moving distance in the second moving direction.
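The shrink/enlarge filling steps above can be sketched as a radial pixel move: pixels of the second partial image travel along the line through the region center point, toward it when the first partial image is reduced and away from it when it is enlarged. This is a minimal sketch; scaling the moving distance by the adjustment parameter times the distance from the center is an assumption for illustration, since the claims only require the distance to be determined from the first adjustment parameter.

```python
import numpy as np

def fill_surrounding(pixels, center, adjust, shrink=True):
    """Move surrounding-region (second partial image) pixel coordinates
    along the line through the region center point: toward it when the
    first partial image was reduced, away from it when it was enlarged.

    pixels : (N, 2) float array of (x, y) pixel coordinates
    center : (2,) float array, the center point of the region
    adjust : first adjustment parameter; the moving distance is assumed
             to be `adjust` times the pixel's distance from the center
    """
    offsets = pixels - center
    dist = np.linalg.norm(offsets, axis=1, keepdims=True)
    dist = np.maximum(dist, 1e-6)            # guard against a pixel sitting on the center
    direction = offsets / dist               # unit vector pointing away from the center
    move = adjust * dist                     # first/second moving distance
    sign = -1.0 if shrink else 1.0           # toward center on shrink, away on enlarge
    return pixels + sign * move * direction
```

A pixel at distance 10 from the center with `adjust=0.1` ends up at distance 9 after a shrink, or 11 after an enlargement, so the surrounding image follows the deformed target part instead of leaving a gap.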
- the determining the target area of the first image includes:
- for each first key point, a second key point is determined, where the second key point, the first key point, and the target center point are on the same straight line, and the first distance is greater than the second distance, the first distance being the distance between the second key point and the target center point, and the second distance being the distance between the first key point and the target center point;
- the target area is determined, and the target area is determined based on a plurality of second key points.
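The target-area construction above (a second key point collinear with each first key point and the target center point, at a greater distance from the center) amounts to scaling the key points outward from their center. A minimal sketch, assuming the target center point is the mean of the first key points and using a hypothetical expansion factor not specified by the claims:

```python
import numpy as np

def expand_keypoints(first_keypoints, scale=1.5):
    """Place, for each first key point, a second key point on the ray
    from the target center point through that key point, so the three
    are collinear and the second key point is farther from the center.

    first_keypoints : (N, 2) float array of target-part key points
    scale : hypothetical expansion factor (> 1)
    """
    # target center point taken as the mean of the first key points
    center = first_keypoints.mean(axis=0)
    return center + scale * (first_keypoints - center)
```

The polygon bounded by the returned second key points is then used as the target area, so the warp has a buffer zone around the target part.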
- the determining the target center point includes any one of the following:
- the central point of a part of the first key points is determined as the target central point, and the part of the first key points is located in the central area of the area.
- the determining multiple first key points in the first image includes:
- the plurality of third key points are key points of the target part, and the third image is an image of the previous frame of the first image;
- the plurality of fourth key points are key points of the target part, and the fourth key points are determined by a key point determination model
- the multiple first key points are determined, and the multiple first key points are determined based on the multiple third key points and the multiple fourth key points.
- the determining the plurality of first key points includes:
- For each fourth key point, determine a first target key point among the plurality of third key points, where the first target key point and the fourth key point have the same pixel value;
- the determining the plurality of first key points includes any one of the following:
- a second target key point among the plurality of third key points is determined, where the second target key point is a key point corresponding to the occluded target part, and the plurality of first key points are acquired, the plurality of first key points being composed based on the second target key point and the plurality of fourth key points;
- the plurality of third key points are used as the plurality of first key points.
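The key-point combination above can be sketched as follows. The pairing of third and fourth key points is assumed here to be index-aligned (the claims pair them by equal pixel value); when the target part is occluded, the previous frame's key points are reused so the rendered effect does not jump:

```python
import numpy as np

def combine_keypoints(third_kps, fourth_kps, occluded=False):
    """Derive the first key points of the current frame from the previous
    frame's key points (third) and the model-detected key points (fourth).

    When the target part is visible, each first key point takes the
    average position of a matched pair of key points; when the target
    part is occluded, the previous frame's key points are reused.
    """
    if occluded:
        return third_kps.copy()              # fall back to the previous frame
    return (third_kps + fourth_kps) / 2.0    # average of the first and second positions
```

Averaging the previous-frame and current-frame positions acts as a simple temporal smoothing, which reduces jitter of the warped region between consecutive video frames.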
- the method further includes:
- the method further includes:
- an image processing device including:
- the first determining module is configured to determine a plurality of first key points in the first image, and the plurality of first key points are key points of the target part;
- a second determining module configured to determine a target area of the first image, the target area being obtained based on expanding the areas corresponding to the plurality of first key points;
- an image acquisition module configured to acquire a second image obtained by adjusting the positions of pixels in the first partial image and the second partial image, the adjustment being based on the center point of the region and the first adjustment parameter, where the first partial image is the image corresponding to the region and the second partial image is the image in the target area excluding the first partial image.
- the image acquisition module includes:
- a shape adjustment unit configured to adjust the shape of the first partial image based on the center point of the region and the first adjustment parameter;
- a filling unit configured to dispersedly fill the pixels in the second partial image into the second partial image to obtain the second image.
- the filling unit is configured to: determine a first movement direction in response to the first partial image being reduced, the first movement direction being toward the center point; determine a first moving distance based on the first adjustment parameter; and determine the second image, which is obtained by moving the pixels in the second partial image by the first moving distance in the first moving direction.
- the filling unit is configured to: determine a second movement direction in response to the first partial image being enlarged, the second movement direction being away from the center point; determine a second moving distance based on the first adjustment parameter; and determine the second image, which is obtained by moving the pixels in the second partial image by the second moving distance in the second moving direction.
- the second determining module includes:
- a first determining unit configured to determine a target center point, the target center point being obtained based on the plurality of first key points
- the second determining unit is configured to determine, for each first key point, a second key point, where the second key point, the first key point, and the target center point are on the same straight line, and the first distance is greater than the second distance, the first distance being the distance between the second key point and the target center point, and the second distance being the distance between the first key point and the target center point;
- the third determining unit is configured to determine the target area, and the target area is determined based on a plurality of second key points.
- the first determining unit is configured to determine a center point of the plurality of first key points as the target center point
- the first determining unit is configured to determine a center point of a part of the first key points as the target center point, where that part of the first key points is located in a central area of the region.
- the first determining module includes:
- the fourth determining unit is configured to determine a plurality of third key points in a third image, where the plurality of third key points are key points of the target part and the third image is the previous frame image of the first image;
- the fifth determining unit is configured to determine a plurality of fourth key points in the first image, where the plurality of fourth key points are key points of the target part and the fourth key points are determined by a key point determination model;
- the sixth determining unit is configured to determine the plurality of first key points, and the plurality of first key points are determined based on the plurality of third key points and the plurality of fourth key points.
- the sixth determining unit is configured to: determine, for each fourth key point, a first target key point among the plurality of third key points, where the first target key point and the fourth key point have the same pixel value; determine the average position of a first position and a second position, the first position being the position of the first target key point and the second position being the position of the fourth key point; and obtain the first key point, which is obtained by rendering the pixel value of the fourth key point to the average position.
- the sixth determining unit is configured to determine a second target key point of the plurality of third key points in response to the target part being occluded, and the second target key point is Key points corresponding to the occluded target part; acquiring the plurality of first key points, and the plurality of first key points are composed based on the second target key point and the plurality of fourth key points;
- the sixth determining unit is configured to, in response to the target part being blocked, use the plurality of third key points as the plurality of first key points.
- the device further includes:
- a third determining module configured to determine a second adjustment parameter in response to the target part being occluded, where the second adjustment parameter is a parameter for adjusting the third image;
- the fourth determining module is configured to determine a first adjustment parameter, where the first adjustment parameter is obtained by adjusting the second adjustment parameter based on a preset amplitude.
- the device further includes:
- a fifth determining module configured to determine the number of frames, where the number of frames is the number of consecutive frames of the image where the target part is blocked;
- the image processing module is further configured to stop performing image processing on the next frame of image in response to the number of frames reaching the target number of frames.
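The occlusion handling above, where the adjustment parameter is reduced by a preset amplitude while the target part stays blocked and processing stops once the occluded-frame count reaches the target number of frames, can be sketched as below. The `step` and `max_frames` values are hypothetical, not taken from the disclosure:

```python
def occluded_adjustment(second_adjust, occluded_frames, step=0.2, max_frames=5):
    """Return the first adjustment parameter while the target part stays
    occluded, or None once processing should stop.

    second_adjust   : adjustment parameter used for the previous (third) image
    occluded_frames : number of consecutive frames with the target part blocked
    step, max_frames: hypothetical preset amplitude and target frame count
    """
    if occluded_frames >= max_frames:
        return None                          # stop image processing on the next frames
    # shrink the parameter by the preset amplitude per occluded frame,
    # so the effect fades out instead of vanishing abruptly
    return max(0.0, second_adjust - step * occluded_frames)
```

Fading the parameter toward zero before stopping gives a smooth transition when the target part disappears from view, instead of the effect popping off in a single frame.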
- an electronic device including: one or more processors,
- volatile or non-volatile memory for storing instructions executable by the one or more processors;
- the one or more processors are configured to perform the following steps:
- the second image is obtained by adjusting the positions of pixels in the first partial image and the second partial image, where the adjustment is based on the center point of the region and the first adjustment parameter, the first partial image is the image corresponding to the region, and the second partial image is the image in the target area other than the first partial image.
- the one or more processors are configured to perform the following steps:
- the pixel points in the second partial image are scattered and filled into the second partial image to obtain the second image.
- the one or more processors are configured to perform the following steps:
- the one or more processors are configured to perform the following steps:
- the second image is determined, and the second image is obtained by moving pixels in the second partial image by the second moving distance in the second moving direction.
- the one or more processors are configured to perform the following steps:
- for each first key point, a second key point is determined, where the second key point, the first key point, and the target center point are on the same straight line, and the first distance is greater than the second distance, the first distance being the distance between the second key point and the target center point, and the second distance being the distance between the first key point and the target center point;
- the target area is determined, and the target area is determined based on a plurality of second key points.
- the one or more processors are configured to perform at least one of the following steps:
- the central point of a part of the first key points is determined as the target central point, and the part of the first key points is located in the central area of the area.
- the one or more processors are configured to perform the following steps:
- the plurality of third key points are key points of the target part, and the third image is an image of the previous frame of the first image;
- the plurality of fourth key points are key points of the target part, and the fourth key points are determined by a key point determination model
- the multiple first key points are determined, and the multiple first key points are determined based on the multiple third key points and the multiple fourth key points.
- the one or more processors are configured to perform the following steps:
- For each fourth key point, determine a first target key point among the plurality of third key points, where the first target key point and the fourth key point have the same pixel value;
- the one or more processors are configured to perform at least one of the following steps:
- a second target key point among the plurality of third key points is determined, where the second target key point is a key point corresponding to the occluded target part, and the plurality of first key points are acquired, the plurality of first key points being composed based on the second target key point and the plurality of fourth key points;
- the plurality of third key points are used as the plurality of first key points.
- the one or more processors are configured to perform the following steps:
- a first adjustment parameter is determined, where the first adjustment parameter is obtained by adjusting the second adjustment parameter based on a preset amplitude.
- the one or more processors are configured to perform the following steps:
- a computer-readable storage medium having instructions stored thereon.
- when the instructions are executed by a processor of an electronic device, the electronic device is caused to perform the following steps:
- the second image is obtained by adjusting the positions of pixels in the first partial image and the second partial image, where the adjustment is based on the center point of the region and the first adjustment parameter, the first partial image is the image corresponding to the region, and the second partial image is the image in the target area other than the first partial image.
- a computer program product, where, when instructions in the computer program product are executed by a processor of an electronic device, the electronic device is caused to execute the following steps:
- the second image is obtained by adjusting the positions of pixels in the first partial image and the second partial image, where the adjustment is based on the center point of the region and the first adjustment parameter, the first partial image is the image corresponding to the region, and the second partial image is the image in the target area other than the first partial image.
- the target area is obtained by expanding the area where the target part is located in the first image, so that the change of the target part in the adjusted second image spreads gradually into other areas of the first image. This prevents the adjustment of the target part from distorting pixels outside the target area and thus optimizes the image processing effect.
- Fig. 1 is a schematic diagram showing an image processing method according to an exemplary embodiment
- Fig. 2 is a block diagram showing a terminal according to an exemplary embodiment
- Fig. 3 is a block diagram showing a server according to an exemplary embodiment
- Fig. 4 is a block diagram showing an image processing device according to an exemplary embodiment
- Fig. 5 is a flowchart showing an image processing method according to an exemplary embodiment
- Fig. 6 is a flowchart showing an image processing method according to an exemplary embodiment
- Fig. 7 is a schematic diagram showing key points of a facial area according to an exemplary embodiment
- Fig. 8 is a schematic diagram showing a target area according to an exemplary embodiment.
- the user information involved in this disclosure is information authorized by the user or fully authorized by all parties.
- the present disclosure provides an image processing method.
- the electronic device completes the image processing of the first image by adjusting the target part in the first image.
- the implementation environment of the embodiment of the present disclosure includes a user and an electronic device.
- the user triggers an image processing operation
- the electronic device receives the image processing operation, and performs image processing on the first image according to the image processing operation.
- the first image is a captured still image, or the first image is an image in a video stream. In the embodiments of the present disclosure, this is not specifically limited.
- the electronic device determines the first image from the video stream.
- the video stream is a video stream corresponding to a long video, or the video stream is a video stream corresponding to a short video.
- in response to the first image being a static image, the electronic device either collects the static image itself first, or receives the static image sent by another electronic device.
- the electronic device has an image capture function.
- the electronic device collects static images.
- the process of the electronic device collecting the static image is: the electronic device displays the captured image in the viewfinder, and in response to receiving the user's confirmation operation, the electronic device determines that the image in the viewfinder is the first image based on the confirmation operation.
- the electronic device receives a static image sent by another electronic device and determines the received static image as the first image.
- the process of collecting the static image by other electronic devices is similar to the process of collecting the static image by the electronic device, and will not be repeated here.
- in response to the first image being an image in a video stream, the electronic device either collects the video stream itself first, or receives the video stream sent by another electronic device.
- the electronic device has an image capture function.
- the electronic device receives the start shooting instruction input by the user and starts to collect the video stream.
- in response to receiving the end shooting instruction, the electronic device stops collecting the video stream, determines the video stream between the start shooting instruction and the end shooting instruction, and determines any frame of image from that video stream as the first image.
- the electronic device receives a video stream sent by another electronic device, and determines the first image from the received video stream.
- the process of collecting video streams by other electronic devices is similar to the process of collecting video streams by electronic devices, and will not be repeated here.
- the electronic device performs image processing on the first image, and directly outputs the second image after image processing.
- the electronic device first collects the first image and outputs the first image; in response to receiving the image processing instruction, it performs image processing on the first image to obtain the second image, which is not specifically limited in the embodiment of the present disclosure.
- the image processing instruction carries the target part of the current image processing and the first adjustment parameter
- the electronic device determines the target part of the current image processing and the first adjustment parameter based on the image processing instruction.
- the electronic device sets the target part and the first adjustment parameter for image processing in advance, and in response to receiving the image adjustment instruction, directly performs image processing on the first image based on the target part and the first adjustment parameter.
- the target part is a facial feature or facial contour in the facial area, for example, the eyes, eyebrows, bridge of the nose, mouth, or cheeks; or the target part is another body part, for example, the waist or legs.
- the electronic device is a terminal, for example, the electronic device is a camera, a mobile phone, a tablet computer, or a wearable device.
- an image processing application is installed in the terminal, and image processing is performed through the image processing application.
- the image processing application is a camera application, a beauty camera application, a video shooting application, and the like.
- the electronic device is a server for image processing.
- the electronic device receives the first image to be processed sent by the other electronic device, performs image processing on the first image to obtain the second image, and returns the obtained second image to the other electronic device.
- the server is a single server, a server cluster composed of multiple servers, or a cloud server.
- an electronic device is also provided, and the electronic device includes: one or more processors,
- volatile or non-volatile memory for storing instructions executable by the one or more processors;
- one or more processors are configured to perform the following steps:
- the target area of the first image is obtained based on the expansion of the areas corresponding to the multiple first key points;
- the second image is obtained by adjusting the position of the pixel points in the first partial image and the second partial image. The adjustment is based on the center point of the region and the first adjustment parameter.
- the first partial image is the image corresponding to the region, and the second partial image is the image in the target area other than the first partial image.
- one or more processors are configured to perform the following steps:
- one or more processors are configured to perform the following steps:
- the second image is determined, and the second image is obtained by moving the pixels in the second partial image by the second moving distance in the second moving direction.
- the target center point is obtained based on multiple first key points
- For each first key point, the second key point is determined, where the second key point, the first key point, and the target center point are on the same straight line, and the first distance is greater than the second distance, the first distance being the distance between the second key point and the target center point, and the second distance being the distance between the first key point and the target center point;
- the target area is determined, and the target area is determined based on a plurality of second key points.
- one or more processors are configured to perform at least one of the following steps:
- the center point of a part of the first key points is determined as the target center point, and that part of the first key points is located in the central area of the region.
- one or more processors are configured to perform the following steps:
- Multiple first key points are determined, and multiple first key points are determined based on multiple third key points and multiple fourth key points.
- For each fourth key point, determine the first target key point among the plurality of third key points, where the first target key point and the fourth key point have the same pixel value;
- one or more processors are configured to perform at least one of the following steps:
- the multiple third key points are used as multiple first key points.
- one or more processors are configured to perform the following steps:
- the first adjustment parameter is determined, and the first adjustment parameter is obtained by adjusting the second adjustment parameter based on the preset amplitude.
- one or more processors are configured to perform the following steps:
- the target area is obtained by expanding the area where the target part is located in the first image, so that the change of the target part in the adjusted second image spreads gradually into other areas of the first image. This prevents the adjustment of the target part from distorting pixels outside the target area and thus optimizes the image processing effect.
- the terminal 200 includes: one or more processors 201 and a volatile or non-volatile memory 202.
- the processor 201 includes one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
- the processor 201 adopts at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
- the processor 201 includes a main processor and a co-processor.
- the main processor is a processor used to process data in the awake state, and is also called a CPU (Central Processing Unit);
- the coprocessor is a low-power processor used to process data in the standby state.
- the processor 201 is integrated with a GPU (Graphics Processing Unit), and the GPU is used for rendering and drawing content that needs to be displayed on the display screen.
- the processor 201 further includes an AI (Artificial Intelligence) processor, and the AI processor is used to process computing operations related to machine learning.
- the terminal 200 may optionally further include: a peripheral device interface 203 and at least one peripheral device.
- the processor 201, the memory 202, and the peripheral device interface 203 are connected by a bus or signal line.
- Each peripheral device is connected to the peripheral device interface 203 through a bus, a signal line or a circuit board.
- the peripheral device includes: at least one of a radio frequency circuit 204, a touch display screen 205, a camera component 206, an audio circuit 207, a positioning component 208, and a power supply 209.
- the peripheral device interface 203 can be used to connect at least one peripheral device to the processor 201 and the memory 202.
- the at least one peripheral device is an I/O (Input/Output, input/output) related peripheral device.
- in some embodiments, the processor 201, the memory 202, and the peripheral device interface 203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 201, the memory 202, and the peripheral device interface 203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
- the radio frequency circuit 204 is used for receiving and transmitting RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals.
- the radio frequency circuit 204 communicates with a communication network and other communication devices through electromagnetic signals.
- the radio frequency circuit 204 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
- the radio frequency circuit 204 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a user identity module card, and so on.
- the radio frequency circuit 204 communicates with other electronic devices through at least one wireless communication protocol.
- the wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks.
- the radio frequency circuit 204 also includes NFC (Near Field Communication) related circuits, which is not limited in the present disclosure.
- the display screen 205 is used to display a UI (User Interface, user interface).
- the UI includes graphics, text, icons, videos, and any combination of them.
- the display screen 205 also has the ability to collect touch signals on or above the surface of the display screen 205.
- the touch signal is input to the processor 201 as a control signal for processing.
- the display screen 205 is also used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
- one display screen 205 is provided on the front panel of the terminal 200; in other embodiments, there are at least two display screens 205, which are respectively provided on different surfaces of the terminal 200 or in a folding design;
- the display screen 205 is a flexible display screen, which is arranged on the curved surface or the folding surface of the terminal 200.
- the display screen 205 is also configured as a non-rectangular irregular pattern, that is, a special-shaped screen.
- the display screen 205 is made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
- the camera assembly 206 is used to capture images or videos.
- the camera assembly 206 includes a front camera and a rear camera.
- the front camera is arranged on the front panel of the terminal 200
- the rear camera is arranged on the back of the terminal 200.
- the camera assembly 206 also includes a flash.
- the flash is a single-color temperature flash or a dual-color temperature flash. Dual color temperature flash refers to a combination of warm light flash and cold light flash used for light compensation under different color temperatures.
- the audio circuit 207 includes a microphone and a speaker.
- the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 201 for processing, or input to the radio frequency circuit 204 to implement voice communication.
- the microphone is an array microphone or an omnidirectional acquisition microphone.
- the speaker is used to convert the electrical signal from the processor 201 or the radio frequency circuit 204 into sound waves.
- the speakers are traditional thin-film speakers, or piezoelectric ceramic speakers.
- in response to the speaker being a piezoelectric ceramic speaker, the audio circuit 207 can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement.
- the audio circuit 207 also includes a headphone jack.
- the positioning component 208 is used to locate the current geographic location of the terminal 200 to implement navigation or LBS (Location Based Service, location-based service).
- the positioning component 208 is a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
- the power supply 209 is used to supply power to various components in the terminal 200.
- the power source 209 is alternating current, direct current, disposable batteries, or rechargeable batteries.
- the rechargeable battery supports wired charging or wireless charging.
- the rechargeable battery is also used to support fast charging technology.
- the terminal 200 further includes one or more sensors 210.
- the one or more sensors 210 include, but are not limited to: an acceleration sensor 211, a gyroscope sensor 212, a pressure sensor 213, a fingerprint sensor 214, an optical sensor 215, and a proximity sensor 216.
- the acceleration sensor 211 detects the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 200. For example, the acceleration sensor 211 is used to detect the components of gravitational acceleration on three coordinate axes.
- the processor 201 controls the touch screen 205 to display the user interface in a horizontal view or a vertical view according to the gravity acceleration signal collected by the acceleration sensor 211.
- the acceleration sensor 211 is also used for the collection of game or user motion data.
- the gyroscope sensor 212 detects the body direction and rotation angle of the terminal 200, and the gyroscope sensor 212 and the acceleration sensor 211 cooperate to collect the user's 3D actions on the terminal 200.
- the processor 201 implements the following functions according to the data collected by the gyroscope sensor 212: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
- the pressure sensor 213 is arranged on the side frame of the terminal 200 and/or the lower layer of the touch screen 205.
- the processor 201 performs left and right hand recognition or quick operation according to the holding signal collected by the pressure sensor 213.
- the processor 201 controls the operability controls on the UI interface according to the user's pressure operation on the touch display screen 205.
- the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
- the fingerprint sensor 214 is used to collect the user's fingerprint.
- the processor 201, or the fingerprint sensor 214 itself, identifies the user's identity based on the fingerprint collected by the fingerprint sensor 214. In response to the user's identity being recognized as a trusted identity, the processor 201 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings.
- the fingerprint sensor 214 is provided on the front, back, or side of the terminal 200. In response to a physical button or manufacturer logo being provided on the terminal 200, the fingerprint sensor 214 is integrated with the physical button or the manufacturer logo.
- the optical sensor 215 is used to collect the ambient light intensity.
- the processor 201 controls the display brightness of the touch screen 205 according to the ambient light intensity collected by the optical sensor 215. Specifically, in response to the high ambient light intensity, the display brightness of the touch screen 205 is increased; in response to the low ambient light intensity, the display brightness of the touch screen 205 is decreased.
- the processor 201 also dynamically adjusts the shooting parameters of the camera assembly 206 according to the ambient light intensity collected by the optical sensor 215.
- the proximity sensor 216, also called a distance sensor, is usually provided on the front panel of the terminal 200.
- the proximity sensor 216 is used to collect the distance between the user and the front of the terminal 200.
- in response to the proximity sensor 216 detecting that the distance between the user and the front of the terminal 200 is gradually decreasing, the processor 201 controls the touch screen 205 to switch from the on-screen state to the off-screen state; in response to the proximity sensor 216 detecting that the distance between the user and the front of the terminal 200 is gradually increasing, the processor 201 controls the touch screen 205 to switch from the off-screen state to the on-screen state.
- FIG. 2 does not constitute a limitation on the terminal 200; the terminal 200 can include more or fewer components than shown in the figure, combine some components, or adopt a different component arrangement.
- the electronic device is provided as a server.
- FIG. 3 is a schematic diagram showing the structure of a server according to an exemplary embodiment.
- the server 300 may vary greatly due to different configurations or performance, and includes one or more processors (Central Processing Unit, CPU) 301 and one or more memories 302, where at least one instruction is stored in the memory 302, and the at least one instruction is loaded and executed by the processor 301 to implement the methods provided in the foregoing method embodiments.
- the server 300 also has components such as a wired or wireless network interface, a keyboard, and an input/output interface, and the server also includes other components for implementing device functions, which will not be repeated here.
- a computer-readable storage medium is also provided, and the computer-readable storage medium stores instructions.
- when the instructions are executed by a processor of an electronic device, the electronic device can perform the following steps:
- a second image which is obtained by adjusting the positions of the pixels in the first partial image and the second partial image.
- the adjustment is based on the center point of the region and the first adjustment parameter.
- the first partial image corresponds to the region
- the second partial image is an image other than the first partial image in the target area.
- the computer-readable storage medium is a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
- the target area is obtained by expanding the area where the target part is located in the first image, so that the change of the target part in the adjusted second image gradually affects other areas in the first image. This prevents the adjustment of the target part from abruptly affecting pixels in other areas of the image and causing image distortion, thereby optimizing the image processing effect.
- the present disclosure also provides a computer program product.
- when the instructions in the computer program product are executed by the processor of the electronic device, the electronic device can execute the following steps:
- a second image which is obtained by adjusting the positions of the pixels in the first partial image and the second partial image.
- the adjustment is based on the center point of the region and the first adjustment parameter.
- the first partial image corresponds to the region
- the second partial image is an image other than the first partial image in the target area.
- the target area is obtained by expanding the area where the target part is located in the first image, so that the change of the target part in the adjusted second image gradually affects other areas in the first image. This prevents the adjustment of the target part from abruptly affecting pixels in other areas of the image and causing image distortion, thereby optimizing the image processing effect.
- Fig. 4 is a block diagram showing an image processing device according to an exemplary embodiment. Referring to Figure 4, the device includes:
- the first determining module 410 is configured to determine a plurality of first key points in the first image, and the plurality of first key points are key points of the target part;
- the second determining module 420 is configured to determine a target area of the first image, the target area being obtained based on the expansion of the areas corresponding to the plurality of first key points;
- the image acquisition module 430 is configured to acquire a second image, which is obtained by adjusting the positions of pixels in the first partial image and the second partial image, and the adjustment is based on the center point of the region and the first adjustment parameter.
- the first partial image is an image corresponding to the area
- the second partial image is an image other than the first partial image in the target area.
- the image acquisition module 430 includes:
- a shape adjustment unit configured to adjust the shape of the first partial image based on the center point of the region and the first adjustment parameter
- the filling unit is configured to dispersedly redistribute the pixels in the second partial image to obtain the second image.
- the filling unit is configured to: determine a first movement direction in response to the first partial image being reduced, where the first movement direction is toward the center point; determine a first moving distance based on the first adjustment parameter; and determine the second image, which is obtained by moving the pixel points in the second partial image along the first moving direction by the first moving distance.
- the filling unit is configured to: determine a second movement direction in response to the first partial image being enlarged, where the second movement direction is away from the center point; determine a second moving distance based on the first adjustment parameter; and determine the second image, which is obtained by moving the pixel points in the second partial image along the second moving direction by the second moving distance.
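As an illustrative sketch of the filling unit described above, the following Python fragment moves the pixel points of the second partial image toward the center point when the first partial image is reduced, or away from it when it is enlarged. The function name and the linear mapping from the first adjustment parameter to the moving distance are assumptions for illustration; the embodiments do not fix that mapping.

```python
import numpy as np

def fill_second_partial_image(pixels, center, adjust_param, enlarged):
    """Move second-partial-image pixels along the line through the center.

    pixels: (N, 2) pixel coordinates in the second partial image.
    center: (2,) coordinates of the region's center point.
    adjust_param: first adjustment parameter, used directly as the moving
        distance (an assumption; the text leaves the mapping unspecified).
    enlarged: True moves pixels away from the center, False moves them
        toward it.
    """
    pixels = np.asarray(pixels, dtype=float)
    center = np.asarray(center, dtype=float)
    vectors = pixels - center
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                  # avoid division by zero at center
    directions = vectors / norms             # unit directions from the center
    sign = 1.0 if enlarged else -1.0         # away from / toward the center
    return pixels + sign * adjust_param * directions
```

A pixel at (10, 0) with center (0, 0) and distance 2 moves to (8, 0) when the first partial image is reduced, and to (12, 0) when it is enlarged.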
- the second determining module 420 includes:
- the first determining unit is configured to determine a target center point, the target center point being obtained based on the plurality of first key points;
- the second determining unit is configured to determine a second key point for each first key point, where the second key point, the first key point, and the target center point are on the same straight line, and the first distance is greater than the second distance, the first distance being the distance between the second key point and the target center point, and the second distance being the distance between the first key point and the target center point;
- the first determining unit is configured to determine a center point of the plurality of first key points as the target center point
- the first determining unit is configured to determine a center point of some of the first key points as the target center point, where these first key points are located in a central area of the region.
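The two alternatives of the first determining unit can be sketched as follows: the target center point is the centroid of all first key points, or of only those located in the central area of the region. The boolean-mask selection of the central key points is a hypothetical illustration, since the selection rule is not specified here.

```python
import numpy as np

def target_center_point(first_key_points, central_mask=None):
    """Return the target center point as the centroid of the key points.

    first_key_points: (N, 2) coordinates of the first key points.
    central_mask: optional boolean mask picking the subset of key points
        located in the central area (assumed interface, for illustration).
    """
    pts = np.asarray(first_key_points, dtype=float)
    if central_mask is not None:
        pts = pts[np.asarray(central_mask, dtype=bool)]
    return pts.mean(axis=0)          # centroid of the selected key points
```

For the four corner points of a unit square scaled by 2, the centroid is (1, 1); masking out a far-away point restricts the centroid to the central subset.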
- the first determining module 410 includes:
- the fourth determining unit is configured to determine a plurality of third key points in the third image, where the plurality of third key points are key points of the target part, and the third image is the previous frame image of the first image;
- the fifth determining unit is configured to determine a plurality of fourth key points in the first image, the plurality of fourth key points are key points of the target part, and the fourth key points are determined by a key point determination model;
- the sixth determining unit is configured to determine the plurality of first key points based on the plurality of third key points and the plurality of fourth key points.
- the sixth determining unit is configured to: determine, for each fourth key point, a first target key point among the plurality of third key points, where the first target key point and the fourth key point have the same pixel value; determine the average position of a first position and a second position, the first position being the position of the first target key point and the second position being the position of the fourth key point; and obtain the first key point, which is obtained by rendering the pixel value of the fourth key point at the average position.
- the sixth determining unit is configured to: determine second target key points among the plurality of third key points in response to the target part being occluded, the second target key points being the key points corresponding to the occluded target part; and acquire the plurality of first key points, which are composed of the second target key points and the plurality of fourth key points;
- the sixth determining unit is configured to, in response to the target part being blocked, use the plurality of third key points as the plurality of first key points.
- the device further includes:
- the third determining module is configured to determine a second adjustment parameter in response to the target part being occluded, where the second adjustment parameter is a parameter for adjusting the third image;
- the fourth determining module is configured to determine a first adjustment parameter, where the first adjustment parameter is obtained by adjusting the second adjustment parameter based on a preset amplitude.
- the device further includes:
- the fifth determining module is configured to determine the number of frames, where the number of frames is the number of consecutive frames of the image where the target part is blocked;
- the image processing module is also configured to stop performing image processing on the next frame of image in response to the number of frames reaching the target number of frames.
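The occlusion handling described by the third, fourth, and fifth determining modules and the image processing module can be sketched as below: while the target part is occluded, the first adjustment parameter is derived from the previous frame's (second) adjustment parameter using a preset amplitude, and processing stops once the occluded frame count reaches the target frame count. The multiplicative decay is an assumption; the text only says the parameter is adjusted "based on a preset amplitude".

```python
class OcclusionHandler:
    """Track consecutive occluded frames and damp the adjustment parameter.

    preset_amplitude and target_frame_count are illustrative names for the
    quantities described in the text.
    """

    def __init__(self, preset_amplitude=0.5, target_frame_count=10):
        self.preset_amplitude = preset_amplitude
        self.target_frame_count = target_frame_count
        self.occluded_frames = 0

    def step(self, occluded, second_adjust_param):
        """Return (first_adjust_param, keep_processing) for one frame."""
        if not occluded:
            self.occluded_frames = 0
            return second_adjust_param, True       # process normally
        self.occluded_frames += 1
        if self.occluded_frames >= self.target_frame_count:
            return 0.0, False                      # stop image processing
        # Assumed rule: scale the previous parameter by the preset amplitude.
        return second_adjust_param * self.preset_amplitude, True
```

With an amplitude of 0.5 and a target count of 3, the parameter decays 1.0 → 0.5 → 0.25 over two occluded frames, and processing stops on the third.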
- When the image processing device provided in the above embodiment performs image processing, the division into the above functional modules is only used as an example for illustration.
- In practical applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the electronic device is divided into different functional modules to complete all or part of the functions described above.
- the image processing device provided in the foregoing embodiment and the image processing method embodiment belong to the same concept, and the specific implementation process is detailed in the method embodiment, which will not be repeated here.
- the target area is obtained by expanding the area where the target part is located in the first image, so that the change of the target part in the adjusted second image gradually affects other areas in the first image. This prevents the adjustment of the target part from abruptly affecting pixels in other areas of the image and causing image distortion, thereby optimizing the image processing effect.
- In the related art, in response to the user's desire to beautify a certain part of a face image, the electronic device generally obtains multiple key points corresponding to that part in the face image, and adjusts the positions of these key points in the face image to realize the beauty treatment of the part. For example, if the user wants to enlarge the eyes in the face image, the electronic device moves multiple key points corresponding to the eyes outward with the eyes as the center, so as to enlarge the eyes. For another example, if the user wants to thin the eyebrows in the face image, the electronic device moves multiple key points corresponding to the eyebrows inward with the eyebrows as the center, so as to realize the eyebrow-thinning processing.
- During the eye enlargement process, the multiple key points corresponding to the eyes move outward with the eyes as the center.
- the moved key points occupy the positions of the pixels around the eyes, so that the pixels around the eyes pile up.
- Similarly, the key points corresponding to the eyebrows move inward with the eyebrows as the center, so that pixels around the eyebrows are missing after the eyebrow-thinning processing. It can be seen that the image processing method in the related art causes image distortion due to sudden changes in the positions of image pixels, so the beautification effect on the face image is poor.
- Fig. 5 is a flowchart showing an image processing method according to an exemplary embodiment. Referring to Fig. 5, the image processing method includes the following steps:
- acquiring the second image includes:
- the pixels in the second partial image are dispersedly redistributed to obtain the second image.
- the dispersed filling of the pixels in the second partial image to obtain the second image includes:
- the second image is determined, and the second image is obtained by moving the pixels in the second partial image by the first moving distance in the first moving direction.
- the dispersed filling of the pixels in the second partial image to obtain the second image includes:
- the second image is determined, and the second image is obtained by moving the pixels in the second partial image by the second moving distance in the second moving direction.
- the determining the target area of the first image includes:
- for each first key point, a second key point is determined.
- the second key point, the first key point, and the target center point are on the same straight line, and the first distance is greater than the second distance, where the first distance is the distance between the second key point and the target center point, and the second distance is the distance between the first key point and the target center point;
- the target area is determined, and the target area is determined based on a plurality of second key points.
- determining the target center point includes any one of the following:
- the center point of some of the first key points is determined as the target center point, where these first key points are located in the central area of the region.
- the determining multiple first key points in the first image includes:
- the plurality of third key points are key points of the target part, and the third image is an image of a previous frame of the first image;
- the multiple first key points are determined based on the multiple third key points and the multiple fourth key points.
- the determining the plurality of first key points includes:
- for each fourth key point, a first target key point is determined among the plurality of third key points, where the first target key point and the fourth key point have the same pixel value;
- the first key point is acquired, where the first key point is obtained by rendering the pixel value of the fourth key point at the average position.
- determining the plurality of first key points includes any one of the following:
- the plurality of third key points are used as the plurality of first key points.
- the method further includes:
- a first adjustment parameter is determined, and the first adjustment parameter is obtained by adjusting the second adjustment parameter based on a preset amplitude.
- the method further includes:
- the target area is obtained by expanding the area where the target part is located in the first image, so that the change of the target part in the adjusted second image gradually affects other areas in the first image. This prevents the adjustment of the target part from abruptly affecting pixels in other areas of the image and causing image distortion, thereby optimizing the image processing effect.
- Fig. 6 is a flowchart showing an image processing method according to an exemplary embodiment. Referring to Fig. 6, the image processing method includes the following steps:
- the electronic device determines multiple first key points in the first image.
- the multiple first key points are key points of the target part.
- the target part is the facial features or facial contours in the facial region in the first image.
- the target part is eyes, eyebrows, bridge of nose, mouth or cheeks.
- the target part is another body part, for example, waist, legs, etc.
- the target part is a target part selected by the user, or the target part is a predetermined target part.
- the electronic device receives the user's selection operation, and determines the target part selected by the user based on the selection operation.
- the electronic device sets the target part for shape adjustment in advance. In this step, the electronic device directly calls the previously set target part. In the embodiments of the present disclosure, this is not specifically limited.
- the target part is a part in the first image. Or, the target part is a plurality of parts in the first image. In the embodiments of the present disclosure, this is not specifically limited.
- the electronic device determines multiple first key points of the target part.
- FIG. 7 is a schematic diagram showing key points of a facial area according to an exemplary embodiment. If the target part is the eyebrows, the multiple first key points are the 10 key points numbered 19-28 in FIG. 7.
- In some embodiments, the electronic device determines the multiple first key points of the target part based only on the current first image; in other embodiments, the electronic device determines the multiple first key points corresponding to the target part with the help of the previous frame of the first image. In the former implementation, the electronic device directly determines the multiple first key points corresponding to the target part from the first image, thereby simplifying the processing flow of determining the multiple first key points and improving the efficiency of image processing.
- the electronic device determines a plurality of first key points corresponding to the target part based on a plurality of third key points, where the plurality of third key points are the target part in the previous frame of the first image The key point.
- the electronic device determines a plurality of third key points in the third image.
- the plurality of third key points are key points of the target part, and the third image is the previous frame image of the first image.
- In some embodiments, the electronic device stores multiple key points of the third image. In this step, the electronic device directly determines the multiple third key points from the stored key points according to the target part. In this implementation, the electronic device processes the acquired images in advance and stores the correspondence between each image and its key points, so that the multiple third key points can be determined directly according to the target part, which simplifies the process of acquiring the multiple third key points and improves processing efficiency.
- In other embodiments, the electronic device determines the plurality of third key points through a determination model. For example, the electronic device inputs the third image into a first determination model to obtain all key points of the third image, and determines the plurality of third key points from all the key points. Alternatively, the electronic device inputs the third image into a second determination model, and the second determination model outputs the plurality of third key points.
- the first determination model and the second determination model are any neural network model.
- the electronic device performs model training as required, and the first determination model and the second determination model are obtained through training by adjusting model parameters.
- the number of the key points can be set as required.
- the number of the key points is not specifically limited.
- the number of the key points is 100, 101, 105, and so on. See Fig. 7, which shows 101 key points of the face area.
- multiple third key points are determined through the model, thereby improving the accuracy of determining multiple third key points.
- the electronic device determines a plurality of fourth key points in the first image.
- the multiple fourth key points are key points of the target part, and the fourth key points are determined by a key point determination model.
- This step is similar to the process of determining multiple third key points by the electronic device in step (A1), and will not be repeated here.
- the electronic device determines the plurality of first key points.
- the plurality of first key points are determined based on the plurality of third key points and the plurality of fourth key points.
- the electronic device renders the pixel values of the multiple fourth key points at the average position to obtain multiple first key points.
- the process is implemented through the following steps (a1)-(a3), including:
- the electronic device determines the first target key point among the plurality of third key points.
- the first target key point and the fourth key point have the same pixel value.
- the electronic device first selects any fourth key point, and then determines the first target key point having the same pixel value as the fourth key point from the plurality of third key points.
- the electronic device determines the average position of the first position and the second position.
- the first position is the position of the first target key point
- the second position is the position of the fourth key point.
- In some embodiments, the electronic device establishes the same coordinate system in the third image and the first image, determines the coordinate positions of the third key point and the fourth key point with the same pixel value in that coordinate system to obtain the first position and the second position, and averages them to obtain the average position.
- In other embodiments, the electronic device establishes different coordinate systems in the third image and the first image, respectively determines the coordinate positions of the third key point and the fourth key point in their respective coordinate systems, and then uses the mapping relationship between the coordinate systems to obtain the coordinate positions of the third key point and the fourth key point in the same coordinate system.
- the electronic device first determines the first positions of the multiple third key points, and then determines the second positions of the multiple fourth key points. Alternatively, the electronic device first determines the second positions of the multiple fourth key points, and then determines the first positions of the multiple third key points. Or, the electronic device simultaneously determines the first positions of the multiple third key points and the second positions of the multiple fourth key points. In the embodiments of the present disclosure, the order in which the electronic device determines the first position and the second position is not specifically limited.
- the first key point is obtained by rendering the pixel value of the fourth key point to the average position.
- the electronic device can also average the positions of other pixels in the third image and the first image according to the correspondence between the third key points and the fourth key points, thereby adjusting the positions of all pixels in the first image.
- the electronic device can also perform a weighted summation of the pixel value of the third key point and the pixel value of the fourth key point, and render the weighted pixel values as the pixel values of the multiple first key points.
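The position averaging in steps (a1)-(a3) can be sketched as follows. This is a minimal illustration, not the patented implementation: matching by a shared key point identifier stands in for the pixel-value matching described above, and all names are hypothetical.

```python
def blend_key_points(third_points, fourth_points):
    """Average matched key point positions between the previous frame
    (third key points) and the current frame (fourth key points).

    Both inputs map a key point id to an (x, y) position; matching by id
    stands in for the pixel-value matching described in the text.
    """
    first_points = {}
    for kp_id, (x4, y4) in fourth_points.items():
        x3, y3 = third_points[kp_id]          # first target key point, step (a1)
        avg = ((x3 + x4) / 2, (y3 + y4) / 2)  # average position, step (a2)
        first_points[kp_id] = avg             # "render" at the average, step (a3)
    return first_points
```

For instance, a key point at (10, 10) in the previous frame and (14, 12) in the current frame yields a first key point at (12.0, 11.0), which damps frame-to-frame jitter of the detected target part.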
- the target part in the image may be occluded. If the target part in the first image is not occluded, the electronic device directly uses the multiple fourth key points as the multiple first key points. If the target part in the first image is occluded, so that the fourth key points acquired by the electronic device are missing some key points, the electronic device determines the multiple first key points by acquiring the key points of the target part in the previous frame of image. Correspondingly, after the electronic device acquires the first image, it recognizes the key points in the first image. In response to detecting that the target part in the first image is occluded, the electronic device acquires the previously acquired third image, the third image being an image with complete key points.
- the electronic device selects the missing fourth key points from the plurality of third key points.
- the process is: in response to the target part being occluded, the electronic device determines the second target key point among the plurality of third key points, the second target key point being the key point corresponding to the occluded target part; the multiple first key points are then acquired, the multiple first key points being composed based on the second target key point and the multiple fourth key points.
- alternatively, the electronic device directly uses the plurality of third key points as the first key points, and the process is: in response to the target part being occluded, the electronic device uses the plurality of third key points as the plurality of first key points.
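The occlusion fallback described above can be sketched in the same style. Again, matching by hypothetical key point ids replaces the pixel-value matching of the text, and `expected_ids` is an assumed list of the ids the target part should have:

```python
def resolve_key_points(third_points, fourth_points, expected_ids):
    """Fallback for occlusion: any key point id missing from the current
    frame's fourth key points (the occluded part of the target) is filled
    in with the previous frame's third key point, i.e. the second target
    key point described in the text."""
    if all(kp_id in fourth_points for kp_id in expected_ids):
        return dict(fourth_points)  # no occlusion: use the fourth key points directly
    merged = {}
    for kp_id in expected_ids:
        # prefer the detected fourth key point, fall back to the previous frame
        merged[kp_id] = fourth_points.get(kp_id, third_points[kp_id])
    return merged
```

The all-or-nothing variant of the text (using all third key points when occluded) would simply return `dict(third_points)` in the occluded branch instead of merging.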
- the electronic device determines a target area of the first image, where the target area is obtained based on the expansion of areas corresponding to the plurality of first key points.
- the electronic device determines the area corresponding to the plurality of first key points in the first image, and then expands the area to obtain the target area.
- the electronic device determines the area enclosed by the plurality of first key points.
- the electronic device sequentially connects adjacent first key points, and uses the image area enclosed by the plurality of first key points as the area.
- the electronic device expands the area based on the area and the multiple first key points to obtain the target area.
- the target center point is obtained based on the plurality of first key points.
- the electronic device determines the center point of the plurality of first key points as the target center point. For example, please continue to refer to FIG. 7 and take the target part as the eyebrows as an example.
- the multiple first key points corresponding to the eyebrows are 10 key points 19-28, and the electronic device determines the center points corresponding to these 10 key points.
- the electronic device first selects, from the plurality of first key points, the part of the first key points located in the central area, and then determines the center point of that part of the first key points.
- the electronic device determines the center point of the part of the first key points as the target center point, the part of the first key points being located in the central area of the area.
- for example, continue to take the target part being the eyebrows as an example: the multiple first key points corresponding to the eyebrows are the 10 key points 19-28, and the electronic device first selects, from key points 19-28, the part located in the central area.
- for each first key point, the electronic device determines a second key point; the second key point, the first key point, and the target center point are on the same straight line, and the first distance is greater than the second distance, where the first distance is the distance between the second key point and the target center point, and the second distance is the distance between the first key point and the target center point.
- the electronic device uses the target center point as an endpoint and makes a ray through each first key point. For example, continuing the example in step (1) above, the average point of the first key points 21, 22, 26, and 27 is taken as the target center point, and a ray is made in the direction of each of the first key points 19-28, obtaining 10 rays. Then, the electronic device determines a second key point on each ray to obtain the multiple second key points; the distance between the second key point and the target center point is greater than the distance between the first key point on that ray and the target center point.
- the electronic device intercepts a line segment on the obtained ray with the target center point as the end point, and the other end point of the line segment is the second key point.
- the length of the line segment is greater than the length from the target center point to the first key point on the ray.
- the length of the line segment is a preset length determined according to an empirical value. In the embodiment of the present disclosure, the length of the line segment is not specifically limited.
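Under the assumption that the target center point is the centroid of the first key points and that the ray extension uses a fixed scale factor (a stand-in for the empirically preset segment length), the second key points can be sketched as:

```python
def expand_key_points(first_points, scale=1.5):
    """Compute second key points on rays from the target center point
    through each first key point, at a distance greater than that of the
    first key point (scale > 1 is an assumed preset, not from the source).

    first_points: list of (x, y) positions of the first key points.
    Returns the target center point and the list of second key points.
    """
    cx = sum(x for x, _ in first_points) / len(first_points)
    cy = sum(y for _, y in first_points) / len(first_points)  # target center point
    second = []
    for x, y in first_points:
        # move outward along the ray: the distance to the center grows by `scale`,
        # so the first distance (second key point) exceeds the second distance
        second.append((cx + scale * (x - cx), cy + scale * (y - cy)))
    return (cx, cy), second
```

Because every second key point lies on the ray through its first key point at `scale` times the radius, the collinearity and distance conditions of the text hold by construction.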
- the process of determining the multiple second key points is similar to the process of determining the multiple third key points by the electronic device in step (A1) of step S601, and will not be repeated here.
- the electronic device determines the target area.
- the target area is determined based on a plurality of second key points.
- the electronic device sequentially connects the plurality of second key points, and uses the image area enclosed by the plurality of second key points as the target area.
- the electronic device uses a mesh algorithm to determine the mesh area corresponding to the plurality of second key points, and uses the mesh area as the target area. The process is: the electronic device uses the plurality of second key points as endpoints, and performs grid expansion on the target part corresponding to the plurality of first key points to obtain a grid area. The electronic device combines the obtained grid area with the area corresponding to the first key points to form the target area. Referring to FIG. 8, the electronic device uses the multiple second key points as endpoints and connects them to the multiple first key points to achieve grid expansion, obtaining a grid area in the form of triangles; this grid area and the area corresponding to the first key points constitute the target area.
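The grid expansion of FIG. 8 can be sketched by splitting the band between the ring of first key points and the ring of second key points into triangles. This assumes both rings are closed and ordered consistently; it illustrates the triangulation only, not the patented mesh algorithm:

```python
def build_mesh(first_ring, second_ring):
    """Triangulate the band between the inner ring (first key points) and
    the outer ring (second key points): each quad (f_i, f_j, s_j, s_i)
    is split into two triangles, giving a triangular grid area as in FIG. 8.

    Both rings are lists of (x, y) points of equal length, ordered so that
    first_ring[i] corresponds to second_ring[i].
    """
    n = len(first_ring)
    triangles = []
    for i in range(n):
        j = (i + 1) % n  # wrap around the closed ring
        f_i, f_j = first_ring[i], first_ring[j]
        s_i, s_j = second_ring[i], second_ring[j]
        triangles.append((f_i, f_j, s_i))  # triangle touching the inner edge
        triangles.append((f_j, s_j, s_i))  # triangle touching the outer edge
    return triangles
```

Each edge of the inner ring yields two triangular patches, so later per-patch pixel adjustment (as described for the diffusion filling below) has a well-defined local neighborhood.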
- the electronic device adjusts the shape of the first partial image based on the center point of the region and the first adjustment parameter.
- the first partial image is an image corresponding to the region.
- the first adjustment parameter is a parameter used to adjust the first partial image.
- the first adjustment parameter is a system default parameter, or the first adjustment parameter is a parameter generated based on user settings, or the first adjustment parameter is a parameter determined based on the second adjustment parameter of the third image.
- the first adjustment parameter includes at least an adjustment method and an adjustment intensity, as well as a color parameter, a brightness parameter, and the like.
- the electronic device adjusts the shape of the target part based on the adjustment method in the first adjustment parameter, the shape adjustment being an enlargement adjustment, a reduction adjustment, or the like, performed on the target part.
- the first adjustment parameter is an adjustment parameter input by the user and received by the electronic device.
- the first adjustment parameter is an adjustment parameter set based on different target parts in the electronic device. For example, if the target part is the eyes, the adjustment parameter is to enlarge the eyes; if the target part is the eyebrows, the adjustment parameter is to narrow the eyebrows.
- the adjustment of the eyes is a magnification adjustment, and the adjustment of the eyebrows is a reduction adjustment.
- the electronic device adjusts the shape of the first partial image, so as to realize the enlargement or reduction of the target part.
- the electronic device determines the relationship between pixels in the first partial image according to a grid algorithm, and adjusts the positions of the pixels in the first partial image.
- the electronic device adjusts the target part through a liquefaction algorithm.
- the process is: the electronic device determines the center position corresponding to the target part and draws a circle centered on that position, such that the first partial image corresponding to the target part falls within the circle; the adjustment of the first partial image corresponding to the target part is then achieved by changing the size of the circle.
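A liquify-style radial warp of the kind described can be sketched per pixel position; `strength` is an assumed stand-in for the first adjustment parameter (negative values shrink the target part, positive values enlarge it), and the falloff curve is an illustrative choice:

```python
import math

def liquify_point(px, py, cx, cy, radius, strength):
    """Radially displace one pixel position inside a circle of the given
    radius around the center (cx, cy); points outside the circle, or at
    its exact center, are left unchanged."""
    dx, dy = px - cx, py - cy
    r = math.hypot(dx, dy)
    if r >= radius or r == 0:
        return (px, py)  # outside the warp circle, or exactly at the center
    falloff = (1 - r / radius) ** 2   # strongest near the center, zero at the boundary
    factor = 1 + strength * falloff   # > 1 pushes outward, < 1 pulls inward
    return (cx + dx * factor, cy + dy * factor)
```

The squared falloff keeps the circle boundary fixed, so the warp blends smoothly into the surrounding image; other monotone falloff curves would serve equally well.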
- the electronic device determines a first adjustment parameter for shape adjustment, and performs shape adjustment on the first partial image based on the first adjustment parameter.
- the process of the electronic device performing shape adjustment on the first partial image based on the first adjustment parameter is implemented through the following steps (1)-(2), including:
- the electronic device determines the first adjustment parameter.
- the electronic device determines the current adjustment parameter as the first adjustment parameter. In some embodiments, when the target part is not blocked, the electronic device determines the current adjustment parameter as the first adjustment parameter. When the target part is blocked, the electronic device determines the first adjustment parameter based on the second adjustment parameter of the previous frame of image.
- the process of determining the first adjustment parameter is: in response to the target part being occluded, determining the second adjustment parameter, the second adjustment parameter being a parameter for adjusting the third image; determining the first adjustment parameter, The first adjustment parameter is obtained by adjusting the second adjustment parameter based on the preset amplitude.
- the second adjustment parameter is an adjustment parameter used by the electronic device when adjusting the third image.
- the second adjustment parameter is a system default adjustment parameter, or an adjustment parameter generated based on user settings.
- in this way, the adjustment of the image is smoother, and sudden changes are prevented when the image is adjusted.
- the target part in the image may be occluded.
- in response to this, the electronic device performs fault-tolerant processing on the adjustment of the target part.
- the electronic device uses a target number of image frames as the longest tolerated delay.
- in the case of missing key points, the adjustment of the previous frame can be continued to adjust the target part of the current frame, with the adjustment amplitude gradually weakened, and the first adjustment parameter is restored to the original adjustment parameter once the first key points are no longer missing.
- the electronic device determines the number of frames, which is the number of consecutive frames of the image in which the target part is blocked; in response to the number of frames reaching the target frame number, it stops performing image processing on the next frame of image.
- the first adjustment parameter can be gradually restored to the second adjustment parameter. For example, in response to the current first image being an image missing fourth key points, the electronic device adds one to the number of consecutive frames with missing key points: if the current frame number is n, then in response to the first image missing multiple fourth key points, the electronic device updates the frame number to n+1. In response to the current first image not being an image missing fourth key points, the electronic device clears the count of consecutive frames with missing key points. The electronic device then increases the current adjustment parameter based on the preset amplitude until the adjustment parameter is restored to the second adjustment parameter.
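The bookkeeping described here, counting consecutive occluded frames, weakening the adjustment, stopping at the target frame number, and restoring the parameter once key points return, can be sketched as a small state holder. All names and the linear fade step are assumptions, not details from the source:

```python
class OcclusionTracker:
    """Track consecutive occluded frames and fade the adjustment
    parameter, as in the fault-tolerant processing described above."""

    def __init__(self, base_param, step, max_frames):
        self.base_param = base_param  # second adjustment parameter (original)
        self.param = base_param       # current (first) adjustment parameter
        self.step = step              # preset amplitude per frame
        self.max_frames = max_frames  # target frame number
        self.frames = 0               # consecutive frames with missing key points

    def update(self, occluded):
        """Advance one frame; returns the adjustment parameter to use,
        or None once processing should stop for subsequent frames."""
        if occluded:
            self.frames += 1          # n -> n + 1 on a missing-key-point frame
            if self.frames >= self.max_frames:
                return None           # stop performing image processing
            self.param = max(0.0, self.param - self.step)  # weaken the adjustment
        else:
            self.frames = 0           # clear the consecutive-frame count
            self.param = min(self.base_param, self.param + self.step)  # restore
        return self.param
```

Fading rather than switching the parameter off keeps the rendered target part from jumping when occlusion begins or ends.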
- the electronic device adjusts each pixel in the first partial image based on the center point and the first adjustment parameter.
- the electronic device keeps the position of the center point unchanged, and adjusts the position of the pixel point in the first partial image based on the first adjustment parameter to realize the shape adjustment of the first partial image.
- the electronic device adjusts the position of each pixel in the first partial image based on the first adjustment parameter and the center point, so as to adjust the target part while preventing adjustment of areas other than the target part, improving the accuracy of the adjustment.
- in step S604, the electronic device diffuses and fills the pixels in the second partial image into the second partial image to obtain the second image.
- the second partial image is the image in the target area other than the first partial image.
- the electronic device adjusts the target part, so that the position of the pixel in the first partial image is changed, causing a deformed area in the target area.
- the electronic device adjusts the positions of the pixels in the target area following the first partial image to achieve a smooth transition of the adjustment of the target part to other areas than the first partial image and prevent sudden changes in the image.
- the following steps (A1)-(A3) are used to achieve diffusion and filling of pixels, including:
- the electronic device determines a first movement direction, the first movement direction being close to the center point.
- in response to the first partial image being reduced, the moving direction of the pixels in the second partial image is consistent with the moving direction during the shrinking of the first partial image, so the moving direction of the pixels in the second partial image is determined to be the direction toward the center point.
- the electronic device determines the first movement distance based on the first adjustment parameter.
- the electronic device determines, based on the first adjustment parameter, the distortion area that may be produced by reducing the first partial image, and uses the distortion area to determine the first movement distance by which the pixels in the second partial image need to move in order to fill the distorted region.
- the electronic device determines a second image, and the second image is obtained by moving pixels in the second partial image by the first moving distance in the first moving direction.
- the process of adjusting the first partial image and the second partial image by the electronic device is performed synchronously, or the first partial image is adjusted first, and then the second partial image is adjusted.
- the electronic device adjusts the first partial image and the second partial image synchronously, the first movement distance is determined directly based on the first adjustment parameter and through a predetermined movement distance algorithm.
- the electronic device moves the pixels in the second partial image and the pixels in the first partial image to implement image adjustment of the first image.
- the electronic device uses the triangular patch corresponding to each of the plurality of first key points, and adjusts the positions of the pixels in the triangular patches corresponding to the second partial image based on the movement distance and movement direction of the first key points; that is, the pixels in each triangular patch are adjusted based on the movement distance and movement direction of the corresponding first key point.
- the moving distance and moving direction of each pixel may be the same or different. If the moving distance and moving direction of each pixel are the same, the electronic device directly determines them by averaging over the number of pixels; if they are different, the electronic device determines the moving distance and moving direction separately for the different pixels.
- steps (B1)-(B3) are used to achieve diffusion and filling of pixels, including:
- the electronic device determines a second moving direction, the second moving direction being away from the center point.
- in response to the first partial image being enlarged, the moving direction of the pixels in the second partial image is the same as the moving direction during the enlargement of the first partial image, so the moving direction of the pixels in the second partial image is determined to be the direction away from the center point.
- the electronic device determines the second moving distance based on the first adjustment parameter.
- This step is similar to step (A2) in S605, and will not be repeated here.
- the electronic device determines the second image, and the second image is obtained by moving pixels in the second partial image by the second moving distance in the second moving direction.
- This step is similar to step (A3) in S605, and will not be repeated here.
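Steps (A1)-(A3) and (B1)-(B3) differ only in the sign of the radial movement, so both can be sketched with one helper; `distance` stands in for the first or second movement distance derived from the first adjustment parameter:

```python
import math

def fill_pixel(px, py, cx, cy, distance, shrink=True):
    """Move one pixel of the second partial image along the line through
    the center point: toward the center when the first partial image was
    reduced (steps A1-A3), away from it when it was enlarged (B1-B3)."""
    dx, dy = px - cx, py - cy
    r = math.hypot(dx, dy)
    if r == 0:
        return (px, py)               # a pixel at the center point does not move
    sign = -1.0 if shrink else 1.0    # toward vs. away from the center point
    ux, uy = dx / r, dy / r           # unit direction away from the center
    return (px + sign * distance * ux, py + sign * distance * uy)
```

Applying this to every pixel of the second partial image moves the surrounding region in step with the warped first partial image, giving the smooth transition and preventing the sudden changes described above.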
Claims (34)
- 1. An image processing method, characterized in that the method comprises: determining a plurality of first key points in a first image, the plurality of first key points being key points of a target part; determining a target area of the first image, the target area being obtained by expanding an area corresponding to the plurality of first key points; and acquiring a second image, the second image being obtained by adjusting positions of pixels in a first partial image and a second partial image, the adjustment being based on a center point of the area and a first adjustment parameter, the first partial image being an image corresponding to the area, and the second partial image being an image in the target area other than the first partial image.
- 2. The method according to claim 1, characterized in that the acquiring a second image comprises: adjusting a shape of the first partial image based on the center point of the area and the first adjustment parameter; and diffusing and filling pixels in the second partial image into the second partial image to obtain the second image.
- 3. The method according to claim 2, characterized in that the diffusing and filling pixels in the second partial image into the second partial image to obtain the second image comprises: in response to the first partial image being reduced, determining a first movement direction, the first movement direction being toward the center point; determining a first movement distance based on the first adjustment parameter; and determining the second image, the second image being obtained by moving the pixels in the second partial image by the first movement distance in the first movement direction.
- 4. The method according to claim 2, characterized in that the diffusing and filling pixels in the second partial image into the second partial image to obtain the second image comprises: in response to the first partial image being enlarged, determining a second movement direction, the second movement direction being away from the center point; determining a second movement distance based on the first adjustment parameter; and determining the second image, the second image being obtained by moving the pixels in the second partial image by the second movement distance in the second movement direction.
- 5. The method according to claim 1, characterized in that the determining a target area of the first image comprises: determining a target center point, the target center point being obtained based on the plurality of first key points; for each first key point, determining a second key point, wherein the second key point, the first key point, and the target center point are on a same straight line, and a first distance is greater than a second distance, the first distance being the distance between the second key point and the target center point, and the second distance being the distance between the first key point and the target center point; and determining the target area, the target area being determined based on a plurality of second key points.
- 6. The method according to claim 5, characterized in that the determining a target center point comprises either of: determining a center point of the plurality of first key points as the target center point; or determining a center point of part of the first key points as the target center point, the part of the first key points being located in a central area of the area.
- 7. The method according to claim 1, characterized in that the determining a plurality of first key points in a first image comprises: determining a plurality of third key points in a third image, the plurality of third key points being key points of the target part, and the third image being a previous frame of the first image; determining a plurality of fourth key points in the first image, the plurality of fourth key points being key points of the target part, and the fourth key points being determined by a key point determination model; and determining the plurality of first key points, the plurality of first key points being determined based on the plurality of third key points and the plurality of fourth key points.
- 8. The method according to claim 7, characterized in that the determining the plurality of first key points comprises: for each fourth key point, determining a first target key point among the plurality of third key points, the first target key point having a same pixel value as the fourth key point; determining an average position of a first position and a second position, the first position being a position of the first target key point, and the second position being a position of the fourth key point; and acquiring the first key point, the first key point being obtained by rendering the pixel value of the fourth key point to the average position.
- 9. The method according to claim 7, characterized in that the determining the plurality of first key points comprises either of: in response to the target part being occluded, determining a second target key point among the plurality of third key points, the second target key point being a key point corresponding to the occluded target part, and acquiring the plurality of first key points, the plurality of first key points being composed based on the second target key point and the plurality of fourth key points; or, in response to the target part being occluded, using the plurality of third key points as the plurality of first key points.
- 10. The method according to claim 7, characterized in that the method further comprises: in response to the target part being occluded, determining a second adjustment parameter, the second adjustment parameter being a parameter for adjusting the third image; and determining the first adjustment parameter, the first adjustment parameter being obtained by adjusting the second adjustment parameter based on a preset amplitude.
- 11. The method according to claim 10, characterized in that the method further comprises: determining a frame number, the frame number being a number of consecutive frames of images in which the target part is occluded; and in response to the frame number reaching a target frame number, stopping image processing on a next frame of image.
- 12. An image processing apparatus, characterized in that the apparatus comprises: a first determination module configured to determine a plurality of first key points in a first image, the plurality of first key points being key points of a target part; a second determination module configured to determine a target area of the first image, the target area being obtained by expanding an area corresponding to the plurality of first key points; and an image acquisition module configured to acquire a second image, the second image being obtained by adjusting positions of pixels in a first partial image and a second partial image, the adjustment being based on a center point of the area and a first adjustment parameter, the first partial image being an image corresponding to the area, and the second partial image being an image in the target area other than the first partial image.
- 13. The apparatus according to claim 12, characterized in that the image acquisition module comprises: a shape adjustment unit configured to adjust a shape of the first partial image based on the center point of the area and the first adjustment parameter; and a filling unit configured to diffuse and fill pixels in the second partial image into the second partial image to obtain the second image.
- 14. The apparatus according to claim 13, characterized in that the filling unit is configured to: in response to the first partial image being reduced, determine a first movement direction, the first movement direction being toward the center point; determine a first movement distance based on the first adjustment parameter; and determine the second image, the second image being obtained by moving the pixels in the second partial image by the first movement distance in the first movement direction.
- 15. The apparatus according to claim 13, characterized in that the filling unit is configured to: in response to the first partial image being enlarged, determine a second movement direction, the second movement direction being away from the center point; determine a second movement distance based on the first adjustment parameter; and determine the second image, the second image being obtained by moving the pixels in the second partial image by the second movement distance in the second movement direction.
- 16. The apparatus according to claim 12, characterized in that the second determination module comprises: a first determination unit configured to determine a target center point, the target center point being obtained based on the plurality of first key points; a second determination unit configured to, for each first key point, determine a second key point, wherein the second key point, the first key point, and the target center point are on a same straight line, and a first distance is greater than a second distance, the first distance being the distance between the second key point and the target center point, and the second distance being the distance between the first key point and the target center point; and a third determination unit configured to determine the target area, the target area being determined based on a plurality of second key points.
- 17. The apparatus according to claim 16, characterized in that the first determination unit is configured to determine a center point of the plurality of first key points as the target center point; or the first determination unit is configured to determine a center point of part of the first key points as the target center point, the part of the first key points being located in a central area of the area.
- 18. The apparatus according to claim 12, characterized in that the first determination module comprises: a fourth determination unit configured to determine a plurality of third key points in a third image, the plurality of third key points being key points of the target part, and the third image being a previous frame of the first image; a fifth determination unit configured to determine a plurality of fourth key points in the first image, the plurality of fourth key points being key points of the target part, and the fourth key points being determined by a key point determination model; and a sixth determination unit configured to determine the plurality of first key points, the plurality of first key points being determined based on the plurality of third key points and the plurality of fourth key points.
- 19. The apparatus according to claim 18, characterized in that the sixth determination unit is configured to: for each fourth key point, determine a first target key point among the plurality of third key points, the first target key point having a same pixel value as the fourth key point; determine an average position of a first position and a second position, the first position being a position of the first target key point, and the second position being a position of the fourth key point; and acquire the first key point, the first key point being obtained by rendering the pixel value of the fourth key point to the average position.
- 20. The apparatus according to claim 18, characterized in that the sixth determination unit is configured to: in response to the target part being occluded, determine a second target key point among the plurality of third key points, the second target key point being a key point corresponding to the occluded target part, and acquire the plurality of first key points, the plurality of first key points being composed based on the second target key point and the plurality of fourth key points; or the sixth determination unit is configured to, in response to the target part being occluded, use the plurality of third key points as the plurality of first key points.
- 21. The apparatus according to claim 18, characterized in that the apparatus further comprises: a third determination module configured to, in response to the target part being occluded, determine a second adjustment parameter, the second adjustment parameter being a parameter for adjusting the third image; and a fourth determination module configured to determine the first adjustment parameter, the first adjustment parameter being obtained by adjusting the second adjustment parameter based on a preset amplitude.
- 22. The apparatus according to claim 21, characterized in that the apparatus further comprises: a fifth determination module configured to determine a frame number, the frame number being a number of consecutive frames of images in which the target part is occluded; and an image processing module further configured to, in response to the frame number reaching a target frame number, stop image processing on a next frame of image.
- 23. An electronic device, characterized in that the electronic device comprises: one or more processors; and a volatile or non-volatile memory for storing instructions executable by the one or more processors; wherein the one or more processors are configured to perform the following steps: determining a plurality of first key points in a first image, the plurality of first key points being key points of a target part; determining a target area of the first image, the target area being obtained by expanding an area corresponding to the plurality of first key points; and acquiring a second image, the second image being obtained by adjusting positions of pixels in a first partial image and a second partial image, the adjustment being based on a center point of the area and a first adjustment parameter, the first partial image being an image corresponding to the area, and the second partial image being an image in the target area other than the first partial image.
- 24. The electronic device according to claim 23, characterized in that the one or more processors are configured to perform the following steps: adjusting a shape of the first partial image based on the center point of the area and the first adjustment parameter; and diffusing and filling pixels in the second partial image into the second partial image to obtain the second image.
- 25. The electronic device according to claim 24, characterized in that the one or more processors are configured to perform the following steps: in response to the first partial image being reduced, determining a first movement direction, the first movement direction being toward the center point; determining a first movement distance based on the first adjustment parameter; and determining the second image, the second image being obtained by moving the pixels in the second partial image by the first movement distance in the first movement direction.
- 26. The electronic device according to claim 24, characterized in that the one or more processors are configured to perform the following steps: in response to the first partial image being enlarged, determining a second movement direction, the second movement direction being away from the center point; determining a second movement distance based on the first adjustment parameter; and determining the second image, the second image being obtained by moving the pixels in the second partial image by the second movement distance in the second movement direction.
- 27. The electronic device according to claim 23, characterized in that the one or more processors are configured to perform the following steps: determining a target center point, the target center point being obtained based on the plurality of first key points; for each first key point, determining a second key point, wherein the second key point, the first key point, and the target center point are on a same straight line, and a first distance is greater than a second distance, the first distance being the distance between the second key point and the target center point, and the second distance being the distance between the first key point and the target center point; and determining the target area, the target area being determined based on a plurality of second key points.
- 28. The electronic device according to claim 27, characterized in that the one or more processors are configured to perform at least one of the following steps: determining a center point of the plurality of first key points as the target center point; or determining a center point of part of the first key points as the target center point, the part of the first key points being located in a central area of the area.
- 29. The electronic device according to claim 23, characterized in that the one or more processors are configured to perform the following steps: determining a plurality of third key points in a third image, the plurality of third key points being key points of the target part, and the third image being a previous frame of the first image; determining a plurality of fourth key points in the first image, the plurality of fourth key points being key points of the target part, and the fourth key points being determined by a key point determination model; and determining the plurality of first key points, the plurality of first key points being determined based on the plurality of third key points and the plurality of fourth key points.
- 30. The electronic device according to claim 29, characterized in that the one or more processors are configured to perform the following steps: for each fourth key point, determining a first target key point among the plurality of third key points, the first target key point having a same pixel value as the fourth key point; determining an average position of a first position and a second position, the first position being a position of the first target key point, and the second position being a position of the fourth key point; and acquiring the first key point, the first key point being obtained by rendering the pixel value of the fourth key point to the average position.
- 31. The electronic device according to claim 29, characterized in that the one or more processors are configured to perform at least one of the following steps: in response to the target part being occluded, determining a second target key point among the plurality of third key points, the second target key point being a key point corresponding to the occluded target part, and acquiring the plurality of first key points, the plurality of first key points being composed based on the second target key point and the plurality of fourth key points; or, in response to the target part being occluded, using the plurality of third key points as the plurality of first key points.
- 32. The electronic device according to claim 29, characterized in that the one or more processors are configured to perform the following steps: in response to the target part being occluded, determining a second adjustment parameter, the second adjustment parameter being a parameter for adjusting the third image; and determining the first adjustment parameter, the first adjustment parameter being obtained by adjusting the second adjustment parameter based on a preset amplitude.
- 33. The electronic device according to claim 32, characterized in that the one or more processors are configured to perform the following steps: determining a frame number, the frame number being a number of consecutive frames of images in which the target part is occluded; and in response to the frame number reaching a target frame number, stopping image processing on a next frame of image.
- 34. A computer-readable storage medium, characterized in that the computer-readable storage medium stores instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the following steps: determining a plurality of first key points in a first image, the plurality of first key points being key points of a target part; determining a target area of the first image, the target area being obtained by expanding an area corresponding to the plurality of first key points; and acquiring a second image, the second image being obtained by adjusting positions of pixels in a first partial image and a second partial image, the adjustment being based on a center point of the area and a first adjustment parameter, the first partial image being an image corresponding to the area, and the second partial image being an image in the target area other than the first partial image.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022549510A JP2023514342A (ja) | 2020-04-30 | 2020-11-17 | Image processing method and image processing apparatus |
US17/822,493 US20220405879A1 (en) | 2020-04-30 | 2022-08-26 | Method for processing images and electronic device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010364388.9A CN113596314B (zh) | 2020-04-30 | 2020-04-30 | Image processing method and apparatus, and electronic device |
CN202010364388.9 | 2020-04-30 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/822,493 Continuation US20220405879A1 (en) | 2020-04-30 | 2022-08-26 | Method for processing images and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021218118A1 true WO2021218118A1 (zh) | 2021-11-04 |
Family
ID=78237279
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/129503 WO2021218118A1 (zh) | 2020-04-30 | 2020-11-17 | Image processing method and apparatus |
Country Status (4)
Country | Link |
---|---|
US (1) | US20220405879A1 (zh) |
JP (1) | JP2023514342A (zh) |
CN (1) | CN113596314B (zh) |
WO (1) | WO2021218118A1 (zh) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN105787878A (zh) * | 2016-02-25 | 2016-07-20 | 杭州格像科技有限公司 | Beauty processing method and apparatus |
- CN108389155A (zh) * | 2018-03-20 | 2018-08-10 | 北京奇虎科技有限公司 | Image processing method and apparatus, and electronic device |
- CN108564531A (zh) * | 2018-05-08 | 2018-09-21 | 麒麟合盛网络技术股份有限公司 | Image processing method and apparatus |
- CN109376671A (zh) * | 2018-10-30 | 2019-02-22 | 北京市商汤科技开发有限公司 | Image processing method, electronic device, and computer-readable medium |
- CN109584151A (zh) * | 2018-11-30 | 2019-04-05 | 腾讯科技(深圳)有限公司 | Face beautification method and apparatus, terminal, and storage medium |
- CN109584153A (zh) * | 2018-12-06 | 2019-04-05 | 北京旷视科技有限公司 | Method, apparatus, and system for retouching eyes |
-
2020
- 2020-04-30 CN CN202010364388.9A patent/CN113596314B/zh active Active
- 2020-11-17 JP JP2022549510A patent/JP2023514342A/ja active Pending
- 2020-11-17 WO PCT/CN2020/129503 patent/WO2021218118A1/zh active Application Filing
-
2022
- 2022-08-26 US US17/822,493 patent/US20220405879A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN113596314A (zh) | 2021-11-02 |
US20220405879A1 (en) | 2022-12-22 |
CN113596314B (zh) | 2022-11-11 |
JP2023514342A (ja) | 2023-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- CN109712224B (zh) | Virtual scene rendering method and apparatus, and smart device | |
- US11517099B2 (en) | Method for processing images, electronic device, and storage medium | |
- WO2021008456A1 (zh) | Image processing method and apparatus, electronic device, and storage medium | |
- CN108449641B (zh) | Method and apparatus for playing media stream, computer device, and storage medium | |
- CN110427110B (zh) | Live streaming method and apparatus, and live streaming server | |
- CN111324250B (zh) | Method, apparatus, and device for adjusting a three-dimensional figure, and readable storage medium | |
- US11962897B2 (en) | Camera movement control method and apparatus, device, and storage medium | |
- CN111028144B (zh) | Video face swapping method and apparatus, and storage medium | |
- WO2022134632A1 (zh) | Work processing method and apparatus | |
- CN110853128B (zh) | Virtual object display method and apparatus, computer device, and storage medium | |
- CN112565806B (zh) | Virtual gift giving method and apparatus, computer device, and medium | |
- CN110839174A (zh) | Image processing method and apparatus, computer device, and storage medium | |
- CN111586444B (zh) | Video processing method and apparatus, electronic device, and storage medium | |
- CN110728744B (zh) | Volume rendering method and apparatus, and smart device | |
- CN110992268A (zh) | Background setting method and apparatus, terminal, and storage medium | |
- WO2021218926A1 (zh) | Image display method and apparatus, and computer device | |
- CN115904079A (zh) | Display device adjustment method and apparatus, terminal, and storage medium | |
- WO2021218118A1 (zh) | Image processing method and apparatus | |
- CN112967261B (zh) | Image fusion method and apparatus, device, and storage medium | |
- CN109685881B (zh) | Volume rendering method and apparatus, and smart device | |
- CN111757146B (zh) | Video stitching method and system, and storage medium | |
- CN108881715B (zh) | Method and apparatus for enabling a shooting mode, terminal, and storage medium | |
- CN108881739B (zh) | Image generation method and apparatus, terminal, and storage medium | |
- CN112381729A (zh) | Image processing method and apparatus, terminal, and storage medium | |
- CN110942426A (zh) | Image processing method and apparatus, computer device, and storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20933277; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2022549510; Country of ref document: JP; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20.03.2023) |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20933277; Country of ref document: EP; Kind code of ref document: A1 |