CN114418865A - Image processing method, device, equipment and storage medium


Info

Publication number
CN114418865A
Authority
CN
China
Prior art keywords
image, target, processed, target object, processing
Prior art date
Legal status
Pending
Application number
CN202011173511.5A
Other languages
Chinese (zh)
Inventor
俞盼 (Yu Pan)
吴岳霖 (Wu Yuelin)
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202011173511.5A
Publication of CN114418865A
Legal status: Pending

Classifications

    • G06T5/77
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/40Filling a planar surface by adding surface attributes, e.g. colour or texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches

Abstract

The present disclosure relates to an image processing method, apparatus, device, and storage medium, the method comprising: acquiring a target image to be processed; determining a face region and a target region in the target image, wherein the target region comprises the face region and a peripheral preset region; in response to detecting that a target object in the target region satisfies a processing condition, processing the target object; and determining a processed target image based on the processed face region and the processed target object. The method and the device can avoid distortion and deformation of the target region caused by the processing of the face region, thereby avoiding image distortion and improving the processing quality of the image.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to an image processing method, an image processing apparatus, an image processing device, and a storage medium.
Background
With the popularization of terminal devices such as smart phones, more and more users capture images with their mobile phones. Because users want their images to look attractive, many of them process the captured images, and many beautifying applications have emerged to address this need. Nowadays, most terminal-device cameras include a built-in beautifying function that processes the user's image during shooting.
However, image processing algorithms in the related art generally process only the user's face, which can distort the surrounding image content and degrade the quality of image processing.
Disclosure of Invention
To overcome the problems in the related art, embodiments of the present disclosure provide an image processing method, an image processing apparatus, an image processing device, and a storage medium, which are used to solve the defects in the related art.
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, the method including:
acquiring a target image to be processed;
determining a target area in the target image, wherein the target area comprises a face area and a peripheral preset area;
in response to detecting that a target object in the target area satisfies a processing condition, processing the target object;
and determining a processed target image based on the processed face area and the processed target object.
In an embodiment, the target object satisfies a processing condition including at least one of:
the position of the target object in the target image meets a set position condition;
the proportion of the target object in the target image meets a set proportion condition.
In an embodiment, the target object comprises a hand;
the processing the target object comprises:
determining characteristic information of the hand;
searching a sample image matched with the hand in a preset hand image library based on the characteristic information;
and processing the hand based on the sample image to obtain a processed target object.
In one embodiment, the characteristic information includes a target ratio of the palm and the fingers;
the searching for the sample image matched with the hand in a preset hand image library based on the characteristic information comprises:
and searching a sample image with the palm and finger ratio closest to the target ratio in the preset hand image library.
In one embodiment, the processing the hand based on the sample image comprises:
adjusting a target ratio of the palm and fingers in the hand based on the ratio of the palm and fingers in the sample image.
In one embodiment, the processing the hand based on the sample image comprises:
acquiring a first image parameter of the face region and a second image parameter of the sample image, wherein the first image parameter and the second image parameter comprise at least one of color and texture;
processing image parameters of the hand based on the first image parameters and the second image parameters.
In one embodiment, the determining a processed target image based on the processed face region and the processed target object includes:
determining a second positional relationship between the processed face region and the processed target object based on a first positional relationship between the face region and the target object in the target image;
and adjusting the position relation between the first image layer corresponding to the processed face area and the second image layer corresponding to the processed target object based on the second position relation to obtain a processed target image.
In an embodiment, the method further comprises:
and in response to the fact that a blank area exists between the adjusted first image layer and the adjusted second image layer, filling the blank area by adopting a preset image filling mode.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, the apparatus including:
the target image acquisition module is used for acquiring a target image to be processed;
a target area determination module, configured to determine a target area in the target image, where the target area includes a face area and a peripheral preset area;
the target object processing module is used for processing the target object in response to the fact that the target object in the target area meets the processing condition;
and the processed image determining module is used for determining a processed target image based on the processed face area and the processed target object.
In an embodiment, the target object satisfies a processing condition including at least one of:
the position of the target object in the target image meets a set position condition;
the proportion of the target object in the target image meets a set proportion condition.
In one embodiment, the target object comprises a hand;
the target object processing module comprises:
a feature information determination unit configured to determine feature information of the hand;
the sample image searching unit is used for searching a sample image matched with the hand in a preset hand image library based on the characteristic information;
a target object processing unit for processing the hand based on the sample image.
In one embodiment, the characteristic information includes a target ratio of the palm and the fingers;
the sample image searching unit is further used for searching a sample image with the palm and finger ratio closest to the target ratio in the preset hand image library.
In an embodiment, the target object processing unit is further configured to adjust the target ratio of the palm and the fingers in the hand based on the ratio of the palm and the fingers in the sample image.
In one embodiment, the target object processing unit is further configured to:
acquiring a first image parameter of the face region and a second image parameter of the sample image, wherein the first image parameter and the second image parameter comprise at least one of color and texture;
processing image parameters of the hand based on the first image parameters and the second image parameters.
In one embodiment, the processed image determining module comprises:
a position relation determining unit configured to determine a second positional relationship between the processed face region and the processed target object based on a first positional relationship between the face region and the target object in the target image;
and the processed image determining unit is used for adjusting the position relationship between the first image layer corresponding to the processed face area and the second image layer corresponding to the processed target object based on the second position relationship to obtain a processed target image.
In one embodiment, the processed image determining module further comprises:
and the blank area filling unit is used for responding to the fact that a blank area exists between the first image layer and the second image layer after the adjustment, and filling the blank area by adopting a preset image filling mode.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic apparatus, the apparatus comprising:
a processor and a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a target image to be processed;
determining a face region and a target region in the target image, wherein the target region comprises the face region and a peripheral preset region;
in response to detecting that a target object in the target area satisfies a processing condition, processing the target object;
and determining a processed target image based on the processed face area and the processed target object.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements:
acquiring a target image to be processed;
determining a face region and a target region in the target image, wherein the target region comprises the face region and a peripheral preset region;
in response to detecting that a target object in the target area satisfies a processing condition, processing the target object;
and determining a processed target image based on the processed face area and the processed target object.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the method and the device, the target image to be processed is acquired, the face area and the target area in the target image are determined, then the target object is processed in response to the fact that the target object in the target area meets the processing condition, the processed target image is determined based on the processed face area and the processed target object, the problem that the target area is distorted and deformed due to the fact that the face area is processed can be avoided, image distortion can be avoided, and the processing quality of the image can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment;
FIG. 2 is a flow chart illustrating how the target object is processed according to an exemplary embodiment;
FIG. 3 is a flow diagram illustrating how the target object is processed based on the sample image in accordance with an exemplary embodiment;
FIG. 4 is a flow diagram illustrating how a processed target image may be determined based on a processed face region and a processed target object in accordance with an exemplary embodiment;
FIG. 5 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment;
fig. 6 is a block diagram illustrating an image processing apparatus according to yet another exemplary embodiment;
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatuses and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment; the method of the embodiment can be applied to a terminal device (e.g., a smart phone, a tablet computer, a notebook computer, or a wearable device) with an image processing function.
As shown in fig. 1, the method comprises the following steps S101-S104:
in step S101, a target image to be processed is acquired.
In this embodiment, after a user acquires a target image through an image acquisition device (e.g., a camera) of a terminal device, the terminal device may acquire the target image through its own image processing device.
The target image may be a preview image acquired by the terminal device in the process of acquiring the user image, or may be an image to be processed selected by the user from an electronic album stored in the terminal device, which is not limited in this embodiment.
In step S102, a face region and a target region in the target image are determined.
In this embodiment, after acquiring a target image to be processed, a terminal device may determine a face region and a target region in the target image, where the target region includes the face region and a peripheral preset region, and the face region may be a region in the target image corresponding to a face of a user.
For example, after the target image is obtained, face recognition may be performed on the target image, and the area in which a face is recognized may be determined as the face area; further, the face area together with its peripheral preset area (e.g., an area of a preset size around the face area) may be determined as the target area.
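As a minimal sketch of how steps S101-S102 could be realized (the Haar-cascade detector and the 0.5 expansion factor are illustrative assumptions, not part of the present disclosure):

```python
import cv2

def detect_face_and_target_region(image, expand=0.5):
    """Detect a face region and expand it into the target region (sketch)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, None
    x, y, w, h = faces[0]                      # face region bounding box
    dx, dy = int(w * expand), int(h * expand)  # "peripheral preset region"
    img_h, img_w = image.shape[:2]
    target = (max(0, x - dx), max(0, y - dy),
              min(img_w, x + w + dx), min(img_h, y + h + dy))
    return (x, y, x + w, y + h), target
```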
In step S103, in response to detecting that a target object in the target area satisfies a processing condition, the target object is processed.
In this embodiment, after the face region and the target region in the target image are determined, it may be detected whether a target object (e.g., a hand of the user, or tableware such as chopsticks, knives, or forks) exists in the target region; when the target object exists, it may further be detected whether it satisfies a processing condition. On this basis, when the target object is determined to meet the processing condition, it is processed to obtain a processed target object.
In an embodiment, the target object satisfies a processing condition, and may include at least one of:
(1) the position of the target object in the target image meets a set position condition, such as the position of the target object in the central area of the image or in the preset areas of the front of the face or the two sides of the chin;
(2) the proportion of the target object in the target image meets a set proportion condition, for example, the proportion of the target object in the target image is greater than or equal to a set proportion threshold value.
For example, after the face area in the target image is determined, the face area may be beautified based on a preset beautifying method. The preset beautifying method may be set based on actual needs, for example, face thinning, whitening, acne removal, eye enlargement, skin smoothing, and the like, which is not limited in this embodiment. On the other hand, after the face region is determined and the target region is determined based on it, whether the target object in the target region satisfies the processing condition may be detected, and the target object is processed when the condition is satisfied. Taking a long strip-shaped object such as a chopstick as an example: when it is detected that the chopstick satisfies a processing condition, for example, it is located in a preset area such as in front of the face or on either side of the chin and its proportion in the target image is greater than a set proportion threshold (e.g., 10%), the chopstick bent by the face processing can be restored, according to the conventional appearance of the object (a straight strip), to the shape it presented before the face was beautified.
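A minimal sketch of this condition check, assuming a bounding box for the target object; the central-area bounds and the 10% default threshold are taken from the example above rather than fixed by the disclosure:

```python
def satisfies_processing_condition(obj_box, image_shape, ratio_threshold=0.10):
    """Check the two 'at least one of' conditions of step S103 (sketch)."""
    img_h, img_w = image_shape[:2]
    x0, y0, x1, y1 = obj_box
    # (1) set position condition: object center lies in the middle of the frame
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    in_center = (img_w * 0.25 <= cx <= img_w * 0.75
                 and img_h * 0.25 <= cy <= img_h * 0.75)
    # (2) set proportion condition: object area ratio reaches the threshold
    ratio = (x1 - x0) * (y1 - y0) / float(img_w * img_h)
    return in_center or ratio >= ratio_threshold
```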
In another embodiment, the above-mentioned manner of processing the target object can be referred to the following embodiment shown in fig. 2, and will not be described in detail here.
In step S104, a processed target image is determined based on the processed face region and the processed target object.
In this embodiment, after processing the target object in response to detecting that the target object in the target area satisfies the processing condition, the processed target image may be determined based on the processed face area and the processed target object.
For example, after the target object in the target region is processed, the processed face region and the processed target object may be replaced on the basis of the original target image to obtain a processed target image.
In another embodiment, the above-mentioned manner of determining the processed target image can be referred to in the following embodiment shown in fig. 4, and will not be described in detail here.
As can be seen from the above description, the method of this embodiment acquires a target image to be processed, determines a face region and a target region in the target image, processes a target object in the target region in response to detecting that the target object satisfies a processing condition, and determines the processed target image based on the processed face region and the processed target object. This avoids distortion of the target region caused by the processing of the face region, avoids image distortion, and improves the processing quality of the image.
FIG. 2 is a flow chart illustrating how the target object is processed according to an exemplary embodiment; on the basis of the above-described embodiments, the present embodiment illustrates how the target object is processed. The target object may include a hand. On this basis, as shown in fig. 2, the processing of the target object in step S103 may include the following steps S201 to S203:
in step S201, feature information of the hand is determined.
In one embodiment, the characteristic information may include a target ratio of the palm and the fingers.
In step S202, a sample image matching the hand is searched in a preset hand image library based on the feature information.
For example, if the characteristic information includes a target ratio of the palm to the fingers, such as 25%, the ratio of the palm to the fingers in each sample image in the preset hand image library may be determined, and the sample image whose ratio is closest to the target ratio may be selected. A plurality of sample images meeting aesthetic requirements may be preset in the hand image library, which is not limited in this embodiment.
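As a sketch, assuming the preset hand image library has been prepared offline as (image, ratio) pairs, the nearest-ratio lookup of step S202 could reduce to:

```python
def find_matching_sample(target_ratio, hand_library):
    """Return the (image, ratio) pair whose ratio is closest to the target."""
    return min(hand_library, key=lambda sample: abs(sample[1] - target_ratio))

# e.g. hand_library = [(img_a, 0.22), (img_b, 0.27)]; a target ratio of 0.25
# would select img_b (|0.27 - 0.25| < |0.22 - 0.25|).
```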
In step S203, the hand is processed based on the sample image.
In this embodiment, after the sample image matched with the hand is searched in the preset hand image library based on the feature information, the hand may be processed based on the sample image.
For example, when the characteristic information includes a target ratio of the palm to the fingers, the target ratio in the hand may be adjusted based on the ratio of the palm to the fingers in the sample image, e.g., made equal to the ratio in the sample image. In this way, the shape of the hand can be corrected, adjusted, and beautified so that it better meets the set aesthetic requirement.
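A toy sketch of such an adjustment, assuming the hand crop has already been segmented at a known `split_row` into fingers (top) and palm (bottom) and defining the ratio as palm height over finger height; a practical implementation would warp along the hand skeleton rather than a single axis:

```python
import cv2

def adjust_palm_finger_ratio(hand_crop, split_row, sample_ratio):
    """Rescale the finger part so palm_height / finger_height == sample_ratio."""
    fingers, palm = hand_crop[:split_row], hand_crop[split_row:]
    # choose a new finger height matching the ratio of the matched sample
    new_finger_h = max(1, int(round(palm.shape[0] / sample_ratio)))
    fingers = cv2.resize(fingers, (fingers.shape[1], new_finger_h))
    return cv2.vconcat([fingers, palm])
```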
By way of further example, FIG. 3 is a flow chart illustrating how the target object is processed based on the sample image, according to an exemplary embodiment. As shown in fig. 3, the above step S203 may include the following steps S301 to S302:
in step S301, a first image parameter of the face region and a second image parameter of the sample image are acquired.
Wherein the first and second image parameters comprise at least one of color and texture. For example, the color in the image parameter may correspond to a skin color of a human body (e.g., white, black, yellow, or bright white, dark, etc.), and the texture in the image parameter may correspond to a skin texture of a human body (e.g., rough or fine, etc.).
In step S302, the image parameters of the hand are processed based on the first image parameters and the second image parameters.
In this embodiment, after the first image parameter of the face region and the second image parameter of the sample image are obtained, the hand in the current image to be processed may be processed based on the color and/or texture of the face region in the target image and the color and/or texture of the hand in the sample image, so as to make the hand in the current image to be processed more beautiful.
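One concrete, assumed way to realize such processing is mean/standard-deviation color transfer in LAB space, pulling the hand's color statistics toward a reference crop (the face region, the sample hand, or a blend of both); the disclosure itself leaves the exact color/texture processing open:

```python
import cv2
import numpy as np

def transfer_color(hand, reference):
    """Match the hand's LAB mean/std to the reference region (sketch)."""
    src = cv2.cvtColor(hand, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(reference, cv2.COLOR_BGR2LAB).astype(np.float32)
    s_mean, s_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-6
    r_mean, r_std = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))
    out = (src - s_mean) / s_std * r_std + r_mean  # per-channel stats match
    return cv2.cvtColor(np.clip(out, 0, 255).astype(np.uint8),
                        cv2.COLOR_LAB2BGR)
```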
As can be seen from the above description, in this embodiment, the feature information of the hand is determined, a matching sample image is searched for in the preset hand image library based on that information, and the hand is then processed based on the sample image. In this way, the hand in the target area can be beautified based on a sample image matched to its features, which improves the quality of processing target objects outside the face area and thus the processing quality of the image as a whole.
FIG. 4 is a flow diagram illustrating how a processed target image may be determined based on a processed face region and a processed target object in accordance with an exemplary embodiment; the present embodiment illustrates, on the basis of the above-described embodiments, how to determine the processed target image based on the processed face region and the processed target object. As shown in fig. 4, the determining of the processed target image based on the processed face region and the processed target object in step S104 may include the following steps S401 to S403:
in step S401, a second positional relationship between the processed face region and the processed target object is determined based on a first relationship between the face region and the target object in the target image.
In this embodiment, when it is detected that the target object in the target area satisfies the processing condition, after the target object is processed, a first relationship between the face area in the target image and the target object may be determined, and then a second positional relationship between the processed face area and the processed target object may be determined based on the first relationship.
In step S402, a position relationship between the first layer corresponding to the processed face region and the second layer corresponding to the processed target object is adjusted based on the second position relationship.
In this embodiment, the second positional relationship between the processed face region and the processed target object may be determined based on the first positional relationship between the face region and the target object in the target image, and then the positional relationship between the first layer corresponding to the processed face region and the second layer corresponding to the processed target object may be adjusted based on the second positional relationship, so that the positional relationship between the processed target object and the face region is maintained.
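A sketch of steps S401-S402, assuming each processed region is kept as its own layer together with a top-left offset, and taking the second positional relationship to be the original offset between the two regions (both layers are assumed to fit within the canvas):

```python
def compose_layers(canvas, face_layer, face_xy, obj_layer, relative_offset):
    """Paste both layers so their relative offset matches the original image."""
    fx, fy = face_xy
    ox, oy = fx + relative_offset[0], fy + relative_offset[1]
    fh, fw = face_layer.shape[:2]
    canvas[fy:fy + fh, fx:fx + fw] = face_layer   # first layer: face region
    oh, ow = obj_layer.shape[:2]
    canvas[oy:oy + oh, ox:ox + ow] = obj_layer    # second layer: target object
    return canvas
```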
In step S403, in response to detecting that a blank area exists between the adjusted first image layer and the second image layer, filling the blank area by using a preset image filling manner, so as to obtain a processed target image.
In this embodiment, after the positional relationship between the first layer corresponding to the processed face area and the second layer corresponding to the processed target object is adjusted based on the second positional relationship, whether a blank area exists between the adjusted first layer and second layer may also be detected; when such a blank area is detected, it is filled in a preset image filling manner to obtain the processed target image.
The preset image filling manner may be set based on actual service needs, for example, set as a filling manner based on content identification in the related art, which is not limited in this embodiment.
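A sketch of this filling step, assuming blank pixels are zero-valued after the layers are composited; `cv2.inpaint` is one assumed stand-in for the content-identification filling mentioned above:

```python
import cv2
import numpy as np

def fill_blank_area(composited):
    """Inpaint zero-valued (blank) pixels left between the adjusted layers."""
    gray = cv2.cvtColor(composited, cv2.COLOR_BGR2GRAY)
    mask = (gray == 0).astype(np.uint8) * 255   # 255 marks pixels to fill
    return cv2.inpaint(composited, mask, 3, cv2.INPAINT_TELEA)
```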
As can be seen from the above description, in this embodiment, the second positional relationship between the processed face region and the processed target object is determined based on the first positional relationship between the face region and the target object in the target image, and the positional relationship between the first layer corresponding to the processed face region and the second layer corresponding to the processed target object is adjusted based on the second positional relationship. The positional relationship between the two layers can thus be determined accurately from the relationship between the face region and the target region in the original image, ensuring that the positional relationship between the face and the target region remains unchanged. Moreover, in response to detecting a blank region between the adjusted first and second layers, the blank region is filled in a preset image filling manner, so that blank regions produced by the image processing are avoided and the processing quality of the image is improved.
FIG. 5 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment; the apparatus of the embodiment may be applied to a terminal device (e.g., a smart phone, a tablet computer, a notebook computer, or a wearable device) having an image processing function. As shown in fig. 5, the apparatus includes: a target image acquisition module 110, a target area determination module 120, a target object processing module 130, and a processed image determination module 140, wherein:
a target image obtaining module 110, configured to obtain a target image to be processed;
a target region determining module 120, configured to determine a target region in the target image, where the target region includes a face region and a peripheral preset region;
a target object processing module 130, configured to process a target object in the target area in response to detecting that the target object satisfies a processing condition;
a processed image determining module 140, configured to determine a processed target image based on the processed face region and the processed target object.
As can be seen from the above description, the apparatus of this embodiment acquires a target image to be processed, determines a face region and a target region in the target image, processes a target object in the target region in response to detecting that the target object satisfies a processing condition, and determines a processed target image based on the processed face region and the processed target object. This avoids distortion of the target region caused by the processing of the face region, avoids image distortion, and improves the processing quality of the image.
Fig. 6 is a block diagram illustrating an image processing apparatus according to yet another exemplary embodiment; the apparatus of the embodiment may be applied to a terminal device (e.g., a smart phone, a tablet computer, a notebook computer, or a wearable device) having an image processing function. Wherein: the target image obtaining module 210, the target area determining module 220, the target object processing module 230, and the processed image determining module 240 are the same as the target image obtaining module 110, the target area determining module 120, the target object processing module 130, and the processed image determining module 140 in the foregoing embodiment shown in fig. 5, and are not repeated herein.
In this embodiment, the target object satisfying the processing condition may include at least one of the following:
the position of the target object in the target image meets a set position condition;
the proportion of the target object in the target image meets a set proportion condition.
As shown in fig. 6, the target object may include a hand; on this basis, the target object processing module 230 may include:
a feature information determination unit 231 for determining feature information of the hand;
a sample image searching unit 232, configured to search a sample image matched with the hand in a preset hand image library based on the feature information;
a target object processing unit 233 for processing the hand based on the sample image.
In one embodiment, the characteristic information may include a target ratio of the palm and the fingers;
on this basis, the sample image searching unit 232 may be further configured to search the preset hand image library for a sample image with the ratio of the palm and the fingers closest to the target ratio.
In an embodiment, the target object processing unit 233 may be further configured to adjust the target ratio of the palm and the fingers in the hand based on the ratio of the palm and the fingers in the sample image.
In an embodiment, the target object processing unit 233 may further be configured to:
acquiring a first image parameter of the face region and a second image parameter of the sample image, wherein the first image parameter and the second image parameter comprise at least one of color and texture;
processing image parameters of the hand based on the first image parameters and the second image parameters.
In an embodiment, the processed image determining module 240 may include:
a position relation determining unit 241, configured to determine a second positional relationship between the processed face region and the processed target object based on a first positional relationship between the face region and the target object in the target image;
a processed image determining unit 242, configured to adjust a position relationship between the first layer corresponding to the processed face region and the second layer corresponding to the processed target object based on the second position relationship, so as to obtain a processed target image.
In an embodiment, the processed image determining module 240 may further include:
and a blank area filling unit 243, configured to, in response to detecting that a blank area exists between the adjusted first image layer and the adjusted second image layer, fill the blank area in a preset image filling manner.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 7 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment. For example, the apparatus 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like. In this embodiment, the electronic device may include an always-on image capture device for capturing image information.
Referring to fig. 7, apparatus 900 may include one or more of the following components: processing component 902, memory 904, power component 906, multimedia component 908, audio component 910, input/output (I/O) interface 912, sensor component 914, and communication component 916.
The processing component 902 generally controls overall operation of the device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 902 may include one or more processors 920 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 902 can include one or more modules that facilitate interaction between the processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the device 900. Examples of such data include instructions for any application or method operating on device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power component 906 provides power to the various components of device 900. The power components 906 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 900.
The multimedia component 908 comprises a screen providing an output interface between the device 900 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 900 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, audio component 910 includes a Microphone (MIC) configured to receive external audio signals when apparatus 900 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 904 or transmitted via the communication component 916. In some embodiments, audio component 910 also includes a speaker for outputting audio signals.
I/O interface 912 provides an interface between processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status assessment of various aspects of the apparatus 900. For example, sensor assembly 914 may detect an open/closed state of device 900, the relative positioning of components, such as a display and keypad of device 900, the change in position of device 900 or a component of device 900, the presence or absence of user contact with device 900, the orientation or acceleration/deceleration of device 900, and the change in temperature of device 900. The sensor assembly 914 may also include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate communications between the apparatus 900 and other devices in a wired or wireless manner. The apparatus 900 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G or 5G or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 916 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 904 comprising instructions, executable by the processor 920 of the apparatus 900 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (18)

1. An image processing method, characterized in that the method comprises:
acquiring a target image to be processed;
determining a target area in the target image, wherein the target area comprises a face area and a peripheral preset area;
in response to detecting that a target object in the target area satisfies a processing condition, processing the target object;
and determining a processed target image based on the processed face area and the processed target object.
2. The method of claim 1, wherein the target object satisfies a processing condition comprising at least one of:
the position of the target object in the target image meets a set position condition;
the proportion of the target object in the target image meets a set proportion condition.
3. The method of claim 1, wherein the target object comprises a hand;
the processing the target object comprises:
determining characteristic information of the hand;
searching a sample image matched with the hand in a preset hand image library based on the characteristic information;
and processing the hand based on the sample image to obtain a processed target object.
4. The method of claim 3, wherein the characteristic information includes a target ratio of palm and fingers;
the searching for the sample image matched with the hand in a preset hand image library based on the characteristic information comprises:
and searching a sample image with the palm and finger ratio closest to the target ratio in the preset hand image library.
5. The method of claim 4, wherein the processing the hand based on the sample image comprises:
adjusting a target ratio of the palm and fingers in the hand based on the ratio of the palm and fingers in the sample image.
6. The method of claim 3, wherein the processing the hand based on the sample image comprises:
acquiring a first image parameter of the face region and a second image parameter of the sample image, wherein the first image parameter and the second image parameter comprise at least one of color and texture;
processing image parameters of the hand based on the first image parameters and the second image parameters.
7. The method of claim 1, wherein determining a processed target image based on the processed face region and the processed target object comprises:
determining a second positional relationship between the processed face region and the processed target object based on a first positional relationship between the face region and the target object in the target image;
and adjusting the position relation between the first image layer corresponding to the processed face area and the second image layer corresponding to the processed target object based on the second position relation to obtain a processed target image.
8. The method of claim 7, further comprising:
and in response to the fact that a blank area exists between the adjusted first image layer and the adjusted second image layer, filling the blank area by adopting a preset image filling mode.
9. An image processing apparatus, characterized in that the apparatus comprises:
the target image acquisition module is used for acquiring a target image to be processed;
a target area determination module, configured to determine a target area in the target image, where the target area includes a face area and a peripheral preset area;
the target object processing module is used for processing the target object in response to the fact that the target object in the target area meets the processing condition;
and the processed image determining module is used for determining a processed target image based on the processed face area and the processed target object.
10. The apparatus of claim 9, wherein the target object satisfies a processing condition comprising at least one of:
the position of the target object in the target image meets a set position condition;
the proportion of the target object in the target image meets a set proportion condition.
11. The apparatus of claim 10, wherein the target object comprises a hand;
the target object processing module comprising:
a feature information determination unit configured to determine feature information of the hand;
the sample image searching unit is used for searching a sample image matched with the hand in a preset hand image library based on the characteristic information;
a target object processing unit for processing the hand based on the sample image.
12. The apparatus of claim 11, wherein the characteristic information comprises a target ratio of palm and fingers;
the sample image searching unit is further used for searching a sample image with the palm and finger ratio closest to the target ratio in the preset hand image library.
13. The apparatus of claim 12, wherein the target object processing unit is further configured to adjust the target ratio of the palm and the fingers in the hand based on the ratio of the palm and the fingers in the sample image.
14. The apparatus of claim 11, wherein the target object processing unit is further configured to:
acquiring a first image parameter of the face region and a second image parameter of the sample image, wherein the first image parameter and the second image parameter comprise at least one of color and texture;
processing image parameters of the hand based on the first image parameters and the second image parameters.
15. The apparatus of claim 9, wherein the processed image determining module comprises:
a position relation determining unit configured to determine a second positional relationship between the processed face region and the processed target object based on a first positional relationship between the face region and the target object in the target image;
and the processed image determining unit is used for adjusting the position relationship between the first image layer corresponding to the processed face area and the second image layer corresponding to the processed target object based on the second position relationship to obtain a processed target image.
16. The apparatus of claim 15, wherein the processed image determining module further comprises:
and the blank area filling unit is used for responding to the fact that a blank area exists between the first image layer and the second image layer after the adjustment, and filling the blank area by adopting a preset image filling mode.
17. An electronic device, characterized in that the device comprises:
a processor and a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a target image to be processed;
determining a face region and a target region in the target image, wherein the target region comprises the face region and a peripheral preset region;
in response to detecting that a target object in the target area satisfies a processing condition, processing the target object;
and determining a processed target image based on the processed face area and the processed target object.
18. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing:
acquiring a target image to be processed;
determining a face region and a target region in the target image, wherein the target region comprises the face region and a peripheral preset region;
in response to detecting that a target object in the target area satisfies a processing condition, processing the target object;
and determining a processed target image based on the processed face area and the processed target object.
CN202011173511.5A 2020-10-28 2020-10-28 Image processing method, device, equipment and storage medium Pending CN114418865A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011173511.5A CN114418865A (en) 2020-10-28 2020-10-28 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011173511.5A CN114418865A (en) 2020-10-28 2020-10-28 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114418865A (en) 2022-04-29

Family

ID=81260361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011173511.5A Pending CN114418865A (en) 2020-10-28 2020-10-28 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114418865A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023236052A1 (en) * 2022-06-07 2023-12-14 北京小米移动软件有限公司 Input information determination method and apparatus, and device and storage medium


Similar Documents

Publication Publication Date Title
US10565763B2 (en) Method and camera device for processing image
EP3179711B1 (en) Method and apparatus for preventing photograph from being shielded
US10452890B2 (en) Fingerprint template input method, device and medium
CN108108418B (en) Picture management method, device and storage medium
CN107944367B (en) Face key point detection method and device
CN107341777B (en) Picture processing method and device
CN107480785B (en) Convolutional neural network training method and device
CN107507128B (en) Image processing method and apparatus
CN112331158B (en) Terminal display adjusting method, device, equipment and storage medium
CN107730443B (en) Image processing method and device and user equipment
CN112004020B (en) Image processing method, image processing device, electronic equipment and storage medium
CN106469446B (en) Depth image segmentation method and segmentation device
CN112188096A (en) Photographing method and device, terminal and storage medium
CN114418865A (en) Image processing method, device, equipment and storage medium
CN108830194B (en) Biological feature recognition method and device
CN108596957B (en) Object tracking method and device
CN107832112B (en) Wallpaper setting method and device
CN107203315B (en) Click event processing method and device and terminal
CN115914721A (en) Live broadcast picture processing method and device, electronic equipment and storage medium
US11252341B2 (en) Method and device for shooting image, and storage medium
CN114666490B (en) Focusing method, focusing device, electronic equipment and storage medium
CN113315904B (en) Shooting method, shooting device and storage medium
CN108769513B (en) Camera photographing method and device
CN107608506B (en) Picture processing method and device
CN107707819B (en) Image shooting method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination