CN111093029B - Image processing method and related device - Google Patents


Info

Publication number
CN111093029B
CN111093029B (application CN201911417636.5A)
Authority
CN
China
Prior art keywords
image
sketch
face
line
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911417636.5A
Other languages
Chinese (zh)
Other versions
CN111093029A (en)
Inventor
程冰
陈希超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201911417636.5A priority Critical patent/CN111093029B/en
Publication of CN111093029A publication Critical patent/CN111093029A/en
Application granted granted Critical
Publication of CN111093029B publication Critical patent/CN111093029B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses an image processing method and a related device, wherein the method comprises the following steps: determining a human face image of a person in an original image to be processed; acquiring image information of the face image; judging whether the quality of the face image meets a preset condition or not according to the image information; and if not, processing the original image according to a preset image processing strategy to obtain a processed target image, wherein the target image comprises a complete human face image of the person. The embodiment of the application is beneficial to improving the quality of the face image in the image.

Description

Image processing method and related device
Technical Field
The present application relates to the field of electronic device technologies, and in particular, to an image processing method and a related apparatus.
Background
With the progress of electronic technology, the imaging functions of electronic devices have become increasingly powerful: camera imaging quality keeps improving, and so does the devices' ability to process images. However, even with such powerful hardware, users still cannot always avoid blurred or incomplete shots (for example, the subject being partially blocked and therefore not fully captured) when taking photos with an electronic device.
Disclosure of Invention
The embodiment of the application provides an image processing method and a related device, which are beneficial to improving the quality of a face image in an image.
In a first aspect, an embodiment of the present application provides an image processing method, where the method includes:
determining a human face image of a person in an original image to be processed;
acquiring image information of the face image;
judging whether the quality of the face image meets a preset condition or not according to the image information;
and if not, processing the original image according to a preset image processing strategy to obtain a processed target image, wherein the target image comprises a complete human face image of the person.
In a second aspect, an embodiment of the present application provides an image processing apparatus including a processing unit, wherein,
the processing unit is used for determining a face image of a person in an original image to be processed; for acquiring image information of the face image; for judging whether the quality of the face image meets a preset condition according to the image information; and if not, for processing the original image according to a preset image processing strategy to obtain a processed target image.
In a third aspect, an embodiment of the present application provides an electronic device, including a controller, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the controller, and the program includes instructions for executing steps in any method of the first aspect of the embodiment of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform part or all of the steps described in any one of the methods of the first aspect of the present application.
In a fifth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiment of the application, the electronic device first determines the face image of a person in an original image to be processed; then acquires image information of the face image; next judges, according to the image information, whether the quality of the face image meets a preset condition; and if not, processes the original image according to a preset image processing strategy to obtain a processed target image, wherein the target image comprises a complete face image of the person. Therefore, in the embodiment of the application, when the image quality does not meet the preset condition, the electronic device can process the original image according to the preset image processing strategy, so that the finally obtained target image comprises a complete face image and the image quality is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2A is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2B is a schematic diagram of an image processing method according to an embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 5 is a block diagram of functional units of an image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may optionally include other steps or elements not listed, or steps or elements inherent to such a process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The following describes embodiments of the present application in detail.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, where the electronic device includes a processor, a Memory, a signal processor, a communication interface, a touch screen, a WiFi module, a speaker, a microphone, a Random Access Memory (RAM), a camera, and the like.
The memory, the signal processor, the WiFi module, the touch screen, the speaker, the microphone, the RAM, and the camera are connected to the processor, and the communication interface is connected to the signal processor.
The memory stores image data, which specifically includes an original image to be processed and a processed target image of the electronic device.
The electronic devices may include various handheld devices, vehicle-mounted devices, wearable devices (e.g., smartwatches, smartbands, pedometers, etc.), computing devices or other processing devices connected to wireless modems, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal devices, and so on, having wireless communication functions. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
Referring to fig. 2A, fig. 2A is a schematic flowchart of an image processing method according to an embodiment of the present disclosure, and the image processing method is applied to an electronic device. As shown in the figure, the image processing method includes:
in step 201, the electronic device determines a face image of a person in an original image to be processed.
The face image refers to the region of the face of a person in the original image; the face image may be obtained by photographing the face of the person while the face is partially blocked by an obstacle.
Step 202, the electronic device acquires image information of a face image.
The image information includes color value information, specifically, RGB color values.
And step 203, the electronic equipment judges whether the quality of the face image meets a preset condition according to the image information.
The preset condition can be that the face image has no obstruction; if the face image has no obstruction, the quality of the face image is determined to meet the preset condition, otherwise it does not. For example, in the original image a person may be behind frosted glass or behind a medium (such as a door curtain or a window curtain): the face appears blurred behind the frosted glass, and the face image is incomplete behind the medium; both are cases of an obstruction.
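As an illustrative aid (not part of the original disclosure), the following Python/OpenCV sketch shows one possible way to approximate the "no obstruction" check from color value information; the YCrCb skin-tone range and the 0.55 threshold are assumptions chosen for demonstration only:

```python
import cv2
import numpy as np

def face_seems_occluded(face_bgr, skin_ratio_threshold=0.55):
    """Rough occlusion check: if too few pixels of the face crop fall inside a
    loose skin-tone range in YCrCb space, treat the face as (partly) blocked."""
    ycrcb = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)    # assumed skin-tone bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)
    skin_mask = cv2.inRange(ycrcb, lower, upper)
    skin_ratio = np.count_nonzero(skin_mask) / skin_mask.size
    return skin_ratio < skin_ratio_threshold
```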
And 204, if not, the electronic equipment processes the original image according to a preset image processing strategy to obtain a processed target image.
Wherein the target image comprises a complete face image of the person.
It can be seen that, in the embodiment of the application, the electronic device first determines the face image of a person in an original image to be processed; then acquires image information of the face image; next judges, according to the image information, whether the quality of the face image meets a preset condition; and if not, processes the original image according to a preset image processing strategy to obtain a processed target image, wherein the target image comprises a complete face image of the person. Therefore, in the embodiment of the application, the electronic device can process the original image according to the image processing strategy when the image quality does not meet the preset condition, so that the finally obtained target image comprises the complete face image and the image quality is improved.
In one possible example, the determining whether the quality of the face image meets a preset condition according to the image information includes: judging whether the human face image has a shielded area or not according to the image information; and if so, determining that the quality of the face image does not meet a preset condition.
In one possible example, the processing the original image according to a preset image processing policy to obtain a processed target image includes: generating a sketch image corresponding to the face image according to the pixel information of the face image; determining a first type sketch line belonging to an occlusion object in the sketch image; deleting the first type sketch lines in the sketch image to enable the corresponding area of the first type sketch lines in the sketch image to be a vacant area; determining a face part corresponding to the vacant area; determining a second type sketch line intersected with the edge line of the vacancy area in the sketch image; inquiring a preset database by taking the face part and the second type sketch line as inquiry identifications to obtain filling lines corresponding to the inquiry identifications; adding the fill line to the vacant area of the sketch image; and generating the target image according to the sketch image, wherein the target image is a color image.
The electronic equipment is provided with a neural network model, and a large number of sketch images of the face are input into the neural network model in advance for training, so that the electronic equipment can identify lines belonging to the human face of a person and lines belonging to a shielding object in the sketch images.
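For illustration only (not part of the original disclosure), a minimal stand-in for such a line-classification model is sketched below in PyTorch; the architecture, the pixel-wise two-class output, and all training details are assumptions:

```python
import torch
import torch.nn as nn

class LineClassifier(nn.Module):
    """Toy stand-in for the line-classification model described above: a small
    fully convolutional network that labels every sketch pixel as a face line
    (class 0) or an occluder line (class 1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 1),             # two class logits per pixel
        )

    def forward(self, sketch):               # sketch: (N, 1, H, W), values in [0, 1]
        return self.net(sketch)              # logits: (N, 2, H, W)
```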
The preset database stores a large number of sketch images of human faces, and the electronic device can query the preset database through human face parts and sketch lines in the sketch images currently processed, obtain filling lines (sketch lines) with the highest similarity to the sketch lines of the human faces in the sketch images currently processed, and further fill vacant areas in the sketch images currently processed.
The first type sketch lines refer to lines generated by converting shielding objects in an original image into sketch images; the second type of sketch lines refer to lines generated by converting a face image in an original image into a sketch image.
For example, referring to fig. 2B, which is a schematic diagram of image processing according to an embodiment of the present application: after the original image is converted into a sketch image, the face image contains first type sketch lines caused by two shielding objects covering parts of the nose. The electronic device then deletes the first type sketch lines, leaving two vacant areas at the nose part of the face image. Because the vacant areas are located at the nose and second type sketch lines already exist in the nose part of the face image, the electronic device locates the sketch data of the nose part in the preset database, queries that data with the second type sketch lines as the query identifier, obtains filling lines matched with the second type sketch lines, and fills the filling lines into the vacant areas to obtain a final, complete face sketch image.
Therefore, in this example, the electronic device can identify and delete the first type sketch lines generated by the shielding object, determine the filling lines of the vacant region according to the existing second type sketch lines in the face image, fill the vacant region with the filling lines, generate a complete sketch image, and further enable the finally generated target image to include the complete face image.
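A compact Python/OpenCV skeleton of this repair flow is sketched below for illustration; `classify_lines` (splitting occluder lines from face lines) and `query_fill_lines` (looking up the preset database) are assumed callables standing in for the trained model and the database described above, and Canny edges stand in for the sketch conversion:

```python
import cv2
import numpy as np

def repair_face_sketch(face_bgr, classify_lines, query_fill_lines):
    """Illustrative skeleton: sketch conversion, occluder-line removal, and
    fill-line retrieval, following the steps described in the text."""
    # 1. Sketch image from the face image's pixel information (Canny as a stand-in).
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    sketch = cv2.Canny(gray, 50, 150)

    # 2. First-type (occluder) vs. second-type (face) sketch lines, as boolean masks.
    occluder_mask, face_mask = classify_lines(sketch)

    # 3. Delete the occluder lines; their area becomes the vacant area.
    sketch[occluder_mask] = 0
    vacant_mask = occluder_mask

    # 4. Second-type lines that intersect the vacant area's edge: dilate the
    #    vacant area by one pixel and intersect with the remaining face lines.
    kernel = np.ones((3, 3), np.uint8)
    ring = cv2.dilate(vacant_mask.astype(np.uint8), kernel) > 0
    boundary_lines = face_mask & ring

    # 5. Query the preset database for matching fill lines and paste them in.
    fill = query_fill_lines(vacant_mask, boundary_lines)   # uint8 sketch patch
    return np.maximum(sketch, fill)
```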
In one possible example, the generating the target image from the sketch image comprises: dividing the sketch image into a preset number of areas according to sketch lines in the sketch image; determining color value information of pixels in each region according to the color value information of the original image, the face part of each region in the preset number of regions and the sketch lines included in each region; and generating the target image according to the color value information of the pixels in each area.
The electronic device may preset a partitioning policy. Specifically, the partitioning may follow the parts of the human face: the regions in the sketch image, such as a forehead region, an eye region, a nose region, a mouth region and an ear region, are determined according to the positions and characteristics of the sketch lines in the sketch image. The electronic device can then determine color value information in the original image, such as the RGB color value of the skin and the RGB color value of the eyes, and use the sketch lines within each region to locate the subdivided features of that region; in the eye region, for example, it distinguishes a skin part from an eye part, assigns the skin RGB color value to the pixels of the skin part and the eye RGB color value to the pixels of the eye part. For the vacant areas, filling is performed according to the RGB color values of the areas the vacant areas touch; in fig. 2B, for instance, the vacant areas cover only a small part of the nose region while most of the nose region is not blocked, so the vacant areas are filled with the RGB color values of the unblocked part. Alternatively, the face part corresponding to the vacant area is used as the query identifier to query the database for a preset color value of that face part, and the vacant area is filled with that preset value; in fig. 2B, for example, since the vacant areas are at the nose, the database is queried with the nose part, the preset RGB color value of the nose part is obtained, and the vacant areas are filled with it.
Therefore, in this example, the electronic device can divide the face image into a plurality of regions, and determine color value information according to the face part to which each region belongs, so that the color value of the finally generated target image is more matched with the face image, and the intelligence of the electronic device is improved.
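As a simplified illustration (assumed, not taken from the disclosure) of generating the color target image from per-region color values, where `region_masks` maps a face part name to a boolean mask and `region_colors` maps the same name to an RGB triple recovered from the original image:

```python
import numpy as np

def colorize_regions(region_masks, region_colors, height, width):
    """Paint each face region with the RGB color value determined for it."""
    target = np.zeros((height, width, 3), dtype=np.uint8)
    for part_name, mask in region_masks.items():
        target[mask] = region_colors[part_name]   # e.g. skin RGB for the forehead
    return target
```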
In one possible example, before querying the preset database, the method further includes: acquiring a preset number of face sketch images; decomposing each face sketch image in the preset number of face sketch images into a first number of face parts to obtain a preset number of sketch images corresponding to each face part in the first number of face parts, wherein the first number is a positive integer; forming a sub-database corresponding to each face part according to a preset number of sketch images corresponding to each face part; and forming the preset database according to the sub-database of each face part.
Wherein the first number of face parts may be determined in advance according to the proportions of the human face, for example an eye part, an ear part, a nose part and a mouth part; or a large number of face sketch images may be input into a preset neural network model, and the model determines each part, such as an eye part, an eyebrow part, an ear part and a mouth part, by analyzing the characteristics of the sketch lines in those face sketch images.
Therefore, in this example, the electronic device may divide the face sketch image into a plurality of portions, construct a sub-database corresponding to each face portion, and further form a preset database, so that in a subsequent processing process, a targeted search is facilitated according to different face portions, and the efficiency of data processing is improved.
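A minimal sketch of this database construction is shown below for illustration; `decompose` is an assumed function that splits one face sketch into named part crops (eyes, nose, mouth, and so on):

```python
def build_sketch_database(face_sketches, decompose):
    """Group part-level sketch crops into one sub-database per face part."""
    database = {}                                  # part name -> list of part sketches
    for face_sketch in face_sketches:
        for part_name, part_sketch in decompose(face_sketch).items():
            database.setdefault(part_name, []).append(part_sketch)
    return database
```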
In one possible example, the querying a preset database by using the first face part and the second type sketch line as query identifiers to obtain a filling line corresponding to the query identifier includes: determining a first sub-database corresponding to the first face part in the preset database according to the first face part; comparing the second type sketch lines with the preset number of sketch images in the first sub-database, and determining the similarity between the second type sketch lines and each sketch image in the first sub-database; determining the sketch image with the maximum similarity among the preset number of sketch images in the first sub-database as a target sub-sketch image; placing the target sub-sketch image and the second type sketch line on the same coordinate plane at the maximum overlapping rate; and determining a line in the target sub-sketch image which does not overlap with the second type sketch line as the filling line.
Therefore, in this example, the electronic device can determine the similarity between each sketch image in the first sub-database and the second type line, and finally determine to generate a filling line according to the sketch image with the maximum similarity to the second type line, so that the filling engagement degree is improved, and the intelligence of the electronic device is improved.
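For illustration, a Python sketch of this query step is given below; `best_alignment` is an assumed helper that returns a candidate sketch aligned at its maximum overlapping rate with the second type lines (one possible form of it appears after the overlap-rate description below), and all inputs are boolean pixel masks:

```python
def query_fill_line(part_name, face_lines, database, best_alignment):
    """Pick the most similar part sketch from the sub-database, align it at the
    maximum overlapping rate, and keep its non-overlapping pixels as the fill line."""
    best_rate, best_aligned = -1.0, None
    for candidate in database[part_name]:              # the first sub-database
        aligned, rate = best_alignment(candidate, face_lines)
        if rate > best_rate:
            best_rate, best_aligned = rate, aligned
    return best_aligned & ~face_lines                  # fill line: the non-overlapping part
```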
In one possible example, the determining the similarity between the second type sketch line and each sketch image in the first sub-database comprises: executing a preset operation on each sketch image in the preset number of sketch images in the first sub-database to obtain the similarity of each sketch image in the preset number of sketch images; the preset operation comprises the following steps: placing a first sketch image with similarity to be calculated and the second type sketch line in the same coordinate plane; adjusting the position of the first sketch image or the second type sketch line to determine the maximum overlapping rate of the first sketch image and the second type sketch line; and determining the maximum overlapping rate as the similarity of the first sketch image and the second type sketch line.
Wherein adjusting the position of the first sketch image or the second type sketch line comprises: the electronic device first fixes either the first sketch image or the second type sketch line in the coordinate plane, then repeatedly adjusts the position of the other, unfixed one, determines the overlapping rate at each position after each adjustment to obtain a plurality of overlapping rates, and finally takes the maximum of these overlapping rates. Specifically, the overlapping rate is calculated as follows: determine a first number of pixels occupied in the coordinate plane when the second type sketch lines are displayed; determine a second number of pixels occupied when the first sketch image is displayed; determine a third number of pixels shared by the first number of pixels and the second number of pixels; the overlapping rate is equal to the ratio of the third number to the first number.
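The overlap-rate computation and the position adjustment just described can be sketched as follows (illustrative only; the search window of plus or minus 10 pixels and the use of numpy boolean masks are assumptions):

```python
import numpy as np

def overlap_rate(face_lines, candidate_lines):
    """Shared pixels divided by the pixels occupied by the second type sketch lines."""
    shared = np.count_nonzero(face_lines & candidate_lines)
    total = np.count_nonzero(face_lines)
    return shared / total if total else 0.0

def best_alignment(candidate_lines, face_lines, max_shift=10):
    """Fix the face lines, slide the candidate over a small window, and keep the
    placement with the highest overlap rate."""
    best_rate, best_shifted = 0.0, candidate_lines
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(candidate_lines, dy, axis=0), dx, axis=1)
            rate = overlap_rate(face_lines, shifted)
            if rate > best_rate:
                best_rate, best_shifted = rate, shifted
    return best_shifted, best_rate
```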
Referring to fig. 3, fig. 3 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure, and the image processing method is applied to an electronic device, consistent with the embodiment shown in fig. 2A. As shown in the figure, the image processing method includes:
in step 301, the electronic device determines a face image of a person in an original image to be processed.
Step 302, the electronic device obtains image information of a face image.
And step 303, the electronic equipment judges whether the face image has a blocked area according to the image information.
And 304, if yes, the electronic equipment generates a sketch image corresponding to the face image according to the pixel information of the face image.
In step 305, the electronic device determines a first type of sketch line in the sketch image that belongs to an occluding object.
Step 306, the electronic device deletes the first type sketch line in the sketch image, so that a region of the first type sketch line corresponding to the sketch image is a blank region.
And 307, the electronic equipment determines the face part corresponding to the vacant area.
In step 308, the electronic device determines a second type sketch line intersecting with an edge line of the blank area in the sketch image.
And 309, the electronic equipment queries a preset database by taking the face part and the second type sketch line as query identifiers, and acquires filling lines corresponding to the query identifiers.
In step 310, the electronic device adds a fill line to the blank area of the sketch image.
In step 311, the electronic device generates a target image according to the sketch image, wherein the target image is a color image.
It can be seen that, in the embodiment of the application, the electronic device first determines the face image of a person in an original image to be processed; then acquires image information of the face image; next judges, according to the image information, whether the quality of the face image meets a preset condition; and if not, processes the original image according to a preset image processing strategy to obtain a processed target image, wherein the target image comprises a complete face image of the person. Therefore, in the embodiment of the application, when the image quality does not meet the preset condition, the electronic device can process the original image according to the preset image processing strategy, so that the finally obtained target image comprises a complete face image and the image quality is improved.
Consistent with the embodiments shown in fig. 2A and fig. 3, please refer to fig. 4, and fig. 4 is a schematic structural diagram of an electronic device 400 provided in an embodiment of the present application, as shown in the figure, the electronic device 400 includes an application processor 410, a memory 420, a communication interface 430, and one or more programs 421, where the one or more programs 421 are stored in the memory 420 and configured to be executed by the application processor 410, and the one or more programs 421 include instructions for performing the following steps:
determining a human face image of a person in an original image to be processed;
acquiring image information of the face image;
judging whether the quality of the face image meets a preset condition or not according to the image information;
and if not, processing the original image according to a preset image processing strategy to obtain a processed target image, wherein the target image comprises a complete human face image of the person.
It can be seen that, in the embodiment of the application, the electronic device first determines the face image of a person in an original image to be processed; then acquires image information of the face image; next judges, according to the image information, whether the quality of the face image meets a preset condition; and if not, processes the original image according to a preset image processing strategy to obtain a processed target image, wherein the target image comprises a complete face image of the person. Therefore, in the embodiment of the application, when the image quality does not meet the preset condition, the electronic device can process the original image according to the preset image processing strategy, so that the finally obtained target image comprises a complete face image and the image quality is improved.
In one possible example, in terms of the determining whether the quality of the face image meets the preset condition according to the image information, the instructions in the program are specifically configured to perform the following operations: judging whether the human face image has a shielded area or not according to the image information; and if so, determining that the quality of the face image does not meet a preset condition.
In a possible example, in terms of processing the original image according to a preset image processing policy to obtain a processed target image, the instructions in the program are specifically configured to perform the following operations: generating a sketch image corresponding to the face image according to the pixel information of the face image; determining a first type sketch line belonging to an occlusion object in the sketch image; deleting the first type sketch lines in the sketch image to enable the corresponding area of the first type sketch lines in the sketch image to be a vacant area; determining a face part corresponding to the vacant area; determining a second type sketch line intersected with the edge line of the vacancy area in the sketch image; inquiring a preset database by taking the face part and the second type sketch line as inquiry identifications to obtain filling lines corresponding to the inquiry identifications; adding the fill line to the vacant area of the sketch image; and generating the target image according to the sketch image, wherein the target image is a color image.
In one possible example, in the generating the target image from the sketch image, the instructions in the program are specifically configured to: dividing the sketch image into a preset number of areas according to sketch lines in the sketch image; determining color value information of pixels in each region according to the color value information of the original image, the face part of each region in the preset number of regions and the sketch lines included in each region; and generating the target image according to the color value information of the pixels in each area.
In one possible example, in a previous aspect to the querying the preset database, the instructions in the program are further configured to: acquiring a preset number of face sketch images; decomposing each face sketch image in the preset number of face sketch images into a first number of face parts to obtain a preset number of sketch images corresponding to each face part in the first number of face parts, wherein the first number is a positive integer; forming a sub-database corresponding to each face part according to a preset number of sketch images corresponding to each face part; and forming the preset database according to the sub-database of each face part.
In a possible example, in the aspect that the first face part and the second type sketch line are used as query identifiers, a preset database is queried, and a filling line corresponding to the query identifier is obtained, the instructions in the program are specifically configured to perform the following operations: determining a first sub-database corresponding to the first face part in the preset database according to the first face part; comparing the second type sketch lines with the preset number of sketch images in the first sub-database, and determining the similarity between the second type sketch lines and each sketch image in the first sub-database; determining the sketch image with the maximum similarity in the preset number of sketch images in the first sub-database as a target sub-sketch image; placing the target sub-sketch image and the second type sketch line on the same coordinate plane at the maximum overlapping rate; and determining a line which is not overlapped with the second type sketch line in the target sub-sketch image as the filling line.
In one possible example, in the determining the similarity between the second type sketch line and each sketch image in the first sub-database, the instructions in the program are specifically configured to: executing a preset operation on each sketch image in the preset number of sketch images in the first sub-database to obtain the similarity of each sketch image in the preset number of sketch images; the preset operation comprises the following steps: placing a first sketch image with similarity to be calculated and the second type sketch line in the same coordinate plane; adjusting the position of the first sketch image or the second type sketch line to determine the maximum overlapping rate of the first sketch image and the second type sketch line; and determining the maximum overlapping rate as the similarity of the first sketch image and the second type sketch line.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the above-mentioned functions. Those of skill in the art would readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one control unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 5 is a block diagram showing functional units of an image processing apparatus 500 according to an embodiment of the present application. The image processing apparatus 500 is applied to an electronic device, and the image processing apparatus 500 includes a processing unit 501 and a communication unit 502, where:
the processing unit 501 is configured to determine a face image of a person in an original image to be processed; and image information for acquiring the face image; the image processing device is used for judging whether the quality of the face image meets a preset condition or not according to the image information; and if not, processing the original image according to a preset image processing strategy to obtain a processed target image.
The image processing apparatus 500 may further include a communication unit 502 and a storage unit 503, where the storage unit 503 is used for storing program codes and data of the electronic device. The processing unit 501 may be a processor, the communication unit 502 may be a touch display screen or a transceiver, and the storage unit 503 may be a memory.
It can be seen that, in the embodiment of the application, the electronic device first determines the face image of a person in an original image to be processed; then acquires image information of the face image; next judges, according to the image information, whether the quality of the face image meets a preset condition; and if not, processes the original image according to a preset image processing strategy to obtain a processed target image, wherein the target image comprises a complete face image of the person. Therefore, in the embodiment of the application, when the image quality does not meet the preset condition, the electronic device can process the original image according to the preset image processing strategy, so that the finally obtained target image comprises a complete face image and the image quality is improved.
In a possible example, in terms of the determining whether the quality of the face image meets a preset condition according to the image information, the processing unit 501 is specifically configured to: judging whether the human face image has a shielded area or not according to the image information; and if so, determining that the quality of the face image does not meet a preset condition.
In a possible example, in terms of processing the original image according to a preset image processing policy to obtain a processed target image, the processing unit 501 is specifically configured to: generating a sketch image corresponding to the face image according to the pixel information of the face image; determining a first type sketch line belonging to an occlusion object in the sketch image; deleting the first type sketch lines in the sketch image to enable the corresponding area of the first type sketch lines in the sketch image to be a vacant area; determining a face part corresponding to the vacant area; determining a second type sketch line intersected with the edge line of the vacancy area in the sketch image; inquiring a preset database by taking the face part and the second type sketch line as inquiry identifications to obtain filling lines corresponding to the inquiry identifications; adding the fill line to the vacant area of the sketch image; and generating the target image according to the sketch image, wherein the target image is a color image.
In one possible example, in the aspect of generating the target image according to the sketch image, the processing unit 501 is specifically configured to: dividing the sketch image into a preset number of areas according to sketch lines in the sketch image; determining color value information of pixels in each region according to the color value information of the original image, the face part of each region in the preset number of regions and the sketch lines included in each region; and generating the target image according to the color value information of the pixels in each area.
In one possible example, in a previous aspect to the query of the preset database, the processing unit 501 is further configured to: acquiring a preset number of face sketch images; decomposing each face sketch image in the preset number of face sketch images into a first number of face parts to obtain a preset number of sketch images corresponding to each face part in the first number of face parts, wherein the first number is a positive integer; forming a sub-database corresponding to each face part according to a preset number of sketch images corresponding to each face part; and forming the preset database according to the sub-database of each face part.
In a possible example, in the aspect that the first face part and the second type sketch line are used as query identifiers, a preset database is queried, and a filling line corresponding to the query identifier is obtained, the processing unit 501 is specifically configured to: determining a first sub-database corresponding to the first face part in the preset database according to the first face part; comparing the second type sketch lines with the preset number of sketch images in the first sub-database, and determining the similarity between the second type sketch lines and each sketch image in the first sub-database; determining the sketch image with the maximum similarity in the preset number of sketch images in the first sub-database as a target sub-sketch image; placing the target sub-sketch image and the second type sketch line on the same coordinate plane at the maximum overlapping rate; and determining a line which is not overlapped with the second type sketch line in the target sub-sketch image as the filling line.
In one possible example, in the aspect of determining the similarity between the second type sketch line and each sketch image in the first sub-database, the processing unit 501 is specifically configured to: executing a preset operation on each sketch image in the preset number of sketch images in the first sub-database to obtain the similarity of each sketch image in the preset number of sketch images; the preset operation comprises the following steps: placing a first sketch image with similarity to be calculated and the second type sketch line in the same coordinate plane; adjusting the position of the first sketch image or the second type sketch line to determine the maximum overlapping rate of the first sketch image and the second type sketch line; and determining the maximum overlapping rate as the similarity of the first sketch image and the second type sketch line.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes a mobile terminal.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising a mobile terminal.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated into one control unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part of it that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (7)

1. An image processing method, characterized in that the method comprises:
determining a human face image of a person in an original image to be processed;
acquiring image information of the face image;
judging whether the quality of the face image meets a preset condition or not according to the image information;
if not, processing the original image according to a preset image processing strategy to obtain a processed target image, wherein the target image comprises a complete human face image of the person;
the processing the original image according to a preset image processing strategy to obtain a processed target image includes: generating a sketch image corresponding to the face image according to the pixel information of the face image; determining a first type sketch line belonging to an occlusion object in the sketch image; deleting the first type sketch lines in the sketch image to enable the corresponding area of the first type sketch lines in the sketch image to be a vacant area; determining a first face part corresponding to the vacant area; determining a second type sketch line intersected with the edge line of the vacancy area in the sketch image; inquiring a preset database by taking the first face part and the second type sketch line as inquiry identifications to obtain filling lines corresponding to the inquiry identifications; adding the fill line to the vacant area of the sketch image; generating the target image according to the sketch image, wherein the target image is a color image;
wherein the querying a preset database by using the first face part and the second type sketch line as query identifiers to obtain a filling line corresponding to the query identifiers comprises: determining a first sub-database corresponding to the first face part in the preset database according to the first face part; comparing the second type sketch lines with a preset number of sketch images in the first sub-database, and determining the similarity between the second type sketch lines and each sketch image in the first sub-database; determining the sketch image with the maximum similarity among the preset number of sketch images in the first sub-database as a target sub-sketch image; placing the target sub-sketch image and the second type sketch line on the same coordinate plane at the maximum overlapping rate; and determining a line in the target sub-sketch image which does not overlap with the second type sketch line as the filling line;
wherein, the judging whether the quality of the face image meets the preset condition according to the image information comprises: judging whether the human face image has a shielded area or not according to the image information; and if so, determining that the quality of the face image does not meet a preset condition, otherwise, determining that the quality of the face image meets the preset condition.
2. The method of claim 1, wherein the generating the target image from the sketch image comprises:
dividing the sketch image into a preset number of areas according to sketch lines in the sketch image;
determining color value information of pixels in each region according to the color value information of the original image, the face part of each region in the preset number of regions and the sketch lines included in each region;
and generating the target image according to the color value information of the pixels in each area.
3. The method of claim 2, wherein prior to querying the predetermined database, the method further comprises:
acquiring a preset number of face sketch images;
decomposing each face sketch image in the preset number of face sketch images into a first number of face parts to obtain a preset number of sketch images corresponding to each face part in the first number of face parts, wherein the first number is a positive integer;
forming a sub-database corresponding to each face part according to a preset number of sketch images corresponding to each face part;
and forming the preset database according to the sub-database of each face part.
4. The method of claim 1, wherein determining the similarity of the second type sketch line to each sketch image in the first sub-database comprises:
executing a preset operation on each sketch image in the preset number of sketch images in the first sub-database to obtain the similarity of each sketch image in the preset number of sketch images;
the preset operation comprises the following steps:
placing a first sketch image with similarity to be calculated and the second type sketch line in the same coordinate plane;
adjusting the position of the first sketch image or the second type sketch line to determine the maximum overlapping rate of the first sketch image and the second type sketch line; and determining the maximum overlapping rate as the similarity of the first sketch image and the second type sketch line.
5. An image processing apparatus is characterized in that,
the image processing apparatus includes a processing unit, wherein,
the processing unit is used for determining a face image of a person in an original image to be processed; for acquiring image information of the face image; for judging whether the quality of the face image meets a preset condition according to the image information; and if not, for processing the original image according to a preset image processing strategy to obtain a processed target image;
in terms of processing the original image according to a preset image processing strategy to obtain a processed target image, the processing unit is configured to: generate a sketch image corresponding to the face image according to the pixel information of the face image; determine a first type sketch line belonging to an occlusion object in the sketch image; delete the first type sketch lines in the sketch image, so that the corresponding area of the first type sketch lines in the sketch image becomes a vacant area; determine a first face part corresponding to the vacant area; determine a second type sketch line intersecting with the edge line of the vacant area in the sketch image; query a preset database by taking the first face part and the second type sketch line as query identifiers to obtain a filling line corresponding to the query identifiers; add the filling line to the vacant area of the sketch image; and generate the target image according to the sketch image, wherein the target image is a color image;
in terms of querying a preset database by using the first face part and the second type sketch line as query identifiers to obtain a filling line corresponding to the query identifiers, the processing unit is configured to: determine a first sub-database corresponding to the first face part in the preset database according to the first face part; compare the second type sketch lines with a preset number of sketch images in the first sub-database, and determine the similarity between the second type sketch lines and each sketch image in the first sub-database; determine the sketch image with the maximum similarity among the preset number of sketch images in the first sub-database as a target sub-sketch image; place the target sub-sketch image and the second type sketch line on the same coordinate plane at the maximum overlapping rate; and determine a line in the target sub-sketch image which does not overlap with the second type sketch line as the filling line;
wherein, the judging whether the quality of the face image meets the preset condition according to the image information comprises: judging whether the human face image has a shielded area or not according to the image information; and if so, determining that the quality of the face image does not meet a preset condition, otherwise, determining that the quality of the face image meets the preset condition.
6. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-4.
7. A computer-readable storage medium, characterized in that
it stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-4.
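
To make the sketch-repair processing recited in claim 5 concrete, the following minimal Python/NumPy sketch walks through the same steps: generate a sketch, delete the occluder (first-type) lines, locate the boundary (second-type) lines, query the database for filling lines, and paste them into the vacant region. It is illustrative only: the gradient-threshold sketch generator, the externally supplied boolean occlusion mask, and the helper query_fill_lines (a hypothetical lookup sketched separately in this section) stand in for the model-based components the claim leaves unspecified.

    import numpy as np

    def generate_sketch(face_gray: np.ndarray, grad_thresh: float = 30.0) -> np.ndarray:
        # Rough stand-in for "generate a sketch image from pixel information":
        # binarise the gradient magnitude so strong edges become sketch lines (True).
        gy, gx = np.gradient(face_gray.astype(np.float32))
        return np.hypot(gx, gy) > grad_thresh

    def boundary(mask: np.ndarray) -> np.ndarray:
        # Interior 4-neighbour boundary of a boolean mask.
        eroded = mask.copy()
        eroded[1:-1, 1:-1] &= (mask[:-2, 1:-1] & mask[2:, 1:-1] &
                               mask[1:-1, :-2] & mask[1:-1, 2:])
        return mask & ~eroded

    def dilate(mask: np.ndarray) -> np.ndarray:
        # One-pixel 4-connected dilation of a boolean mask.
        out = mask.copy()
        out[1:, :] |= mask[:-1, :]
        out[:-1, :] |= mask[1:, :]
        out[:, 1:] |= mask[:, :-1]
        out[:, :-1] |= mask[:, 1:]
        return out

    def repair_occluded_sketch(face_gray, occlusion_mask, face_part, sub_databases):
        # face_gray: 2-D grayscale face image; occlusion_mask: boolean mask of the
        # occluded region (assumed given by an upstream detector).
        sketch = generate_sketch(face_gray)

        first_type = sketch & occlusion_mask          # lines belonging to the occluder
        sketch &= ~first_type                         # delete them from the sketch
        vacant = occlusion_mask                       # region now empty in the sketch

        # Second-type lines: remaining sketch lines meeting the vacant region's edge.
        second_type = sketch & dilate(boundary(vacant))

        fill_lines = query_fill_lines(face_part, second_type, sub_databases)
        sketch |= fill_lines & vacant                 # add filling lines inside the gap
        return sketch                                 # colourising this sketch into the
                                                      # final target image is a separate,
                                                      # model-based step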
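
The database query of claim 5 (sub-database lookup, similarity ranking, maximum-overlap placement, non-overlapping lines taken as filling lines) could look roughly like the sketch below. The IoU similarity measure, the translation-only overlap search, and a sub_databases dictionary keyed by face part are illustrative assumptions; the claim does not fix these choices.

    import numpy as np

    def similarity(lines_a: np.ndarray, lines_b: np.ndarray) -> float:
        # Intersection-over-union of two boolean line masks; one of many
        # possible similarity measures.
        inter = np.logical_and(lines_a, lines_b).sum()
        union = np.logical_or(lines_a, lines_b).sum()
        return float(inter) / float(union) if union else 0.0

    def place_at_max_overlap(candidate: np.ndarray, second_type: np.ndarray,
                             max_shift: int = 10) -> np.ndarray:
        # "Place on the same coordinate plane at the maximum overlap rate",
        # approximated here by scanning small integer translations.
        best_shift, best_score = (0, 0), -1
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(candidate, dy, axis=0), dx, axis=1)
                score = int(np.logical_and(shifted, second_type).sum())
                if score > best_score:
                    best_shift, best_score = (dy, dx), score
        dy, dx = best_shift
        return np.roll(np.roll(candidate, dy, axis=0), dx, axis=1)

    def query_fill_lines(face_part: str, second_type: np.ndarray,
                         sub_databases: dict, preset_number: int = 50) -> np.ndarray:
        # First sub-database for the face part, then the most similar of a preset
        # number of sub-sketch images, aligned at maximum overlap; its lines that
        # do not overlap the second-type lines are returned as the filling lines.
        candidates = sub_databases[face_part][:preset_number]
        target = max(candidates, key=lambda cand: similarity(second_type, cand))
        aligned = place_at_max_overlap(target, second_type)
        return aligned & ~second_type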
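
The quality judgment of claim 5, where the preset condition reduces to "no occluded region", can be sketched as follows. The occlusion_mask input is assumed to come from an upstream occlusion detector, which the claim does not specify; the fraction threshold is only an illustrative knob.

    import numpy as np

    def face_quality_ok(occlusion_mask: np.ndarray,
                        max_occluded_fraction: float = 0.0) -> bool:
        # Preset condition used here: the face image has no occluded region.
        # occlusion_mask is boolean with True marking occluded pixels.
        return occlusion_mask.mean() <= max_occluded_fraction

    # Usage per the claimed flow: only repair when the quality check fails.
    # if not face_quality_ok(occlusion_mask):
    #     repaired_sketch = repair_occluded_sketch(face_gray, occlusion_mask,
    #                                              face_part, sub_databases)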
CN201911417636.5A 2019-12-31 2019-12-31 Image processing method and related device Active CN111093029B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911417636.5A CN111093029B (en) 2019-12-31 2019-12-31 Image processing method and related device


Publications (2)

Publication Number Publication Date
CN111093029A CN111093029A (en) 2020-05-01
CN111093029B true CN111093029B (en) 2021-07-06

Family

ID=70397019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911417636.5A Active CN111093029B (en) 2019-12-31 2019-12-31 Image processing method and related device

Country Status (1)

Country Link
CN (1) CN111093029B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688645B (en) * 2021-08-11 2023-11-03 广州爱格尔智能科技有限公司 Identification method, system and equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9131150B1 (en) * 2014-06-06 2015-09-08 Amazon Technologies, Inc. Automatic exposure control and illumination for head tracking
CN106791393A (en) * 2016-12-20 2017-05-31 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN107066955A (en) * 2017-03-24 2017-08-18 武汉神目信息技术有限公司 A kind of method that whole face is reduced from local facial region
CN107862270A (en) * 2017-10-31 2018-03-30 深圳云天励飞技术有限公司 Face classification device training method, method for detecting human face and device, electronic equipment
CN107945118A (en) * 2017-10-30 2018-04-20 南京邮电大学 A kind of facial image restorative procedure based on production confrontation network
CN108259766A (en) * 2018-03-29 2018-07-06 宁波大学 A kind of mobile intelligent terminal is taken pictures image pickup processing method
CN108932693A (en) * 2018-06-15 2018-12-04 中国科学院自动化研究所 Face editor complementing method and device based on face geological information
CN109785439A (en) * 2018-12-27 2019-05-21 深圳云天励飞技术有限公司 Human face sketch image generating method and Related product

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10839575B2 (en) * 2018-03-15 2020-11-17 Adobe Inc. User-guided image completion with image completion neural networks
CN110414378A (en) * 2019-07-10 2019-11-05 南京信息工程大学 A kind of face identification method based on heterogeneous facial image fusion feature


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Face completion algorithm based on conditional generative adversarial networks; Cao Kun et al.; Transducer and Microsystem Technologies; 30 June 2019; Vol. 38, No. 6, pp. 129-132 *
Example-based automatic portrait generation algorithm; Chen Hong et al.; Chinese Journal of Computers; 28 February 2003; Vol. 26, No. 2, pp. 147-152 *

Also Published As

Publication number Publication date
CN111093029A (en) 2020-05-01

Similar Documents

Publication Publication Date Title
CN109784255B (en) Neural network training method and device and recognition method and device
JP7038853B2 (en) Image processing methods and devices, electronic devices and computer-readable storage media
US11132544B2 (en) Visual fatigue recognition method, visual fatigue recognition device, virtual reality apparatus and storage medium
JP2018503152A (en) Training method and apparatus for convolutional neural network model
CN111160309B (en) Image processing method and related equipment
CN112214773B (en) Image processing method and device based on privacy protection and electronic equipment
CN110796721A (en) Color rendering method and device of virtual image, terminal and storage medium
CN106560840A (en) Recognition processing method and device of image information
CN110177210B (en) Photographing method and related device
CN110689479A (en) Face makeup method, device, equipment and medium
CN108921856A (en) Image cropping method, apparatus, electronic equipment and computer readable storage medium
CN107786780A (en) Video image noise reducing method, device and computer-readable recording medium
CN108769636B (en) Projection method and device and electronic equipment
CN111093029B (en) Image processing method and related device
CN109559288A (en) Image processing method, device, electronic equipment and computer readable storage medium
CN116051439A (en) Method, equipment and storage medium for removing rainbow-like glare of under-screen RGB image by utilizing infrared image
CN111881846B (en) Image processing method, image processing apparatus, image processing device, image processing apparatus, storage medium, and computer program
CN110933314B (en) Focus-following shooting method and related product
CN110689478B (en) Image stylization processing method and device, electronic equipment and readable medium
CN112055961A (en) Shooting method, shooting device and terminal equipment
CN110766631A (en) Face image modification method and device, electronic equipment and computer readable medium
CN113496527B (en) Vehicle surrounding image calibration method, device and system and storage medium
CN110266947B (en) Photographing method and related device
CN110784682B (en) Video processing method and device and electronic equipment
CN117197855A (en) Face key point labeling method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant