CN107578006B - Photo processing method and mobile terminal - Google Patents


Info

Publication number
CN107578006B
Authority
CN
China
Prior art keywords
photo
value
preset
mobile terminal
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710772830.XA
Other languages
Chinese (zh)
Other versions
CN107578006A (en)
Inventor
俞丹凤
曾星星
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201710772830.XA priority Critical patent/CN107578006B/en
Publication of CN107578006A publication Critical patent/CN107578006A/en
Application granted granted Critical
Publication of CN107578006B publication Critical patent/CN107578006B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a photo processing method and a mobile terminal, applied to a mobile terminal equipped with a camera. The method comprises the following steps: if a photographing instruction is received, acquiring a first photo generated from the image captured by the camera; matching the first facial image features in the first photo against a preset face database; marking the facial image features that fail to match the preset face database as first target facial image features; calculating an irrelevance value for the target person subject region corresponding to the first target facial image features and judging whether the irrelevance value is greater than a first preset value; and, if it is, deleting the first photo. A first photo whose irrelevance value exceeds the first preset value is thus deleted directly, reducing the time spent on the mobile terminal screening out photos heavily affected by other people.

Description

Photo processing method and mobile terminal
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a photo processing method and a mobile terminal.
Background
With the rapid development of mobile terminals, they have become indispensable tools that bring great convenience to many aspects of users' lives. For example, people can use the photographing function of a mobile terminal to record beautiful scenes or moments anytime, anywhere. However, when photographing in crowded places, the figures of other people are inevitably captured in the photos. Typically, users screen the photos on the mobile terminal after shooting and delete those heavily affected by other people. Screening such photos on the mobile terminal therefore consumes considerable time.
Disclosure of Invention
The embodiments of the present invention provide a photo processing method and a mobile terminal, aiming to solve the problem that screening photos heavily affected by other people on a mobile terminal takes a long time.
In a first aspect, an embodiment of the present invention provides a photo processing method, which is applied to a mobile terminal including a camera, and includes:
if a photographing instruction is received, acquiring a first photo generated by the image captured by the camera;
matching the first face image characteristics in the first photo with a preset face database;
marking the facial image features that fail to match the preset face database as first target facial image features;
calculating an irrelevance value of a target character main body region corresponding to the first target face image characteristic, and judging whether the irrelevance value is greater than a first preset value or not; wherein the target person subject region is an outline region of a person subject on the first photograph corresponding to the first target facial image feature;
and if the irrelevance value is greater than the first preset value, deleting the first photo.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, including:
the acquisition module is used for acquiring a first photo generated by the image captured by the camera if a photographing instruction is received;
the matching module is used for matching the first facial image features in the first photo with a preset face database;
the first marking module is used for marking the first face image characteristic which is unsuccessfully matched with the preset face database as a first target face image characteristic;
the calculation module is used for calculating an irrelevance value of a target character main body area corresponding to the first target face image characteristic and judging whether the irrelevance value is larger than a first preset value or not; wherein the target person subject region is an outline region of a person subject on the first photograph corresponding to the first target facial image feature;
and the deleting module is used for deleting the first photo if the irrelevance value is larger than a first preset value.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including: the photo processing device comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor realizes the steps in the photo processing method when executing the computer program.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned photo processing method.
Therefore, in the embodiments of the present invention, if the mobile terminal receives a photographing instruction, it acquires a first photo formed from the image captured by the camera; matches the first facial image features included in the first photo against a preset face database; marks the first facial image features that fail to match the preset face database as first target facial image features; calculates an irrelevance value for the target person subject region corresponding to the first target facial image features; and, if the irrelevance value is greater than a first preset value, deletes the first photo. Through these steps, a first photo whose irrelevance value is greater than the first preset value is deleted directly, reducing the time consumed on the mobile terminal screening photos heavily affected by other people.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a flow chart of a photo processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another photo processing method provided by the embodiment of the invention;
fig. 3 is a block diagram of a mobile terminal according to an embodiment of the present invention;
fig. 4 is a block diagram of another mobile terminal according to an embodiment of the present invention;
fig. 5 is a block diagram of another mobile terminal according to an embodiment of the present invention;
fig. 6 is a block diagram of another mobile terminal according to an embodiment of the present invention;
fig. 7 is a block diagram of another mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a photo processing method according to an embodiment of the present invention, as shown in fig. 1, including the following steps:
step 101, collecting a first photo generated by the image captured by the camera if a photographing instruction is received.
The camera may be a camera built into the mobile terminal. The number and position of cameras on the mobile terminal are not specifically limited: for example, there may be 1, 2, 3, or more cameras, and each may be a front camera or a rear camera of the mobile terminal. Either a front or a rear camera may capture the image from which the first photo is generated.
And 102, matching the first face image characteristics in the first photo with a preset face database.
The mobile terminal can detect the first facial image features in the first photo using its face detection function. The principle is as follows: the mobile terminal performs multi-scale window scanning on the first photo to obtain a large number of Haar features, then builds a cascade classifier and classifies each image sample using the adaptive boosting (AdaBoost) framework, thereby detecting the first facial image features contained in the first photo. Each detected first facial image feature is then matched against the facial image features in the preset face database using the Scale-Invariant Feature Transform (SIFT) algorithm, and whether the first facial image feature belongs to the preset face database is recorded. If no matching facial image feature can be found in the preset face database, the first facial image feature is marked as a first target facial image feature.
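The detection-then-match flow above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes face features have already been extracted as fixed-length vectors (e.g. by a Haar-cascade detector plus a descriptor step), and it substitutes a simple Euclidean-distance threshold for SIFT matching; the function name and `match_threshold` parameter are hypothetical.

```python
def find_target_features(detected_features, preset_database, match_threshold=0.6):
    """Return the detected features that fail to match any entry in the
    preset face database (i.e. the 'first target facial image features')."""
    def euclidean(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    targets = []
    for feature in detected_features:
        # A feature "matches" the database if it is close enough to any stored entry.
        matched = any(euclidean(feature, ref) <= match_threshold
                      for ref in preset_database)
        if not matched:
            targets.append(feature)  # no match in the preset database
    return targets
```

In a real system the stored entries would be descriptors of the user and the user's friends and relatives, and anything left in `targets` corresponds to a passer-by.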
The first facial image features may include the facial image features of the user, or of the people the user wants to capture when taking the photo, and may further include the facial image features of passers-by. It should be noted that the facial image features of the user, or of the people the user wants to capture, may be stored in the preset face database, while the facial image features of passers-by are not.
Step 103, marking the facial image features which fail to be matched with the preset facial database as first target facial image features.
The first target facial image features may be the facial image features of passers-by in step 102. For example, when a user takes a photo in a busy scenic spot with the mobile terminal, the photo may include the user (or friends and family) as well as other visitors to the scenic spot. The first facial image features may therefore include the facial image features of the user, friends, and family, plus those of the other visitors. The mobile terminal stores the facial image features of the user, friends, and family in the preset face database, while those of the other visitors are not stored there. Consequently, when the first facial image features are matched against the preset face database, the facial image features that fail to match can be marked as first target facial image features; that is, the first target facial image features are the facial image features of the other visitors. Through these steps, the facial image features of other visitors are successfully distinguished from those of the user and of friends and family.
104, calculating an irrelevance value of a target person main body area corresponding to the first target face image characteristic, and judging whether the irrelevance value is greater than a first preset value; wherein the target person main body region is an outline region of a person main body on the first photograph corresponding to the first target face image feature.
The target person subject region may be the outline region, on the first photo, of the person subject corresponding to the first target facial image features. The person subject may be the body of a person included in the first photo, such as the whole body or the upper body, and the target person subject region refers to the area that this person subject occupies on the first photo.
The irrelevance value is used to reflect the degree to which the target person subject region affects the picture quality of the first photo. For example, when a user intends to photograph only himself but, due to carelessness or a crowded scene, the photo also captures other visitors, the influence on picture quality is severe if those visitors occupy a large area of the first photo or its center. Note that the closer the position occupied by other visitors is to the center of the first photo, the larger the irrelevance value, i.e. the more serious the influence on picture quality; likewise, the larger the area they occupy on the first photo, the larger the irrelevance value.
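The patent's exact formula appears only in unreproduced figure references later in the text, but the monotonicity stated here (a larger region area and closer proximity to the center both increase the irrelevance value) can be sketched with one plausible form. The ratio below is an assumption for illustration, not the patented formula:

```python
def irrelevance_value(region_center, region_area, preset_position, eps=1e-6):
    """One plausible irrelevance measure: grows with the region's area and
    shrinks with its distance from the preset (focus or center) position."""
    dx = region_center[0] - preset_position[0]
    dy = region_center[1] - preset_position[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return region_area / (distance + eps)  # eps avoids division by zero
```

Any formula with the same monotone behavior would serve the comparison against the first and second preset values.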
Step 105, if the irrelevance value is greater than the first preset value, deleting the first photo.
The specific size of the first preset value may be set by the user on the mobile terminal or obtained automatically by the mobile terminal at startup. In addition, while using the mobile terminal, the user can adjust the first preset value as needed. The specific value of the first preset value is not limited here.
The mobile terminal may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, or the like.
Therefore, in the embodiments of the present invention, if the mobile terminal receives a photographing instruction, it acquires a first photo formed from the image captured by the camera; matches the first facial image features included in the first photo against a preset face database; marks the first facial image features that fail to match the preset face database as first target facial image features; calculates an irrelevance value for the target person subject region corresponding to the first target facial image features; and, if the irrelevance value is greater than a first preset value, deletes the first photo. Through these steps, a first photo whose irrelevance value is greater than the first preset value is deleted directly, reducing the time consumed screening photos heavily affected by other people.
Referring to fig. 2, fig. 2 is a flowchart of another photo processing method according to an embodiment of the present invention. The main difference from the previous embodiment is that, when the irrelevance value is less than or equal to the first preset value but greater than a second preset value, the distance between the center of the target person subject region and a preset position of the first photo is detected and compared with a third preset value; this embodiment also specifies how the irrelevance value is calculated. As shown in fig. 2, the method comprises the following steps:
step 201, if a photographing instruction is received, acquiring a first photo generated by capturing an image by the camera.
The camera may be a camera built into the mobile terminal. The number and position of cameras on the mobile terminal are not specifically limited: for example, there may be 1, 2, 3, or more cameras, and each may be a front camera or a rear camera of the mobile terminal. Either a front or a rear camera may capture the image from which the first photo is generated.
Step 202, matching the first face image feature in the first photo with a preset face database.
The mobile terminal can detect the first facial image features in the first photo using its face detection function. The principle is as follows: the mobile terminal performs multi-scale window scanning on the first photo to obtain a large number of Haar features, then builds a cascade classifier and classifies each image sample using the AdaBoost framework, thereby detecting the first facial image features contained in the first photo. Each detected first facial image feature is then matched against the facial image features in the preset face database using the SIFT algorithm, and whether the first facial image feature belongs to the preset face database is recorded. If no matching facial image feature can be found in the preset face database, the first facial image feature is marked as a first target facial image feature.
The first facial image features may include the facial image features of the user, or of the people the user wants to capture when taking the photo, and may further include the facial image features of passers-by. It should be noted that the facial image features of the user, or of the people the user wants to capture, may be stored in the preset face database, while the facial image features of passers-by are not.
Step 203, marking the facial image features which fail to be matched with the preset facial database as first target facial image features.
The first target facial image features may be the facial image features of passers-by in step 202. For example, when a user takes a photo in a busy scenic spot with the mobile terminal, the photo may include the user (or friends and family) as well as other visitors to the scenic spot. The first facial image features may therefore include the facial image features of the user, friends, and family, plus those of the other visitors. The mobile terminal stores the facial image features of the user, friends, and family in the preset face database, while those of the other visitors are not stored there. Consequently, when the first facial image features are matched against the preset face database, the facial image features that fail to match can be marked as first target facial image features; that is, the first target facial image features are the facial image features of the other visitors. Through these steps, the facial image features of other visitors are successfully distinguished from those of the user and of friends and family.
Step 204, calculating an irrelevance value of the target person main body region corresponding to the first target face image characteristic, and judging whether the irrelevance value is greater than a first preset value; wherein the target person main body region is an outline region of a person main body on the first photograph corresponding to the first target face image feature.
The target person subject region may be the outline region, on the first photo, of the person subject corresponding to the first target facial image features. The person subject may be the body of a person included in the first photo, such as the whole body or the upper body, and the target person subject region refers to the area that this person subject occupies on the first photo.
The irrelevance value may reflect the degree to which the target person subject region affects the first photo. For example, when a user intends to photograph only himself but other visitors are captured as well, occupying a large area of the photo or its center position, the picture quality is seriously affected. Note that the closer the position occupied by other visitors is to the center of the photo, the larger the irrelevance value; and the larger the area they occupy on the photo, the larger the irrelevance value.
Step 205, if the irrelevance value is greater than the first preset value, deleting the first photo.
The specific size of the first preset value may be set by the user on the mobile terminal or obtained automatically by the mobile terminal at startup. In addition, while using the mobile terminal, the user can adjust the first preset value as needed. The specific value of the first preset value is not limited here.
It should be noted that steps 206, 207, 208, and 209 are optional.
Step 206, if the irrelevance value is less than or equal to the first preset value and greater than a second preset value, detecting the distance between the center position of the target person subject region and a preset position of the first photo, where the preset position is the focus position or the center position.
The specific size of the second preset value may be set by the user on the mobile terminal or obtained automatically by the mobile terminal at startup. In addition, while using the mobile terminal, the user can adjust the second preset value as needed. The specific value of the second preset value is not limited here.
The mobile terminal may set one corner of the first photo as the origin of coordinates, establish a coordinate system from that origin, and scan the horizontal and vertical coordinates of the center position of the target person subject region in the first photo.
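With one corner of the photo as the coordinate origin, the distance check in this step reduces to a Euclidean distance in pixel coordinates. A minimal sketch (the function name and threshold are illustrative, not from the patent):

```python
def exceeds_distance_threshold(region_center, preset_position, third_preset_value):
    """True if the target person subject region's center lies farther from the
    preset position (focus or center of the photo) than the third preset value."""
    dx = region_center[0] - preset_position[0]
    dy = region_center[1] - preset_position[1]
    return (dx * dx + dy * dy) ** 0.5 > third_preset_value
```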
Step 207, judging whether the distance is greater than a third preset value.
The specific size of the third preset value may be set by the user on the mobile terminal or obtained automatically by the mobile terminal at startup. In addition, while using the mobile terminal, the user can adjust the third preset value as needed. The specific value of the third preset value is not limited here.
Step 208, if the distance is greater than the third preset value, blurring the target person subject region.
Blurring may mean rendering the target person subject region indistinct, so that when people view the first photo their attention falls on the parts that are not blurred, achieving the effect of highlighting the subject.
Step 209, if the distance is less than or equal to the third preset value, marking the first photo.
If the distance is less than or equal to the third preset value, the first photo that meets this condition is marked, so that when the user wants to delete it, all marked first photos can be deleted with one tap.
In this embodiment, marked first photos can be deleted with one tap through the above steps, which improves the efficiency of deleting them, makes the user's operation more convenient, and makes the mobile terminal more intelligent.
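Steps 204 through 209 form a single decision flow, which can be summarized as below. The preset values are hypothetical parameters, and "keep" covers the implicit case where the irrelevance value is at or below the second preset value:

```python
def decide_photo_action(irrelevance, distance, first_preset, second_preset, third_preset):
    """Map a photo's irrelevance value and region-to-preset-position distance
    to the action taken in steps 205-209."""
    if irrelevance > first_preset:
        return "delete"            # step 205: delete the first photo outright
    if irrelevance > second_preset:
        if distance > third_preset:
            return "blur"          # step 208: blur the target person subject region
        return "mark"              # step 209: mark the photo for one-tap deletion
    return "keep"                  # irrelevance low enough: photo left untouched
```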
Optionally, before acquiring the first photo generated from the image captured by the camera upon receiving a photographing instruction, the method further includes:
scanning a second face image feature in a second photo acquired by the mobile terminal;
and storing the second face image characteristics in the preset face database.
The second photo may be a photo containing the facial image features of the user, or of the user's friends and relatives. Optionally, the second photo may be a photo stored in the album of the mobile terminal, or a photo containing real facial image features scanned by the mobile terminal from a social tool. For example, in a social tool, the avatar is generally a photo containing a user's portrait; when the mobile terminal determines that an avatar belongs to a friend of the user, it can scan the photo containing that friend's facial image features and automatically add them to the preset face database. Of course, the user may also add the facial image features of friends to the preset face database manually.
In this embodiment, the above steps make the preset face database more complete and reduce the probability of mistakenly marking the facial image features of the user's relatives and friends as first target facial image features.
Optionally, the calculating the independence value of the target person main body region corresponding to the first target face image feature includes:
analyzing the human body outline of the main body area of the target person based on a clustering method;
calculating the geometric center and the area of the human body outline in the first picture;
and calculating the irrelevance value of the target person subject region from the geometric center and the area according to a preset formula.
The geometric center and the area of the human body contour in the first photo are calculated as follows: one corner of the first photo may be set as the origin of coordinates, a coordinate system established from that origin, the horizontal and vertical coordinates of the human body contour in the first photo scanned, and the geometric center and the area of the contour calculated from those coordinates.
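Given the contour as a polygon of vertices in that corner-origin coordinate system, the geometric center and area can be computed with the standard shoelace formula. A self-contained sketch (the clustering-based contour extraction itself is assumed already done):

```python
def contour_area_and_centroid(points):
    """Area and geometric center of a simple polygon given as (x, y) vertices,
    computed via the shoelace formula."""
    area2 = cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]          # wrap around to close the polygon
        cross = x0 * y1 - x1 * y0
        area2 += cross                         # twice the signed area
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    area = area2 / 2.0
    cx /= 6.0 * area
    cy /= 6.0 * area
    return abs(area), (cx, cy)
```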
The closer the obtained geometric center is to the preset position, the larger the irrelevance value; the larger the obtained area, the larger the irrelevance value.
In this embodiment, the geometric center and the area of the human body contour in the first photo are calculated by establishing a coordinate system, and the irrelevance value is calculated from the geometric center and the area, making the irrelevance value more accurate and reducing error.
Optionally, in this embodiment, the preset formula is the one shown in the formula figure (reference GDA0002490361980000091), in which f(n) represents the irrelevance value, x represents the abscissa of the geometric center, y represents the ordinate of the geometric center, and S(x, y) represents the area of the human body contour in the first photo; the two quantities shown in figure references GDA0002490361980000092 and GDA0002490361980000093 are constants; x0 represents the abscissa of the preset position and y0 its ordinate.
When there is one target person subject region in the first photo, the above formula may be used directly. When there are n target person subject regions, where n is an integer greater than 1, the total irrelevance value may be taken as the sum of the per-region values, i.e. F = Σ f(n), where f(n) represents the irrelevance value of the n-th target person subject region.
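Reading the garbled expression as a sum over regions, the multi-region case would simply accumulate the per-region irrelevance values; a sketch under that assumption:

```python
def total_irrelevance(per_region_values):
    """Total irrelevance value F = sum of f(n) over all n target person
    subject regions detected in the first photo."""
    return sum(per_region_values)
```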
In this embodiment, on the basis of the embodiment shown in fig. 1, steps are added for detecting the distance between the center position of the target person subject region and the preset position of the first photo, judging whether that distance is greater than the third preset value, and specifying how the irrelevance value is calculated. This reduces the time the mobile terminal consumes screening photos heavily affected by other people and improves the accuracy of deleting them.
Referring to fig. 3, fig. 3 is a structural diagram of a mobile terminal according to an embodiment of the present invention, which can implement details of a photo processing method in the foregoing embodiment and achieve the same effect. As shown in fig. 3, the mobile terminal 300 includes:
the acquisition module 301 is configured to acquire a first photo generated by capturing an image by the camera if a photographing instruction is received;
a matching module 302, configured to match the first facial image features in the first photo with a preset face database;
a first marking module 303, configured to mark, as a first target face image feature, a face image feature that fails to be matched with the preset face database;
a calculating module 304, configured to calculate an irrelevance value of the target person main body region corresponding to the first target face image feature, and determine whether the irrelevance value is greater than a first preset value; wherein the target person subject region is an outline region of a person subject on the first photograph corresponding to the first target facial image feature;
a deleting module 305, configured to delete the first photo if the irrelevance value is greater than a first preset value.
Optionally, as shown in fig. 4, the mobile terminal 300 further includes:
a detecting module 306, configured to detect a distance between a center position of the target person main body area and a preset position of the first photo if the irrelevance value is smaller than or equal to a first preset value and the irrelevance value is greater than a second preset value, where the preset position includes a focus position or a center position;
a judging module 307, configured to judge whether the distance is greater than a third preset value;
a blurring processing module 308, configured to perform blurring processing on the target person main body area if the distance is greater than the third preset value;
a second marking module 309, configured to mark the first photo if the distance is less than or equal to the third preset value.
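The branch implemented by these four modules can be sketched as follows, assuming Euclidean distance and purely illustrative first, second, and third preset values (0.5, 0.2, and 100 pixels here; the patent does not give concrete numbers):

```python
import math

def classify_region(irrelevance, center, preset_pos,
                    first=0.5, second=0.2, third=100.0):
    """Decide what to do with one target person main body region.

    center:     centre position of the subject region, (x, y)
    preset_pos: focus position or centre position of the first photo
    """
    if irrelevance > first:
        return "delete"
    if irrelevance > second:                    # second_preset < value <= first_preset
        dist = math.dist(center, preset_pos)    # detecting module
        if dist > third:                        # judging module
            return "blur"                       # blurring processing module
        return "mark"                           # second marking module
    return "keep"
```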
Optionally, as shown in fig. 5, the mobile terminal 300 further includes:
the scanning module 3010 is configured to scan a second face image feature in a second photo obtained by the mobile terminal;
a storage module 3011, configured to store the second face image feature in the preset face database.
Optionally, as shown in fig. 6, the calculating module 304 includes:
an analysis submodule 3041, configured to analyze the human body contour of the target person main body region based on a clustering method;
a first calculating submodule 3042, configured to calculate the geometric center and the area of the human body contour in the first photo;
a second calculating submodule 3043, configured to calculate the geometric center and the area according to a preset formula to obtain an irrelevance value of the target person main body region.
Optionally, the preset formula is the expression shown in the original as an image (not reproduced here), in which f(n) represents the irrelevance value, x represents the abscissa of the geometric center, y represents the ordinate of the geometric center, S(x, y) represents the area of the human body contour in the first photo, two further coefficients in the expression are constants, x0 represents the abscissa of the preset position, and y0 represents the ordinate of the preset position.
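The formula itself survives only as an image in the source, so the exact expression cannot be transcribed. Purely as a hypothetical instantiation consistent with the variables just listed — the geometric center (x, y), the preset position (x0, y0), the contour area S(x, y), and two constants, here named a and b — one might write:

```python
import math

def irrelevance_value(x, y, area, x0, y0, a=1.0, b=1.0):
    # Assumed form, NOT the patent's actual (untranscribable) formula:
    #   f(n) = a * dist((x, y), (x0, y0)) / (b * S(x, y))
    # i.e. irrelevance grows as the subject drifts from the preset position
    # and shrinks as the subject occupies more of the frame.
    return a * math.hypot(x - x0, y - y0) / (b * area)
```

Any expression that grows with off-center distance and shrinks with subject area would fit the surrounding description equally well; this one is only a placeholder for reasoning about the thresholds.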
Optionally, the second photo is a photo stored in an album of the mobile terminal, or a photo, scanned by the mobile terminal from a social tool, that contains real face image features.
It should be noted that, in this embodiment, the mobile terminal 300 may be a mobile terminal according to any implementation manner in the method embodiment of the present invention, and any implementation manner of the mobile terminal in the method embodiment of the present invention may be implemented by the mobile terminal 300 in this embodiment, so as to achieve the same beneficial effects, and details are not described here again.
Referring to fig. 7, fig. 7 is a structural diagram of a mobile terminal according to an embodiment of the present invention, which can implement the details of the photo processing method in the foregoing embodiment and achieve the same effects. As shown in fig. 7, the mobile terminal 700 includes: at least one processor 701, a memory 702, at least one network interface 704, and a user interface 703. The various components in the mobile terminal 700 are coupled together by a bus system 705. It is understood that the bus system 705 is used to enable communications among these components. In addition to a data bus, the bus system 705 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are labeled in fig. 7 as the bus system 705.
The user interface 703 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, track ball, touch pad, or touch screen, etc.).
It is to be understood that the memory 702 in embodiments of the present invention may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable Programmable ROM (EPROM), an Electrically Erasable Programmable ROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 702 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 702 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof: an operating system 7021 and application programs 7022.
The operating system 7021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application 7022 includes various applications, such as a Media Player (Media Player), a Browser (Browser), and the like, for implementing various application services. Programs that implement methods in accordance with embodiments of the present invention can be included within application program 7022.
In the embodiment of the present invention, by calling a program or instruction stored in the memory 702, specifically a program or instruction stored in the application 7022, the processor 701 can implement the following steps when executing the computer program:
if a photographing instruction is received, acquiring a first photo generated by the image captured by the camera;
matching a first face image feature in the first photo with a preset face database;
marking a face image feature that fails to match the preset face database as a first target face image feature;
calculating an irrelevance value of a target character main body region corresponding to the first target face image characteristic, and judging whether the irrelevance value is greater than a first preset value or not; wherein the target person subject region is an outline region of a person subject on the first photograph corresponding to the first target facial image feature;
and if the irrelevance value is larger than a first preset value, deleting the first photo.
The method disclosed in the above embodiments of the present invention may be applied to the processor 701, or implemented by the processor 701. The processor 701 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 701 or by instructions in the form of software. The processor 701 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, or EPROM, or a register. The storage medium is located in the memory 702, and the processor 701 reads the information in the memory 702 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, the computer program may further implement the following steps when executed by the processor 701: after calculating the irrelevance value of the target person main body area corresponding to the first target face image characteristic and judging whether the irrelevance value is greater than a first preset value, the method further comprises the following steps:
if the irrelevance value is smaller than or equal to a first preset value and the irrelevance value is larger than a second preset value, detecting the distance between the center position of the main body area of the target person and the preset position of the first photo, wherein the preset position comprises a focusing position or a center position;
judging whether the distance is larger than a third preset value or not;
if the distance is larger than the third preset value, blurring the target person main body area;
and if the distance is smaller than or equal to the third preset value, marking the first photo.
Optionally, the computer program may further implement the following steps when executed by the processor 701: if receiving a photographing instruction, before acquiring a first photo generated by the image captured by the camera, the method further comprises the following steps:
scanning a second face image feature in a second photo acquired by the mobile terminal;
and storing the second face image feature in the preset face database.
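These two preparatory steps — scanning face image features from previously obtained "second photos" and storing them in the preset face database — can be sketched as below. The class name, the tuple-based feature representation, and exact-match lookup are illustrative assumptions; a real database would store embeddings and match by similarity:

```python
class PresetFaceDB:
    """Toy stand-in for the preset face database built from second photos."""

    def __init__(self):
        self._features = []

    def store(self, feature):                 # storage step
        self._features.append(tuple(feature))

    def scan_photo(self, photo_features):     # scanning step: one second photo
        for f in photo_features:
            self.store(f)

    def __contains__(self, feature):
        return tuple(feature) in self._features
```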
Optionally, the computer program may further implement the following steps when executed by the processor 701: the calculating the irrelevance value of the target person main body region corresponding to the first target face image feature comprises:
analyzing the human body outline of the target person main body region based on a clustering method;
calculating the geometric center and the area of the human body outline in the first photo;
and calculating the geometric center and the area according to a preset formula to obtain an irrelevance value of the target person main body region.
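Assuming the clustering step yields a binary mask over the photo's pixels (an assumption — the patent does not specify the contour representation), the geometric center and area of the second and third sub-steps reduce to:

```python
def center_and_area(mask):
    """mask: 2D list of 0/1 pixels; returns ((x, y) geometric centre, area).

    The area is the pixel count of the contour region; the geometric centre
    is the mean pixel coordinate (the centroid of the region).
    """
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    area = len(pts)
    cx = sum(x for x, _ in pts) / area
    cy = sum(y for _, y in pts) / area
    return (cx, cy), area
```

The resulting centre and area would then be fed into the preset formula to obtain the irrelevance value.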
Optionally, the computer program may further implement the following steps when executed by the processor 701: the preset formula is the expression shown in the original as an image (not reproduced here), in which f(n) represents the irrelevance value, x represents the abscissa of the geometric center, y represents the ordinate of the geometric center, S(x, y) represents the area of the human body contour in the first photo, two further coefficients in the expression are constants, x0 represents the abscissa of the preset position, and y0 represents the ordinate of the preset position.
Optionally, the second photo is a photo stored in an album of the mobile terminal, or a photo, scanned by the mobile terminal from a social tool, that contains real face image features.
It should be noted that, in this embodiment, the mobile terminal 700 may be a mobile terminal according to any implementation manner in the method embodiment of the present invention, and any implementation manner of the mobile terminal in the method embodiment of the present invention may be implemented by the mobile terminal 700 in this embodiment, so as to achieve the same beneficial effects, and details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the following steps:
if a photographing instruction is received, acquiring a first photo generated by the image captured by the camera;
matching a first face image feature in the first photo with a preset face database;
marking a face image feature that fails to match the preset face database as a first target face image feature;
calculating an irrelevance value of a target character main body region corresponding to the first target face image characteristic, and judging whether the irrelevance value is greater than a first preset value or not; wherein the target person subject region is an outline region of a person subject on the first photograph corresponding to the first target facial image feature;
and if the irrelevance value is larger than a first preset value, deleting the first photo.
Optionally, the computer program when executed may further implement the steps of:
if the irrelevance value is smaller than or equal to a first preset value and the irrelevance value is larger than a second preset value, detecting the distance between the center position of the main body area of the target person and the preset position of the first photo, wherein the preset position comprises a focusing position or a center position;
judging whether the distance is larger than a third preset value or not;
if the distance is larger than the third preset value, blurring the target person main body area;
and if the distance is smaller than or equal to the third preset value, marking the first photo.
Optionally, the computer program when executed may further implement the steps of:
scanning a second face image feature in a second photo acquired by the mobile terminal;
and storing the second face image feature in the preset face database.
Optionally, the computer program when executed may further implement the steps of:
the calculating the irrelevance value of the target person main body region corresponding to the first target face image feature comprises:
analyzing the human body outline of the target person main body region based on a clustering method;
calculating the geometric center and the area of the human body outline in the first photo;
and calculating the geometric center and the area according to a preset formula to obtain an irrelevance value of the target person main body region.
Optionally, the preset formula is the expression shown in the original as an image (not reproduced here), in which f(n) represents the irrelevance value, x represents the abscissa of the geometric center, y represents the ordinate of the geometric center, S(x, y) represents the area of the human body contour in the first photo, two further coefficients in the expression are constants, x0 represents the abscissa of the preset position, and y0 represents the ordinate of the preset position.
Optionally, the second photo is a photo stored in an album of the mobile terminal, or a photo, scanned by the mobile terminal from a social tool, that contains real face image features.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

1. A photo processing method applied to a mobile terminal comprising a camera, characterized by comprising the following steps:
if a photographing instruction is received, acquiring a first photo generated by the image captured by the camera;
matching a first face image feature in the first photo with a preset face database;
marking the face image feature that fails to match the preset face database as a first target face image feature;
calculating an irrelevance value of a target character main body region corresponding to the first target face image characteristic, and judging whether the irrelevance value is greater than a first preset value or not; wherein the target person subject region is an outline region of a person subject on the first photograph corresponding to the first target facial image feature;
and if the irrelevance value is larger than a first preset value, deleting the first photo.
2. The method of claim 1, wherein after calculating the irrelevance value of the subject region of the target person corresponding to the first target facial image feature and determining whether the irrelevance value is greater than a first predetermined value, the method further comprises:
if the irrelevance value is smaller than or equal to a first preset value and the irrelevance value is larger than a second preset value, detecting the distance between the center position of the main body area of the target person and the preset position of the first photo, wherein the preset position comprises a focusing position or a center position;
judging whether the distance is larger than a third preset value or not;
if the distance is larger than the third preset value, blurring the target person main body area;
and if the distance is smaller than or equal to the third preset value, marking the first photo.
3. The method of claim 1 or 2, wherein before the step of capturing the first photo generated by the image captured by the camera if the photographing instruction is received, the method further comprises:
scanning a second face image feature in a second photo acquired by the mobile terminal;
and storing the second face image feature in the preset face database.
4. The method of claim 2, wherein the calculating the irrelevance value of the target person main body region corresponding to the first target face image feature comprises:
analyzing the human body outline of the target person main body region based on a clustering method;
calculating the geometric center and the area of the human body outline in the first photo;
and calculating the geometric center and the area according to a preset formula to obtain an irrelevance value of the target person main body region.
5. The method of claim 4, wherein the preset formula is the expression shown in the original as an image (not reproduced here), in which f(n) represents the irrelevance value, x represents the abscissa of the geometric center, y represents the ordinate of the geometric center, S(x, y) represents the area of the human body contour in the first photo, two further coefficients in the expression are constants, x0 represents the abscissa of the preset position, and y0 represents the ordinate of the preset position.
6. The photo processing method according to claim 3, wherein the second photo is a photo stored in an album of the mobile terminal, or a photo, scanned by the mobile terminal from a social tool, that contains real face image features.
7. A mobile terminal comprising a camera, wherein the mobile terminal comprises:
the acquisition module is used for acquiring a first photo generated by the image captured by the camera if a photographing instruction is received;
the matching module is used for matching a first face image feature in the first photo with a preset face database;
the first marking module is used for marking the first face image feature which fails to match the preset face database as a first target face image feature;
the calculation module is used for calculating an irrelevance value of a target character main body area corresponding to the first target face image characteristic and judging whether the irrelevance value is larger than a first preset value or not; wherein the target person subject region is an outline region of a person subject on the first photograph corresponding to the first target facial image feature;
and the deleting module is used for deleting the first photo if the irrelevance value is larger than a first preset value.
8. The mobile terminal of claim 7, wherein the mobile terminal further comprises:
a detection module, configured to detect a distance between a center position of the target person main body area and a preset position of the first photograph if the irrelevance value is less than or equal to a first preset value and the irrelevance value is greater than a second preset value, where the preset position includes a focus position or a center position;
the judging module is used for judging whether the distance is larger than a third preset value or not;
the blurring processing module is used for blurring the target person main body area if the distance is larger than the third preset value;
and the second marking module is used for marking the first photo if the distance is less than or equal to the third preset value.
9. The mobile terminal according to claim 7 or 8, characterized in that the mobile terminal further comprises:
the scanning module is used for scanning second face image characteristics in a second photo acquired by the mobile terminal;
and the storage module is used for storing the second face image feature in the preset face database.
10. The mobile terminal of claim 8, wherein the computing module comprises:
the analysis submodule is used for analyzing the human body outline of the target person main body region based on a clustering method;
the first calculation submodule is used for calculating the geometric center and the area of the human body outline in the first photo;
and the second calculation submodule is used for calculating the geometric center and the area according to a preset formula to obtain an irrelevance value of the target person main body region.
11. The mobile terminal of claim 10, wherein the preset formula is the expression shown in the original as an image (not reproduced here), in which f(n) represents the irrelevance value, x represents the abscissa of the geometric center, y represents the ordinate of the geometric center, S(x, y) represents the area of the human body contour in the first photo, two further coefficients in the expression are constants, x0 represents the abscissa of the preset position, and y0 represents the ordinate of the preset position.
12. The mobile terminal according to claim 9, wherein the second photo is a photo stored in an album of the mobile terminal, or a photo, scanned by the mobile terminal from a social tool, that contains real face image features.
13. A mobile terminal, comprising: memory, processor and computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps in the photo processing method according to any one of claims 1 to 6.
14. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the photo processing method according to any one of claims 1 to 6.
CN201710772830.XA 2017-08-31 2017-08-31 Photo processing method and mobile terminal Active CN107578006B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710772830.XA CN107578006B (en) 2017-08-31 2017-08-31 Photo processing method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710772830.XA CN107578006B (en) 2017-08-31 2017-08-31 Photo processing method and mobile terminal

Publications (2)

Publication Number Publication Date
CN107578006A CN107578006A (en) 2018-01-12
CN107578006B true CN107578006B (en) 2020-06-23

Family

ID=61030678

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710772830.XA Active CN107578006B (en) 2017-08-31 2017-08-31 Photo processing method and mobile terminal

Country Status (1)

Country Link
CN (1) CN107578006B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109040594B (en) * 2018-08-24 2020-12-18 创新先进技术有限公司 Photographing method and device
CN109271963A (en) * 2018-10-10 2019-01-25 杭州德肤修生物科技有限公司 The cosmetic industry method for quantitatively evaluating compared based on time-series image

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW525375B (en) * 2000-09-26 2003-03-21 Inst Information Industry Digital image processing device and the digital camera using the same
JP4681863B2 (en) * 2004-11-30 2011-05-11 キヤノン株式会社 Image processing apparatus and control method thereof
JP2010226558A (en) * 2009-03-25 2010-10-07 Sony Corp Apparatus, method, and program for processing image
CN102542299B (en) * 2011-12-07 2015-03-25 惠州Tcl移动通信有限公司 Face recognition method, device and mobile terminal capable of recognizing face
CN105260732A (en) * 2015-11-26 2016-01-20 小米科技有限责任公司 Image processing method and device
CN105912997B (en) * 2016-04-05 2019-05-28 福建兴宇信息科技有限公司 Face recognition method and system

Also Published As

Publication number Publication date
CN107578006A (en) 2018-01-12

Similar Documents

Publication Publication Date Title
US10706892B2 (en) Method and apparatus for finding and using video portions that are relevant to adjacent still images
US8391645B2 (en) Detecting orientation of digital images using face detection information
US8081844B2 (en) Detecting orientation of digital images using face detection information
US20110305394A1 (en) Object Detection Metadata
WO2018112788A1 (en) Image processing method and device
CN108200335B (en) Photographing method based on double cameras, terminal and computer readable storage medium
CN109274891B (en) Image processing method, device and storage medium thereof
JP2007272685A (en) Automatic trimming method, device and program
KR20140090078A (en) Method for processing an image and an electronic device thereof
US11514713B2 (en) Face quality of captured images
CN111163265A (en) Image processing method, image processing device, mobile terminal and computer storage medium
JP2016212784A (en) Image processing apparatus and image processing method
US20200267331A1 (en) Capturing a photo using a signature motion of a mobile device
CN112017137A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107578006B (en) Photo processing method and mobile terminal
JP2018046337A (en) Information processing device, program and control method
WO2020119315A1 (en) Face acquisition method and related product
CN109547678B (en) Processing method, device, equipment and readable storage medium
CN116958795A (en) Method and device for identifying flip image, electronic equipment and storage medium
CN111402391A (en) User face image display method, display device and corresponding storage medium
US10282633B2 (en) Cross-asset media analysis and processing
JP2014085845A (en) Moving picture processing device, moving picture processing method, program and integrated circuit
KR102628714B1 (en) Photography system for surpporting to picture for mobile terminal and method thereof
CN116980744B (en) Feature-based camera tracking method and device, electronic equipment and storage medium
CN115334241B (en) Focusing control method, device, storage medium and image pickup apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant