CN110222567B - Image processing method and device

Info

Publication number
CN110222567B
Authority
CN
China
Prior art keywords
face object
face
target
image
target image
Legal status
Active
Application number
CN201910365351.5A
Other languages
Chinese (zh)
Other versions
CN110222567A (en)
Inventor
查志远
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201910365351.5A
Publication of CN110222567A
Application granted
Publication of CN110222567B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the invention provide an image processing method and device, relating to the field of communication technology and aiming to solve the problem that existing image processing methods cannot simultaneously meet the requirements of the multiple people in a photo. The image processing method comprises the following steps: acquiring a target image; identifying at least one face object in the target image; sending the face object to a target device corresponding to the face object; receiving the face object processed by the target device; and updating the corresponding face object in the target image according to the received processed face object. The image processing method in the embodiments of the invention is applied to a device.

Description

Image processing method and device
Technical Field
Embodiments of the invention relate to the field of communication technology, and in particular to an image processing method and device.
Background
At present, photographing functions are increasingly powerful, but the pursuit of beauty is endless, and a photo is usually given post-retouching, particularly portrait retouching, after it is taken. Portrait refinement makes people look more attractive and flawless through careful detail work such as skin smoothing, color grading, and curve adjustment.
Group photos are often taken when friends meet, and if portrait retouching is applied afterwards, a dilemma arises: either one person retouches the whole picture, or each person retouches a separate copy. If one person retouches everything, the workload is huge and the time cost is high, and because individual aesthetics differ, the result may not meet everyone's expectations. If each person retouches a separate copy, several versions of the same original arise, and when they are later posted to social application platforms the same people look different in each version, causing unnecessary misunderstanding.
Therefore, existing image processing methods cannot simultaneously meet the requirements of the multiple people in a photo.
Disclosure of Invention
Embodiments of the invention provide an image processing method, which aims to solve the problem that existing image processing methods cannot simultaneously meet the requirements of the multiple people in a photo.
In order to solve the technical problem, the invention is realized as follows:
An embodiment of the invention provides an image processing method, comprising the following steps: acquiring a target image; identifying at least one face object in the target image; sending the face object to a target device corresponding to the face object; receiving the face object processed by the target device; and updating the corresponding face object in the target image according to the received processed face object.
An embodiment of the present invention further provides an apparatus, including: an image acquisition module, configured to acquire a target image; a face recognition module, configured to identify at least one face object in the target image; a face sending module, configured to send the face object to a target device corresponding to the face object; a face receiving module, configured to receive the face object processed by the target device; and a face updating module, configured to update the corresponding face object in the target image according to the received processed face object.
An embodiment of the present invention further provides an apparatus, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program implements the steps of the image processing method when executed by the processor.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the image processing method are implemented.
In embodiments of the invention, when a device performs face image processing on a group photo of multiple people, after the target image of the group photo is obtained, the face objects in the target image are first identified, yielding at least one face object, and each face object is then sent to its corresponding target device, so that each person in the group photo can process his or her own face image on his or her own mobile terminal according to personal preference and return the processed face object. Further, in the image processing procedure of this embodiment, the corresponding initial face objects in the target image are updated based on the face objects returned by the devices, completing the face image processing for the multiple people in the group photo. As the above process shows, the processing of each face object in the group photo meets that person's expectation, thereby satisfying the requirements of multiple people.
Drawings
FIG. 1 is a first flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a second flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a third flowchart of an image processing method according to an embodiment of the present invention;
FIG. 4 is a fourth flowchart of an image processing method according to an embodiment of the present invention;
FIG. 5 is a fifth flowchart of an image processing method according to an embodiment of the present invention;
FIG. 6 is a first block diagram of an apparatus according to an embodiment of the present invention;
FIG. 7 is a second block diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flow chart of an image processing method according to an embodiment of the present invention is shown, including:
step S1: and acquiring a target image.
In the image processing method of the present embodiment, first, one image to be processed is acquired as a target image.
Among them, the present embodiment preferably performs face optimization processing on the target image.
Preferably, the target image is a photo album including a plurality of persons.
Preferably, the image processing method in the present embodiment is applied to a server. Illustratively, a user selects a target image from a mobile phone, and uploads the target image to a server through a specified application platform in the mobile phone, so that the server acquires the target image and creates a new retouching task T.
For convenience of explanation, the present embodiment refers to the initiator of the retouching task T as the first user. Preferably, the first user may upload a target image to the server through the retouching application platform of the first mobile terminal.
Step S2: identify at least one face object in the target image.
Preferably, the face objects in the target image are recognized using face recognition technology.
Each photographed person corresponds to one face object.
Preferably, the face object of at least one photographed person in the target image is recognized; further, the face objects of all persons in the target image may be recognized.
With the progress of technology and the development of society, face recognition has become more and more widespread. Face recognition is a biometric technique that identifies a person based on facial feature information. With remarkable improvements in precision, stability, and speed, face recognition has gradually become practical and is applied in many fields such as transportation, finance, public security, border inspection, and education, making life faster and more convenient.
Preferably, in this embodiment, the server performs face detection on the target image using face recognition technology, crops the different face objects out of the target image, marks them on the target image, and designates the corresponding face objects as retouching subtasks t1, t2, t3, ….
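As a non-authoritative illustration of steps S1 and S2, the following minimal Python sketch detects and crops the face objects into subtasks. The choice of the open-source face_recognition and Pillow libraries, and all names in the sketch, are assumptions for illustration; the patent does not prescribe any particular implementation.

    # Minimal sketch of step S2: detect the faces in the target image and cut
    # each one into a retouching subtask t1, t2, t3, ...
    import face_recognition
    from PIL import Image

    def split_into_subtasks(image_path):
        image = face_recognition.load_image_file(image_path)  # RGB numpy array
        locations = face_recognition.face_locations(image)    # (top, right, bottom, left)
        subtasks = {}
        for i, (top, right, bottom, left) in enumerate(locations, start=1):
            crop = Image.fromarray(image[top:bottom, left:right])
            subtasks[f"t{i}"] = {"bbox": (left, top, right, bottom), "crop": crop}
        return subtasks

Each subtask records both the cropped face picture to be sent out and the bounding box needed later, in step S5, to paste the processed picture back.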
Step S3: send the face object to the target device corresponding to the face object.
This step can be implemented in a number of ways, for example:
In the first case, each face object in the target image may be returned by the server to the first mobile terminal, so that the first user can manually identify each face object on the first mobile terminal and, by manual operation, send each identified face object to its corresponding target device.
In the second case, the first user may set face objects and their corresponding target device information in the first mobile terminal in advance and upload them to the server, so that the server can automatically send each identified face object to its corresponding target device.
In the third case, based on the first user's manual sending operations, the server may automatically record historical sending data containing face objects and their corresponding target device information, so that the server can automatically send each recognized face object to its corresponding target device.
These three cases cover both automatic and manual sending schemes. The manual scheme gives the user more choice and meets user needs; the automatic scheme simplifies manual operation, ensures that face objects are sent accurately, and avoids privacy leaks caused by sending to the wrong person, failing to send, and the like.
Further, in practical applications, the number of target devices to send to also varies.
For example, the target devices may correspond one-to-one with the face objects. If the target image contains 5 subjects, 5 face objects can be recognized and sent to 5 corresponding target devices, respectively; the first user who initiated the retouching task also receives his or her own face object.
For another example, the first user who initiated the retouching task may retouch his or her own face object directly on the target image, or may download it for processing. In this case, the number of target devices is smaller than the number of face objects.
For another example, some users may not need face image processing, and their face objects are not sent. In this case, too, the number of target devices is smaller than the number of face objects.
Likewise, if no target device corresponding to a face object can be found, that face object is not sent, and the number of target devices is smaller than the number of face objects.
If a face object unrelated to the intended subjects is identified, it may be disregarded. For example, in a photo taken at a scenic spot, the background may contain many tourists, and their face objects need not be considered.
Preferably, a picture of each face object is cropped from the target image according to the recognition result, so that in this step each face object is transmitted in the form of a picture.
Step S4: receive the face object processed by the target device.
After step S3, once a target device receives a face object, the user of that device, that is, the person to whom the face object belongs, can refine the face according to his or her own preference and then return the processed face object.
Preferably, the target device is a mobile terminal such as a mobile phone. Correspondingly, each mobile terminal may return the refined face object to the first mobile terminal, which then forwards it to the server; alternatively, each mobile terminal may return the refined face object directly to the server.
In this step, the processed face objects sent back by the target devices are preferably received by the server.
Step S5: update the corresponding face object in the target image according to the received processed face object.
After determining that all target devices have returned their processed face objects, that is, after all subtasks t1, t2, t3, … are completed, the server replaces the face object corresponding to t1 in the target image with the new image returned for t1, replaces the face object corresponding to t2 with the new image returned for t2, and so on, finally synthesizing one image that contains every user's retouching.
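Continuing the sketch above, step S5 can be illustrated as pasting each returned crop back over its recorded bounding box; this Pillow-based merge routine and its names are again assumptions, not the patent's prescribed implementation.

    # Minimal sketch of step S5: paste each processed crop back into the target
    # image at the bounding box recorded for its subtask.
    from PIL import Image

    def merge_processed_faces(target_image_path, subtasks, processed, out_path):
        # processed: {subtask_id: retouched PIL.Image returned by a target device}
        base = Image.open(target_image_path)
        for tid, new_face in processed.items():
            left, top, right, bottom = subtasks[tid]["bbox"]
            # Resize defensively in case a device returned a different resolution.
            base.paste(new_face.resize((right - left, bottom - top)), (left, top))
        base.save(out_path)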
In embodiments of the invention, when a device performs face image processing on a group photo of multiple people, after the target image of the group photo is obtained, the face objects in the target image are first identified, yielding at least one face object, and each face object is then sent to its corresponding target device, so that each person in the group photo can process his or her own face image on his or her own mobile terminal according to personal preference and return the processed face object. Further, in the image processing procedure of this embodiment, the corresponding initial face objects in the target image are updated based on the face objects returned by the devices, completing the face image processing for the multiple people in the group photo. As the above process shows, the processing of each face object in the group photo meets that person's expectation, thereby satisfying the requirements of multiple people.
Preferably, the image processing method in this embodiment is also applicable to a mobile terminal such as a mobile phone. Illustratively, the user selects a target image on the mobile terminal, so that the mobile terminal directly acquires the target image, creates a new retouching task T, and divides the target image into several retouching subtasks t1, t2, t3, …. It then sends the face object of each retouching subtask to the corresponding target device, receives the refined face objects returned by the target devices, updates and replaces them in the target image, and finally synthesizes an image containing every user's retouching.
In this way, every person in the group photo can independently refine his or her own face according to personal preference and aesthetics, so that the group photo meets the expectations of multiple people.
In addition, for the initiator of the retouching, the portrait-retouching task for the photo is decomposed and distributed to each corresponding person in the photo, saving retouching time.
On the basis of the embodiment shown in fig. 1, fig. 2 shows a flowchart of an image processing method according to another embodiment of the present invention, and after step S2, the method further includes:
step S6: a first input is received.
Step S7: in response to the first input, attribute information of the face object is set. The attribute information at least comprises target equipment information corresponding to the face object.
For example, a first user may mark attribute information on each identified human face object through a first input made on a first mobile terminal.
For another example, the server may automatically record historical operation data of attribute information marked on the face object by the user, and automatically mark attribute information on each identified face object according to the historical operation data.
For another example, the server may automatically associate the face objects and the attribute information associated therewith in local data or network data, and automatically mark the attribute information on each identified face object.
In reference, when the server detects that the similarity between the face object and a certain user avatar of a certain application platform is high, the server may automatically label the user information of the application platform on the face object.
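A minimal sketch of such avatar matching, assuming the same face_recognition library as above and a hypothetical avatar_encodings table of per-user avatar embeddings; the 0.6 threshold is an illustrative default, not a value from the patent.

    # Illustrative avatar matching: label a face crop with the platform user
    # whose avatar embedding is closest, or None if no one is close enough.
    import numpy as np
    import face_recognition

    def match_to_platform_user(face_crop, avatar_encodings, threshold=0.6):
        # face_crop: PIL.Image of one face object
        # avatar_encodings: {user_id: 128-d encoding of that user's avatar}
        encodings = face_recognition.face_encodings(np.array(face_crop))
        if not encodings:
            return None
        distances = {uid: np.linalg.norm(encodings[0] - vec)
                     for uid, vec in avatar_encodings.items()}
        uid, dist = min(distances.items(), key=lambda kv: kv[1])
        return uid if dist < threshold else None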
When the server marks attribute information automatically, the first user can still modify or add attribute information on each recognized face object through a first input on the first mobile terminal.
Preferably, the attribute information at least includes target device information corresponding to the face object.
The target device information includes, for example, an identification code of the target device, a user name and user ID on a social application platform, or a user name and user ID on a retouching application platform.
For reference, the attribute information marked on a face object can be displayed or hidden.
For example, the user name may be displayed while the user ID is hidden.
Preferably, after the first user labels, modifies, or adds attribute information through the first input on the first mobile terminal, the first mobile terminal uploads the first input to the server; the server receives and responds to the first input by labeling, modifying, or adding the corresponding attribute information on the target image, and returns the result to the first mobile terminal for display to the user.
The first input includes touch operations such as tapping, sliding, zooming, and editing performed by the user on the mobile terminal, as well as mid-air gesture operations relative to the mobile terminal.
The first input may comprise a series of operations and is not limited to a single operation.
Correspondingly, step S3 includes:
step S31: and acquiring attribute information of the human face object.
Step S32: and identifying target equipment corresponding to the face object according to the attribute information of the face object.
Step S33: and sending the face object to the target equipment corresponding to the face object.
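A minimal sketch of sub-steps S31 to S33; the attributes table and the send_to_device transport are stand-in assumptions for whatever platform messaging a deployment actually uses.

    # Illustrative dispatch: look up each subtask's attribute information and
    # send the cropped face object to its corresponding target device.
    def dispatch_faces(subtasks, attributes, send_to_device):
        # attributes: {subtask_id: {"user_id": ..., "platform": ...}}, set via the first input
        sent = []
        for tid, task in subtasks.items():
            info = attributes.get(tid)
            if info is None:
                continue  # no target device found: this face object is simply not sent
            send_to_device(info, task["crop"])  # e.g. a platform message to that user ID
            sent.append(tid)
        return sent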
In the first scenario, this embodiment is implemented on a single application platform, preferably a retouching application platform on which each user registers distinct user attribute information such as a user name and a user ID. Correspondingly, the attribute information of a face object can be the user name, user ID, and so on for the retouching application platform, so that the face object can be sent to the corresponding user ID through the retouching application platform and thereby reach the target device where that user ID is logged in.
For example, a target image on the retouching application platform includes the face object of user A. The server can obtain user A's user ID on the retouching application platform from the recognized face object and send the face object to that user ID, so that user A's device receives a new message from the retouching application platform, namely user A's face object.
In the second scenario, this embodiment is implemented across multiple application platforms, including at least a retouching application platform and a social application platform. On the social application platform each user registers a distinct user name, user ID, and so on. Correspondingly, the attribute information of a face object may be the user name, user ID, and so on for the social application platform.
For example, a target image on the retouching application platform includes the face object of user A. The server can obtain user A's user ID on the social application platform from the recognized face object, and the social application platform then sends the face object to that user ID, so that user A's device receives a new message from the social application platform, namely user A's face object.
For reference, when a target image on the retouching application platform includes the face object of user A, the first user can manually send it to user A's user ID on the social application platform, so that user A's device receives the face object as a new message from the social application platform. Further, based on this manual operation, the server can automatically record user A's face object together with user A's user ID on the social application platform; the next time user A's face object is detected, the user ID can be obtained directly and the social application platform can send the face object to it automatically.
Data can be transmitted between different application platforms through the mobile terminal.
Preferably, the first user and the people in the photo are friends on the retouching application platform or the social application platform, which facilitates one-touch sharing.
In this embodiment, the server can associate detected faces with previously marked platform friends; the user can then confirm whether each association is correct and re-edit face objects whose friends are unmarked or incorrectly marked.
This embodiment offers the user a manual mode of operation, avoiding system recognition errors and the like while meeting personalized needs. In addition, face objects and target devices can be matched automatically based on records of the user's manual operations, so that the face objects in a photo can be shared automatically.
It should be noted that after receiving a face object, a target device may submit the processed face object directly on the same retouching application platform; the platform can then automatically locate the target image and the corresponding face object within it based on the submission path, the submitting user ID, and so on, and automatically synthesize the processed target image.
After receiving a face object, a target device may also submit the processed face object through the social application platform after refinement. On one hand, through interaction with the mobile terminal or with the social application platform, the retouching application platform can automatically locate the target image and the corresponding face object based on the submission path, the submitting user ID, and so on, and automatically synthesize the processed target image. On the other hand, the first user can manually download the face objects returned by other devices and upload them to the retouching application platform, which then automatically synthesizes the processed target image.
Preferably, the image processing method in this embodiment is also applicable to a mobile terminal such as a mobile phone. Illustratively, the user performs a first input on the mobile terminal to label attribute information on the identified face objects, so that the mobile terminal can send each face object to its corresponding target device according to the attribute information, such as the target device's IP address or device identification code; after receiving the refined face objects returned by the target devices, it updates and replaces them in the target image and finally synthesizes an image containing every user's retouching.
On the basis of the embodiment shown in fig. 1, fig. 3 shows a flowchart of an image processing method according to another embodiment of the present invention, and after step S2, the method further includes:
step S8: a second input is received.
Step S9: and responding to the second input, and adjusting the region range of the human face object.
Preferably, after the first mobile terminal performs the area range adjustment of the face object through the second input by the first user, the first mobile terminal may upload the second input to the server, so that the server receives the second input, and the server may perform the area range adjustment of the corresponding face object on the target image, and then return to the first mobile terminal to be displayed to the user.
The second input comprises touch operations such as clicking, sliding and zooming on the mobile terminal by the user and space gesture operations based on the mobile terminal.
The second input includes a series of operations, not limited to a certain operation.
In this embodiment, the server may automatically extract a range of the detected face features according to the face detection, so as to display a plurality of face objects. The first user can adjust the area range of each face object according to needs, so that the secondary area range division is performed on the face objects, the personalized requirements of the user are met, and the phenomenon that the area range automatically divided by the server is wrong is avoided.
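A minimal sketch of such a region adjustment, reusing the subtasks structure from the earlier sketches; the function name and the (left, top, right, bottom) box convention are assumptions.

    # Illustrative step S9: overwrite a subtask's bounding box with the one the
    # user adjusted, and re-crop the face object from the full target image.
    from PIL import Image

    def adjust_region(subtasks, image, subtask_id, new_bbox):
        # image: RGB numpy array of the target image
        left, top, right, bottom = new_bbox
        subtasks[subtask_id]["bbox"] = new_bbox
        subtasks[subtask_id]["crop"] = Image.fromarray(image[top:bottom, left:right])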
On the basis of the embodiment shown in fig. 1, fig. 4 shows a flowchart of an image processing method according to another embodiment of the present invention, and step S5 includes:
step S51: and splicing the processed face object on the corresponding face object in the target image.
From the foregoing, it is an object of the embodiments of the present invention to provide a distributed post-portrait modification scheme based on face recognition. According to the scheme, by means of face recognition and a distributed idea, face objects in a group photo are marked according to the face recognition and are shared to corresponding friends, each friend can independently refine the face object of the friend, the repaired picture can be submitted, after all people submit the pictures, a server can re-splice each repaired picture, and then the pictures are output.
The distributed idea is a computer science, and the idea is that a very huge work is divided into a plurality of small subtasks, then the small parts are divided into different nodes for processing, and finally the subtasks are integrated to obtain a final result. Typical applications are such as grid computing, distributed version control systems (git), etc.
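As a small illustration of this split/process/merge idea applied to the retouching task T (the class and method names here are illustrative only, not from the patent):

    # Illustrative bookkeeping for task T: merging (step S5/S51) starts only
    # once every subtask t1, t2, t3, ... has returned its processed crop.
    from dataclasses import dataclass, field

    @dataclass
    class RetouchTask:
        task_id: str
        pending: set = field(default_factory=set)    # outstanding subtask ids
        results: dict = field(default_factory=dict)  # subtask id -> processed crop

        def mark_done(self, subtask_id, processed_crop):
            self.pending.discard(subtask_id)
            self.results[subtask_id] = processed_crop

        def all_done(self):
            return not self.pending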
In the embodiment shown in fig. 4, at the final node, a form of stitching is provided to update the corresponding regions of the target image and complete the final composite image.
For example, after the server determines that all target devices have returned their processed face objects, that is, after all subtasks t1, t2, t3, … are completed, it stitches the new image returned for t1 onto the face object corresponding to t1 in the target image, stitches the new image returned for t2 onto the face object corresponding to t2, and so on, finally synthesizing one image that contains every user's retouching.
On the basis of the embodiment shown in fig. 1, fig. 5 shows a flowchart of an image processing method according to another embodiment of the present invention; after step S5, the method further includes:
Step S10: output the processed target image.
Step S11: send the processed target image to the target devices.
After all friends have resubmitted, the server rejoins each friend's uploaded image to its previous corresponding region and automatically shares the synthesized new image with every friend.
It should be noted that the embodiments of the present invention are preferably applied to a server, which implements the whole image processing procedure through interaction with mobile terminals.
The embodiments of the present invention can also be applied to a mobile terminal, which implements the whole image processing procedure through interaction with multiple other mobile terminals.
Therefore, the devices to which the embodiments of the present invention apply include, but are not limited to, servers and mobile terminals.
In conclusion, the invention divides the portrait-refinement work for a group photo into individual sub-retouching tasks and merges them after they are completed, so that the result is shared by every member. This is equivalent to every member refining the group photo at the same time, greatly improving efficiency; moreover, each member finishes his or her own part according to personal aesthetics and preference, ensuring that the finished group portrait meets each member's expectation. The invention thus naturally solves the problem of group-photo portrait retouching: it removes the huge retouching workload from the photographing user and eliminates group members' dissatisfaction with post-shoot retouching results, promoting harmony among friends. By means of face recognition and the distributed idea, the embodiments of the invention therefore provide an effective guarantee of efficiency and autonomy for group-photo refinement scenarios.
FIG. 6 shows a block diagram of an apparatus of another embodiment of the invention, comprising:
an image acquisition module 10, configured to acquire a target image;
a face recognition module 20 for recognizing at least one face object in the target image;
a face sending module 30, configured to send the face object to a target device corresponding to the face object;
the face receiving module 40 is configured to receive a face object processed by the target device;
and a face updating module 50, configured to update a corresponding face object in the target image according to the received processed face object.
In embodiments of the invention, when a device performs face image processing on a group photo of multiple people, after the target image of the group photo is obtained, the face objects in the target image are first identified, yielding at least one face object, and each face object is then sent to its corresponding target device, so that each person in the group photo can process his or her own face image on his or her own mobile terminal according to personal preference and return the processed face object. Further, in the image processing procedure of this embodiment, the corresponding initial face objects in the target image are updated based on the face objects returned by the devices, completing the face image processing for the multiple people in the group photo. As the above process shows, the processing of each face object in the group photo meets that person's expectation, thereby satisfying the requirements of multiple people.
Preferably, the apparatus further comprises:
the first input receiving module is used for receiving a first input;
the first input response module is used for responding to the first input and setting the attribute information of the face object;
the face sending module 30 includes:
the attribute acquisition unit is used for acquiring attribute information of the face object;
the device identification unit is used for identifying target devices corresponding to the face objects according to the attribute information of the face objects;
the device sending unit is used for sending the face object to the target device corresponding to the face object;
the attribute information at least comprises target equipment information corresponding to the face object.
Preferably, the apparatus further comprises:
the second input receiving module is used for receiving a second input;
and the second input response module is used for responding to the second input and adjusting the area range of the human face object.
Preferably, the face update module 50 includes:
the splicing unit is used for splicing the processed face object onto the corresponding face object in the target image.
Preferably, the apparatus further comprises:
the image output module is used for outputting the processed target image;
and the image sharing module is used for sending the processed target image to the target equipment.
The device provided by the embodiment of the present invention can implement each process implemented by the device in the method embodiments of fig. 1 to fig. 5, and is not described herein again to avoid repetition.
Fig. 7 is a schematic diagram of a hardware structure of an apparatus for implementing various embodiments of the present invention, where the apparatus 100 includes but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the configuration of the device shown in fig. 7 does not constitute a limitation of the device, and that the device may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted device, a wearable device, a pedometer, and the like.
The processor 110 is configured to: acquire a target image; identify at least one face object in the target image; send the face object to a target device corresponding to the face object; receive the face object processed by the target device; and update the corresponding face object in the target image according to the received processed face object.
In embodiments of the invention, when a device performs face image processing on a group photo of multiple people, after the target image of the group photo is obtained, the face objects in the target image are first identified, yielding at least one face object, and each face object is then sent to its corresponding target device, so that each person in the group photo can process his or her own face image on his or her own mobile terminal according to personal preference and return the processed face object. Further, in the image processing procedure of this embodiment, the corresponding initial face objects in the target image are updated based on the face objects returned by the devices, completing the face image processing for the multiple people in the group photo. As the above process shows, the processing of each face object in the group photo meets that person's expectation, thereby satisfying the requirements of multiple people.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The device provides wireless broadband internet access to the user through the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the apparatus 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In a phone-call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101.
The device 100 also includes at least one sensor 105, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the device 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the device attitude (such as horizontal and vertical screen switching, related games, magnetometer attitude calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the apparatus. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations performed by the user on or near it (e.g., operations using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys), a trackball, a mouse, and a joystick; these are not described in detail here.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 7, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the apparatus 100 or may be used to transmit data between the apparatus 100 and an external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the apparatus, connects various parts of the entire apparatus using various interfaces and lines, performs various functions of the apparatus and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the apparatus. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The device 100 may further include a power supply 111 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
In addition, the device 100 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an apparatus is further provided in an embodiment of the present invention, including a processor 110, a memory 109, and a computer program stored in the memory 109 and capable of running on the processor 110, where the computer program, when executed by the processor 110, implements each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring a target image;
identifying at least two face objects in the target image;
sending the face object to a target device corresponding to the face object;
receiving the face object processed by the target equipment;
updating a corresponding face object in the target image according to the received processed face object;
after the at least two face objects in the target image are identified, the method further includes:
receiving a first input;
setting attribute information of the face object in response to the first input;
the sending the face object to the target device corresponding to the face object includes:
acquiring attribute information of the face object;
identifying target equipment corresponding to the face object according to the attribute information of the face object;
sending the face object to a target device corresponding to the face object;
wherein the attribute information at least includes target device information corresponding to the face object.
2. The method of claim 1, wherein after identifying at least two human face objects in the target image, further comprising:
receiving a second input;
and responding to the second input, and adjusting the area range of the human face object.
3. The method according to claim 1, wherein the updating the corresponding face object in the target image according to the received processed face object comprises:
and splicing the processed face object on the corresponding face object in the target image.
4. The method according to claim 1, further comprising, after updating the corresponding face object in the target image according to the received processed face object, the steps of:
outputting the processed target image;
and sending the processed target image to the target equipment.
5. An image processing apparatus characterized by comprising:
the image acquisition module is used for acquiring a target image;
the face recognition module is used for recognizing at least two face objects in the target image;
the face sending module is used for sending the face object to target equipment corresponding to the face object;
the face receiving module is used for receiving the face object processed by the target equipment;
the face updating module is used for updating a corresponding face object in the target image according to the received processed face object;
wherein the image processing apparatus further comprises:
the first input receiving module is used for receiving a first input;
the first input response module is used for responding to the first input and setting the attribute information of the face object;
the face sending module comprises:
an attribute obtaining unit for obtaining attribute information of the face object;
the device identification unit is used for identifying target devices corresponding to the face objects according to the attribute information of the face objects;
the device sending unit is used for sending the face object to a target device corresponding to the face object;
wherein the attribute information at least includes target device information corresponding to the face object.
6. The image processing apparatus according to claim 5, characterized by further comprising:
the second input receiving module is used for receiving a second input;
and the second input response module is used for responding to the second input and adjusting the area range of the human face object.
7. The image processing apparatus according to claim 5, wherein the face update module includes:
and the splicing unit is used for splicing the processed face object on the corresponding face object in the target image.
8. The image processing apparatus according to claim 5, characterized by further comprising:
the image output module is used for outputting the processed target image;
and the image sharing module is used for sending the processed target image to the target equipment.
9. An image processing apparatus, characterized by comprising a processor, a memory, a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 4.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 4.
Application CN201910365351.5A, priority date 2019-04-30, filing date 2019-04-30: Image processing method and device. Granted as CN110222567B (en). Status: Active.

Priority Applications (1)

CN201910365351.5A, priority date 2019-04-30, filing date 2019-04-30: Image processing method and device

Applications Claiming Priority (1)

CN201910365351.5A, priority date 2019-04-30, filing date 2019-04-30: Image processing method and device

Publications (2)

CN110222567A (en): published 2019-09-10
CN110222567B (en): published 2021-01-08

Family

ID=67820223

Family Applications (1)

CN201910365351.5A (Active), priority date 2019-04-30, filing date 2019-04-30: Image processing method and device, granted as CN110222567B (en)

Country Status (1)

CN: CN110222567B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292224B (en) * 2020-02-18 2024-01-16 维沃移动通信有限公司 Image processing method and electronic equipment
CN111696051A (en) * 2020-05-14 2020-09-22 维沃移动通信有限公司 Portrait restoration method and electronic equipment
CN112036310A (en) * 2020-08-31 2020-12-04 北京字节跳动网络技术有限公司 Picture processing method, device, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413270A (en) * 2013-08-15 2013-11-27 北京小米科技有限责任公司 Method and device for image processing and terminal device
CN109671034A (en) * 2018-12-26 2019-04-23 维沃移动通信有限公司 A kind of image processing method and terminal device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5822125B2 (en) * 2011-11-09 2015-11-24 日本電気株式会社 Service cooperation apparatus, service cooperation method, and service cooperation program
CN106875460A (en) * 2016-12-27 2017-06-20 深圳市金立通信设备有限公司 A kind of picture countenance synthesis method and terminal
CN107123081A (en) * 2017-04-01 2017-09-01 北京小米移动软件有限公司 image processing method, device and terminal
CN107169042B (en) * 2017-04-24 2021-01-26 北京小米移动软件有限公司 Method and device for sharing pictures and computer readable storage medium
CN107274354A (en) * 2017-05-22 2017-10-20 奇酷互联网络科技(深圳)有限公司 image processing method, device and mobile terminal
CN107274355A (en) * 2017-05-22 2017-10-20 奇酷互联网络科技(深圳)有限公司 image processing method, device and mobile terminal
CN107944448A (en) * 2017-09-29 2018-04-20 百度在线网络技术(北京)有限公司 A kind of image asynchronous edit methods and device
CN107833090A (en) * 2017-10-18 2018-03-23 宁波江丰智能科技有限公司 One kind repaiies figure platform
CN107766831B (en) * 2017-10-31 2020-06-30 Oppo广东移动通信有限公司 Image processing method, image processing device, mobile terminal and computer-readable storage medium
CN107967339B (en) * 2017-12-06 2021-01-26 Oppo广东移动通信有限公司 Image processing method, image processing device, computer-readable storage medium and computer equipment
CN109461117B (en) * 2018-10-30 2023-11-24 维沃移动通信有限公司 Image processing method and mobile terminal

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413270A (en) * 2013-08-15 2013-11-27 北京小米科技有限责任公司 Method and device for image processing and terminal device
CN109671034A (en) * 2018-12-26 2019-04-23 维沃移动通信有限公司 A kind of image processing method and terminal device

Also Published As

CN110222567A (en): published 2019-09-10

Similar Documents

Publication Publication Date Title
US11315336B2 (en) Method and device for editing virtual scene, and non-transitory computer-readable storage medium
CN107995429B (en) Shooting method and mobile terminal
CN111541845B (en) Image processing method and device and electronic equipment
CN113132618B (en) Auxiliary photographing method and device, terminal equipment and storage medium
CN111355889B (en) Shooting method, shooting device, electronic equipment and storage medium
CN108184050B (en) Photographing method and mobile terminal
CN110059652B (en) Face image processing method, device and storage medium
CN110222567B (en) Image processing method and device
CN108174103B (en) Shooting prompting method and mobile terminal
CN108848313B (en) Multi-person photographing method, terminal and storage medium
CN111416940A (en) Shooting parameter processing method and electronic equipment
CN108628985B (en) Photo album processing method and mobile terminal
CN109495616B (en) Photographing method and terminal equipment
CN112052897B (en) Multimedia data shooting method, device, terminal, server and storage medium
CN109819168B (en) Camera starting method and mobile terminal
CN109448069B (en) Template generation method and mobile terminal
US11863901B2 (en) Photographing method and terminal
CN109684277B (en) Image display method and terminal
CN109544445B (en) Image processing method and device and mobile terminal
CN108984143B (en) Display control method and terminal equipment
CN111461985A (en) Picture processing method and electronic equipment
CN111079030A (en) Group searching method and electronic device
CN107959755B (en) Photographing method, mobile terminal and computer readable storage medium
EP3905037B1 (en) Session creation method and terminal device
CN111353946B (en) Image restoration method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant