CN111402157B - Image processing method and electronic equipment - Google Patents


Info

Publication number: CN111402157B (application CN202010170787.1A)
Authority: CN (China)
Prior art keywords: image, user, person, preset, processing
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN111402157A
Inventor: 张琦
Current and original assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd; priority to CN202010170787.1A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/77
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10016 Video; Image sequence
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

The invention provides an image processing method and an electronic device. The image processing method includes: acquiring a first person image of a first user and a second person image of a second user in a preset interface; and processing the first person image and the second person image according to a preset processing rule, where the first user is a target user, the second user is a target user or a non-target user, and the preset processing rule specifies that a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image differ by a preset value. With this scheme, in a scene where multiple people appear on the same interface, different users can be beautified differently according to the preset processing rule, which meets users' personalized requirements, improves the intelligence of image processing, and well solves the problem that image processing schemes in the prior art are not intelligent enough.

Description

Image processing method and electronic equipment
Technical Field
The present invention relates to the field of electronic devices, and in particular, to an image processing method and an electronic device.
Background
Currently, taking selfies is one of the most commonly used functions when a user operates a mobile terminal such as a mobile phone. When a user takes a selfie, it is very common for multiple people to appear simultaneously in the field of view of the terminal's camera. A person appearing in the camera's field of view may be a passerby, or another user taking the photograph together with the user.
In the conventional selfie flow, the terminal applies processing such as beautification to everyone in the camera's field of view. However, many of the people in the field of view other than the user may not need beautification at all. Moreover, traditional selfie beautification is undifferentiated: it applies the same beautification effect to everyone, and therefore cannot meet users' personalized requirements.
From the above, the existing image processing schemes are not intelligent enough to meet the personalized needs of users.
Disclosure of Invention
The invention aims to provide an image processing method and an electronic device, so as to solve the problem that image processing schemes in the prior art are not intelligent enough.
To solve the above technical problem, the invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, applied to an electronic device, where the image processing method includes:
acquiring a first person image of a first user and a second person image of a second user in a preset interface;
processing the first person image and the second person image according to a preset processing rule;
the first user is a target user, and the second user is a target user or a non-target user;
the preset processing rule specifies that a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image differ by a preset value.
In a second aspect, an embodiment of the present invention further provides an electronic device, including:
the first acquisition module is used for acquiring a first person image of a first user and a second person image of a second user in a preset interface;
the first processing module is used for processing the first person image and the second person image according to preset processing rules;
the first user is a target user, and the second user is a target user or a non-target user;
the preset processing rule specifies that a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image differ by a preset value.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program implements the steps of the image processing method described above when executed by the processor.
In a fourth aspect, embodiments of the present invention further provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image processing method described above.
In the embodiment of the invention, a first person image of a first user and a second person image of a second user in a preset interface are acquired, and the first person image and the second person image are processed according to a preset processing rule, where the first user is a target user, the second user is a target user or a non-target user, and the preset processing rule specifies that a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image differ by a preset value. In a scene where multiple people appear on the same interface, different users can thus be beautified differently according to the preset processing rule, which meets users' personalized requirements, improves the intelligence of image processing, and well solves the problem that image processing schemes in the prior art are not intelligent enough.
Drawings
FIG. 1 is a flow chart of an image processing method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of an application flow of an image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an electronic device according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
To address the problem that image processing schemes in the prior art are not intelligent enough, the invention provides an image processing method applied to an electronic device. As shown in fig. 1, the image processing method includes the following steps:
step 11: and acquiring a first person image of the first user and a second person image of the second user in a preset interface.
The preset interface may be any interface where a person image appears, such as a photographing interface, a conference video interface, or an image editing interface, and the person image may be an image including a face of a user, which is not limited herein.
Step 12: processing the first person image and the second person image according to a preset processing rule, where the first user is a target user, the second user is a target user or a non-target user, and the preset processing rule specifies that a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image differ by a preset value.
The target user may be a user whose recognized person image meets a preset characteristic (such as long hair), or may be a preset specific user (for example, one designated in advance through a person image); this is not limited herein.
The preset processing rule may be a selected target processing rule, and there may be a correspondence between preset processing rules and user identities; the preset processing rule may be selected or adjusted before step 12 is performed.
The preset value may be positive, negative, or zero; this is not limited herein.
The image processing method provided by the embodiment of the invention thus acquires a first person image of a first user and a second person image of a second user in a preset interface and processes the two images according to a preset processing rule, where the first user is a target user, the second user is a target user or a non-target user, and the preset processing rule specifies that the first image score corresponding to the processed first person image and the second image score corresponding to the processed second person image differ by a preset value. In a scene where multiple people appear on the same interface, different users can therefore be beautified differently according to the preset processing rule, which meets users' personalized requirements, improves the intelligence of image processing, and well solves the problem that image processing schemes in the prior art are not intelligent enough.
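The score-difference rule can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 0-100 score scale, the fixed +10 base enhancement, and the function name are assumptions chosen so the sketch reproduces the worked numbers given later in the description (70 becomes 90 and 77 becomes 87 under a preset value of 3).

```python
def apply_score_rule(first_score, second_score, preset_value, lift=10.0):
    """Process both person images so that the first (target user's) image
    score ends exactly preset_value above the second image score.

    first_score / second_score are pre-processing scores on an assumed
    0-100 scale; 'lift' is an illustrative base enhancement applied to
    the second image before the score gap is enforced on top of it.
    """
    processed_second = min(100.0, second_score + lift)
    processed_first = min(100.0, processed_second + preset_value)
    return processed_first, processed_second

# Numbers from the description: the user scores 70, the best non-user
# scores 77, and the preset value is 3.
first, second = apply_score_rule(70.0, 77.0, 3.0)
```

The preset value here is positive; as noted above it could equally be zero or negative, in which case the processed first-image score would match or trail the second.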
When the second user is a non-target user, obtaining the second person image of the second user in the preset interface includes: acquiring the person images of all non-target users in the preset interface; scoring each acquired person image to obtain the image score corresponding to each person image; obtaining a target image score from the obtained image scores; and determining the person image corresponding to the target image score as the second person image.
The target image score may be the highest score, the lowest score, or the middle score, without limitation.
This facilitates accurate and quick determination of the second person image.
Specifically, obtaining the target image score from the obtained image scores includes: taking the image score with the largest value among the obtained image scores as the target image score.
In this way, the user's image processing requirement relative to the highest-scoring person image can be met.
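Selecting the second person image as just described can be sketched in a few lines; the dict-based "image" records and the scoring callback are illustrative assumptions, not the patent's data structures.

```python
def select_second_person(person_images, score_fn):
    """Score each non-target person image and return (score, image) for
    the highest-scoring one, which becomes the second person image."""
    scored = [(score_fn(img), img) for img in person_images]
    return max(scored, key=lambda pair: pair[0])

# Toy person images: dicts carrying a precomputed face score.
non_targets = [
    {"id": "passerby_a", "score": 71},
    {"id": "passerby_b", "score": 77},
    {"id": "passerby_c", "score": 65},
]
best_score, second_image = select_second_person(non_targets,
                                                lambda p: p["score"])
```

Taking the maximum matches the "largest value" variant above; the lowest or a middle score could be selected with the same structure.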
In the embodiment of the present invention, processing the first person image and the second person image according to the preset processing rule includes: acquiring first input parameter information; and processing the first person image and the second person image according to the first input parameter information and the preset processing rule, where the first input parameter information is image processing parameter information for the first person image, or image processing parameter information for the second person image.
That is, the images are processed jointly according to the input information and the automatic processing rule, so that when the user adjusts one image, the remaining images are adjusted correspondingly and automatically, which speeds up processing.
Further, after the first person image and the second person image are processed according to the preset processing rule, the method further includes: acquiring second input parameter information; and re-processing the first person image and the second person image according to the second input parameter information and the preset processing rule, where the second input parameter information is image processing parameter information for the first person image, or image processing parameter information for the second person image.
That is, after the images are automatically processed according to the processing rule, they can be adjusted again according to the input information, which better meets user requirements; moreover, when the user adjusts one image, the remaining images are adjusted correspondingly and automatically, which speeds up processing.
In the embodiment of the invention, the preset processing rule is the selected target processing rule, or the preset processing rule is the processing rule corresponding to the target user with the highest priority level; and the target user with the highest priority level is the target user with the highest priority level in all users in the preset interface.
Thus, the personalized requirements of the user can be better met.
The image processing method provided by the embodiment of the invention is further described below, taking a terminal as an example of the electronic device, a multi-person selfie shooting interface as an example of the preset interface, and beautifying users (specifically, processing user portraits) as an example of the image processing.
In view of the above technical problems, an embodiment of the present invention provides an image processing method in which, through a user's preset beautification policy (i.e. the preset processing rule above), everyone in the camera's field of view (i.e. the shooting interface) is beautified intelligently in a multi-person selfie scene. Specifically, the terminal first scores the face value of each portrait in the camera view and records each person's face score; it then beautifies each person intelligently according to that person's score level and the preset beautification policy. Moreover, when the user wants to adjust the result manually, only one person needs to be operated on; the others are beautified automatically according to the preset beautification policy.
More specifically, an embodiment of the invention provides a concrete implementation of the scheme, in which differentiated beautification is performed according to a user-defined beautification policy in a multi-person selfie scene. As shown in fig. 2, the flow may include:
Step 21: the user first defines the beautification policy;
for example, the post-beautification face score of user 1 (i.e. a target user) must be at least 3 points higher than the face score of any non-user (i.e. non-target user), and the post-beautification face score of user 2 (also a target user) must be at least 5 points higher than that of any non-user.
Step 22: when the front camera is opened for a selfie, the terminal first checks whether there is only one face in the camera's field of view; if yes, go to step 23, otherwise go to step 24;
specifically, when the front camera is opened for a selfie, the terminal first detects the number of faces in the camera's field of view: when there is exactly one face, the flow goes to step 23, and when there are multiple (at least two) faces, it goes to step 24.
Step 23: beautify according to the normal flow;
that is, if there is only one face in the camera's field of view, normal (i.e. undifferentiated, conventional) beautification and image processing are performed.
Step 24: judge whether a user (i.e. a target user) is present in the camera's field of view; if not, go to step 25, and if so, go to step 26;
that is, if there are multiple faces in the camera's field of view, the terminal first recognizes them: if no user is present, it jumps to step 25; if a user is present, it goes to step 26.
Step 25: beautify according to the normal flow;
that is, if no user (i.e. target user) is in the camera's field of view, normal beautification is performed.
Step 26: the terminal scores the face values of everyone in the camera view and beautifies them differentially according to the selected beautification policy;
two cases arise here: only one user is present (case one), or multiple (at least two) users are present (case two);
for case one, the terminal scores the face values of everyone in the camera's field of view and beautifies the user and the other people differentially;
specifically, if there is exactly one user in the camera view, the terminal scores everyone's face value and performs differentiated beautification so that the user's post-beautification score satisfies the user's preset policy (a specific case of the selected beautification policy). For example, if the terminal recognizes that user 1 is in the camera view, it beautifies user 1 differentially so that user 1's post-beautification face score is at least 3 points higher than the highest pre-beautification face score among the non-users; that is, assuming user 1's pre-beautification face score is 70 and non-user 1's (the highest-scoring non-user's) is 77, then after beautification user 1's score is 90 and non-user 1's is 87.
For case two, the terminal scores the face values of everyone in the camera view and makes the beautification of each user in the view satisfy that user's corresponding beautification policy (i.e. the selected beautification policy);
specifically, if multiple users are in the camera view, the terminal scores everyone and makes each user's portrait beautification satisfy that user's preset beautification policy. For example, if the terminal recognizes that user 1 and user 2 are both in the view, it beautifies them differently, so that user 1's post-beautification face score is at least 3 points higher than the highest pre-beautification face score among the non-users, and user 2's is at least 5 points higher than that same score; that is, assuming that before beautification user 1 scores 70, user 2 scores 75, and non-user 1 (the highest-scoring non-user) scores 77, then after beautification user 1 scores 90, user 2 scores 92, and non-user 1 scores 87.
Step 27: after step 26, when one person in the camera's field of view is manually beautified, the terminal automatically applies corresponding beautification to the others in the field of view, so that everyone's post-beautification face score still satisfies the preset policy (i.e. the selected beautification policy);
two situations arise depending on who that person is: the person is a user (case one), or a non-user (case two);
for case one, when a user is manually beautified, the terminal automatically beautifies the other people who were not manually adjusted, so that the user's post-beautification face score satisfies the preset policy (i.e. the selected beautification policy);
specifically, when a user is manually beautified, the terminal automatically applies differentiated beautification to the people who were not manually adjusted so that the user's post-beautification face score still satisfies the preset policy. For example, when user 1 is manually beautified, the terminal automatically adjusts the beautification effect of the others (who may include other users and/or non-users) so that user 1's post-beautification face score remains higher than every non-user's face score.
For case two, when a non-user is manually beautified, the terminal automatically applies differentiated beautification to the other people who were not manually adjusted;
specifically, when a non-user is manually beautified, the terminal automatically beautifies the other, unadjusted people differentially (i.e. it applies differentiated processing to the person images that were not manually adjusted). For example, when non-user 1 is manually beautified, the terminal automatically adjusts the beautification effect of the other non-users and, at the same time, beautifies user 1 differentially, so that user 1's post-beautification face score always remains at least 3 points higher than the highest face score among the non-users.
Here, the face score may specifically be the image score corresponding to the person image of the corresponding user.
From the above, the scheme provided by the embodiment of the invention mainly involves confirming whether a target user is present in the camera's field of view: if so, differentiated beautification (i.e. the image processing method above) is performed, and if not, normal beautification (i.e. conventional image processing) is performed.
The differentiated beautification can be carried out automatically according to the beautification policy; furthermore, the user may also input parameter information (i.e. user input, corresponding to the manual beautification above), after which differentiated beautification is performed again according to the beautification policy.
In summary, the scheme provided by the embodiment of the invention achieves differentiated beautification of different people according to the preset policy in a multi-person selfie scene.
The embodiment of the invention also provides an electronic device, as shown in fig. 3, which comprises:
a first acquiring module 31, configured to acquire a first person image of a first user and a second person image of a second user in a preset interface;
a first processing module 32, configured to process the first person image and the second person image according to a preset processing rule;
the first user is a target user, and the second user is a target user or a non-target user;
the preset processing rule specifies that a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image differ by a preset value.
The electronic device provided by the embodiment of the invention acquires a first person image of a first user and a second person image of a second user in a preset interface, and processes the two images according to a preset processing rule, where the first user is a target user, the second user is a target user or a non-target user, and the preset processing rule specifies that the first image score corresponding to the processed first person image and the second image score corresponding to the processed second person image differ by a preset value. In a scene where multiple people appear on the same interface, different users can thus be beautified differently according to the preset processing rule, which meets users' personalized requirements, improves the intelligence of image processing, and well solves the problem that image processing schemes in the prior art are not intelligent enough.
When the second user is a non-target user, the first obtaining module includes: a first acquisition submodule, configured to acquire the person images of all non-target users in the preset interface; a first processing submodule, configured to score each acquired person image to obtain the image score corresponding to each person image; a second acquisition submodule, configured to obtain a target image score from the obtained image scores; and a first determining submodule, configured to determine the person image corresponding to the target image score as the second person image.
Specifically, the second obtaining sub-module includes: and the first acquisition unit is used for acquiring the image score with the largest value from the obtained image scores as a target image score.
In an embodiment of the present invention, the first processing module includes: the third acquisition sub-module is used for acquiring the first input parameter information; the second processing submodule is used for processing the first person image and the second person image according to the first input parameter information and a preset processing rule; the first input parameter information is image processing parameter information for the first person image, or the first input parameter information is image processing parameter information for the second person image.
Further, the electronic device further includes: the second acquisition module is used for acquiring second input parameter information after the first person image and the second person image are processed according to a preset processing rule; the second processing module is used for reprocessing the first person image and the second person image according to the second input parameter information and the preset processing rule; the second input parameter information is image processing parameter information for the first person image, or the second input parameter information is image processing parameter information for the second person image.
In the embodiment of the invention, the preset processing rule is the selected target processing rule, or the preset processing rule is the processing rule corresponding to the target user with the highest priority level; and the target user with the highest priority level is the target user with the highest priority level in all users in the preset interface.
The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiment of fig. 1 to 2, and in order to avoid repetition, a description is omitted here.
Fig. 4 is a schematic diagram of the hardware structure of an electronic device implementing various embodiments of the present invention. The electronic device 40 includes, but is not limited to: a radio frequency unit 41, a network module 42, an audio output unit 43, an input unit 44, a sensor 45, a display unit 46, a user input unit 47, an interface unit 48, a memory 49, a processor 410, and a power supply 411. Those skilled in the art will appreciate that the structure shown in fig. 4 does not limit the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components. In the embodiment of the invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 410 is configured to: acquire a first person image of a first user and a second person image of a second user in a preset interface; and process the first person image and the second person image according to a preset processing rule, where the first user is a target user, the second user is a target user or a non-target user, and the preset processing rule specifies that a first image score corresponding to the processed first person image and a second image score corresponding to the processed second person image differ by a preset value.
In the embodiment of the invention, a first person image of a first user and a second person image of a second user in a preset interface are thus acquired and processed according to a preset processing rule, where the first user is a target user, the second user is a target user or a non-target user, and the preset processing rule specifies that the two processed image scores differ by a preset value. In a scene where multiple people appear on the same interface, different users can be beautified differently according to the preset processing rule, which meets users' personalized requirements, improves the intelligence of image processing, and well solves the problem that image processing schemes in the prior art are not intelligent enough.
Optionally, when the second user is a non-target user, the processor 410 is specifically configured to: acquire the person images of all non-target users in the preset interface; score each acquired person image to obtain the image score corresponding to each person image; obtain a target image score from the obtained image scores; and determine the person image corresponding to the target image score as the second person image.
Optionally, the processor 410 is specifically configured to obtain, as the target image score, the image score with the largest value from the obtained image scores.
Optionally, the processor 410 is specifically configured to: obtain first input parameter information; and process the first person image and the second person image according to the first input parameter information and the preset processing rule. The first input parameter information is image processing parameter information for the first person image, or image processing parameter information for the second person image.
Optionally, the processor 410 is further configured to: obtain second input parameter information after the first person image and the second person image have been processed according to the preset processing rule; and re-process the first person image and the second person image according to the second input parameter information and the preset processing rule. The second input parameter information is image processing parameter information for the first person image, or image processing parameter information for the second person image.
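The manual-adjustment flow described in the two paragraphs above can be sketched as follows. This is an assumption-laden illustration: `apply_params`, the parameter dictionary, and the score fields are invented for the example; the key idea is that after the user's parameters are applied to one image, the preset score gap is restored by adjusting the other.

```python
# Hypothetical sketch: apply user-supplied adjustment parameters to one of the
# two images, then re-process so the preset score gap still holds.

def apply_params(image, params):
    """Apply user-chosen adjustment parameters (here just a score delta)."""
    return {**image, "score": image["score"] + params.get("score_delta", 0.0)}

def reprocess(first_image, second_image, params, target, preset_diff):
    """Apply `params` to the image named by `target` ("first" or "second"),
    then restore the preset score gap by adjusting the other image."""
    if target == "first":
        first = apply_params(first_image, params)
        second = {**second_image, "score": first["score"] - preset_diff}
    else:
        second = apply_params(second_image, params)
        first = {**first_image, "score": second["score"] + preset_diff}
    return first, second
```

This mirrors the described behavior that a manual adjustment to either image triggers re-processing of both, so the score relationship required by the preset rule is never broken.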
Optionally, the preset processing rule is a target processing rule selected by the user, or the preset processing rule is the processing rule corresponding to the target user with the highest priority level, where the target user with the highest priority level is the highest-priority target user among all users in the preset interface.
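The rule-selection logic above admits a short sketch. The field names (`is_target`, `priority`) and the rule mapping are illustrative assumptions, not the patent's data structures.

```python
# Hypothetical sketch: an explicitly selected rule wins; otherwise the rule
# associated with the highest-priority target user on the interface is used.

def choose_processing_rule(users, rules_by_user, selected_rule=None):
    """Return the selected rule, or the rule of the highest-priority target user."""
    if selected_rule is not None:
        return selected_rule
    target_users = [u for u in users if u["is_target"]]
    top = max(target_users, key=lambda u: u["priority"])
    return rules_by_user[top["name"]]
```

Note that only target users compete on priority; non-target users on the interface do not contribute a rule, consistent with the definition above.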
It should be understood that, in this embodiment of the invention, the radio frequency unit 41 may be used for transmitting and receiving signals during information transmission and reception or during a call. Specifically, it receives downlink data from a base station and forwards it to the processor 410 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 41 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 41 may also communicate with networks and other devices via a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 42, such as helping the user to send and receive e-mail, browse web pages, and access streaming media, etc.
The audio output unit 43 may convert audio data received by the radio frequency unit 41 or the network module 42, or stored in the memory 49, into an audio signal and output it as sound. The audio output unit 43 may also provide audio output related to a specific function performed by the electronic device 40 (e.g., a call signal reception sound or a message reception sound). The audio output unit 43 includes a speaker, a buzzer, a receiver, and the like.
The input unit 44 is configured to receive audio or video signals. The input unit 44 may include a graphics processor (Graphics Processing Unit, GPU) 441 and a microphone 442. The graphics processor 441 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 46, stored in the memory 49 (or another storage medium), or transmitted via the radio frequency unit 41 or the network module 42. The microphone 442 can receive sound and process it into audio data. In a telephone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 41, and then output.
The electronic device 40 further comprises at least one sensor 45, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 461 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 461 and/or the backlight when the electronic device 40 is moved to the ear. As one type of motion sensor, an accelerometer can detect the magnitude of acceleration in all (typically three) directions, and can detect the magnitude and direction of gravity when stationary; it can be used for recognizing the attitude of the electronic device (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The sensor 45 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described here.
The display unit 46 is used to display information input by a user or information provided to the user. The display unit 46 may include a display panel 461, and the display panel 461 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 47 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 47 includes a touch panel 471 and other input devices 472. The touch panel 471, also referred to as a touch screen, may collect touch operations performed by a user on or near it (e.g., operations performed on or near the touch panel 471 with a finger, a stylus, or any other suitable object or accessory). The touch panel 471 may include a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 410, and receives and executes commands sent by the processor 410. The touch panel 471 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 471, the user input unit 47 may include other input devices 472, which may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and a switch key), a trackball, a mouse, and a joystick; these are not described in detail here.
Further, the touch panel 471 may be overlaid on the display panel 461. When the touch panel 471 detects a touch operation on or near it, it transmits the operation to the processor 410 to determine the type of the touch event, and the processor 410 then provides a corresponding visual output on the display panel 461 according to the type of the touch event. Although in fig. 4 the touch panel 471 and the display panel 461 are implemented as two independent components to realize the input and output functions of the electronic device, in some embodiments the touch panel 471 may be integrated with the display panel 461 to realize these functions; this is not limited here.
The interface unit 48 is an interface for connecting an external device to the electronic apparatus 40. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 48 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 40 or may be used to transmit data between the electronic apparatus 40 and an external device.
The memory 49 may be used to store software programs and various data. The memory 49 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required for at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the electronic device (such as audio data or a phonebook). In addition, the memory 49 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 410 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 49, and calling data stored in the memory 49, thereby performing overall monitoring of the electronic device. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The electronic device 40 may also include a power source 411 (e.g., a battery) for powering the various components, and preferably the power source 411 may be logically coupled to the processor 410 via a power management system that performs functions such as managing charge, discharge, and power consumption.
In addition, the electronic device 40 includes some functional modules, which are not shown, and will not be described herein.
Preferably, the embodiment of the present invention further provides an electronic device, including a processor 410, a memory 49, and a computer program stored in the memory 49 and capable of running on the processor 410, where the computer program when executed by the processor 410 implements each process of the above embodiment of the image processing method, and the same technical effects can be achieved, and for avoiding repetition, a detailed description is omitted herein.
The embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements the processes of the above image processing method embodiment and achieves the same technical effects; to avoid repetition, details are not described here again. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
From the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware alone, although in many cases the former is preferred. Based on this understanding, the technical solution of the present invention, or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, or optical disc) and including several instructions that cause an electronic device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, which are merely illustrative rather than restrictive. Inspired by the present invention, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present invention and the scope of the claims, all of which fall within the protection of the present invention.

Claims (8)

1. An image processing method applied to an electronic device, the image processing method comprising:
acquiring a first person image of a first user and a second person image of a second user in a preset interface; the first user is a target user, and the second user is a target user or a non-target user; the image score corresponding to the first person image is obtained by grading and scoring the facial attractiveness of the first person image; the image score corresponding to the second person image is obtained by grading and scoring the facial attractiveness of the second person image;
processing the first person image and the second person image according to a preset processing rule, so that the difference between the image score corresponding to the processed first person image and the image score corresponding to the processed second person image equals a preset value; the preset value is a positive value or a negative value;
wherein the preset processing rule specifies that the first image score corresponding to the processed first person image and the second image score corresponding to the processed second person image differ by the preset value; the preset processing rule is a target processing rule selected by a user, or the preset processing rule is a processing rule corresponding to a target user with the highest priority level;
the processing the first person image and the second person image according to the preset processing rule includes:
acquiring first input parameter information; the first input parameter information is user input information when a user manually adjusts an image;
processing the first person image and the second person image according to the first input parameter information and a preset processing rule;
the first input parameter information is image processing parameter information for the first person image, or the first input parameter information is image processing parameter information for the second person image.
2. The image processing method according to claim 1, wherein, in the case where the second user is a non-target user, acquiring the second person image of the second user in the preset interface comprises:
acquiring a person image of each non-target user in the preset interface;
scoring the acquired person images to obtain an image score corresponding to each person image;
obtaining a target image score from the obtained image scores; and
determining the person image corresponding to the target image score as the second person image.
3. The image processing method according to claim 2, wherein obtaining the target image score from the obtained image scores comprises:
acquiring the image score with the largest value from the obtained image scores as the target image score.
4. The image processing method according to claim 1, wherein the target user with the highest priority is the target user with the highest priority among all users in the preset interface.
5. An electronic device, the electronic device comprising:
the first acquisition module is used for acquiring a first person image of a first user and a second person image of a second user in a preset interface; the first user is a target user, and the second user is a target user or a non-target user; the image score corresponding to the first person image is obtained by grading and scoring the facial attractiveness of the first person image; the image score corresponding to the second person image is obtained by grading and scoring the facial attractiveness of the second person image;
the first processing module is used for processing the first person image and the second person image according to a preset processing rule, so that the difference between the image score corresponding to the processed first person image and the image score corresponding to the processed second person image equals a preset value; the preset value is a positive value or a negative value;
wherein the preset processing rule specifies that the first image score corresponding to the processed first person image and the second image score corresponding to the processed second person image differ by the preset value; the preset processing rule is a target processing rule selected by a user, or the preset processing rule is a processing rule corresponding to a target user with the highest priority level;
wherein the first processing module comprises:
the third acquisition sub-module is used for acquiring the first input parameter information; the first input parameter information is user input information when a user manually adjusts an image;
the second processing submodule is used for processing the first person image and the second person image according to the first input parameter information and a preset processing rule;
the first input parameter information is image processing parameter information for the first person image, or the first input parameter information is image processing parameter information for the second person image.
6. The electronic device of claim 5, wherein the first acquisition module, in the case where the second user is a non-target user, comprises:
the first acquisition submodule is used for acquiring a person image of each non-target user in the preset interface;
the first processing submodule is used for scoring the acquired person images to obtain an image score corresponding to each person image;
the second acquisition sub-module is used for obtaining a target image score from the obtained image scores; and
the first determining submodule is used for determining the person image corresponding to the target image score as the second person image.
7. The electronic device of claim 6, wherein the second acquisition sub-module comprises:
and the first acquisition unit is used for acquiring the image score with the largest value from the obtained image scores as a target image score.
8. The electronic device of claim 5, wherein the highest priority target user is a highest priority target user among all users in the preset interface.
CN202010170787.1A 2020-03-12 2020-03-12 Image processing method and electronic equipment Active CN111402157B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010170787.1A CN111402157B (en) 2020-03-12 2020-03-12 Image processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010170787.1A CN111402157B (en) 2020-03-12 2020-03-12 Image processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN111402157A CN111402157A (en) 2020-07-10
CN111402157B true CN111402157B (en) 2024-04-09

Family

ID=71436193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010170787.1A Active CN111402157B (en) 2020-03-12 2020-03-12 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111402157B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344812A (en) * 2021-05-31 2021-09-03 维沃移动通信(杭州)有限公司 Image processing method and device and electronic equipment
CN113473227B (en) * 2021-08-16 2023-05-26 维沃移动通信(杭州)有限公司 Image processing method, device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512615A (en) * 2015-11-26 2016-04-20 小米科技有限责任公司 Picture processing method and apparatus
CN107274355A (en) * 2017-05-22 2017-10-20 奇酷互联网络科技(深圳)有限公司 image processing method, device and mobile terminal
CN107341762A (en) * 2017-06-16 2017-11-10 广东欧珀移动通信有限公司 Take pictures processing method, device and terminal device
CN107424130A (en) * 2017-07-10 2017-12-01 北京小米移动软件有限公司 Picture U.S. face method and apparatus
CN107463373A (en) * 2017-07-10 2017-12-12 北京小米移动软件有限公司 The management method and device of picture U.S. face method, good friend's face value
CN108764334A (en) * 2018-05-28 2018-11-06 北京达佳互联信息技术有限公司 Facial image face value judgment method, device, computer equipment and storage medium
CN110263737A (en) * 2019-06-25 2019-09-20 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, terminal device and readable storage medium storing program for executing
CN110287809A (en) * 2019-06-03 2019-09-27 Oppo广东移动通信有限公司 Image processing method and Related product

Also Published As

Publication number Publication date
CN111402157A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
CN108491775B (en) Image correction method and mobile terminal
CN109461117B (en) Image processing method and mobile terminal
CN110706179B (en) Image processing method and electronic equipment
CN110969981B (en) Screen display parameter adjusting method and electronic equipment
CN108427873B (en) Biological feature identification method and mobile terminal
CN111401463B (en) Method for outputting detection result, electronic equipment and medium
CN108196815B (en) Method for adjusting call sound and mobile terminal
CN111562896B (en) Screen projection method and electronic equipment
CN107730460B (en) Image processing method and mobile terminal
CN111031253B (en) Shooting method and electronic equipment
CN111177420B (en) Multimedia file display method, electronic equipment and medium
CN110213485B (en) Image processing method and terminal
CN108174110B (en) Photographing method and flexible screen terminal
CN111601063B (en) Video processing method and electronic equipment
CN111402157B (en) Image processing method and electronic equipment
CN110636225B (en) Photographing method and electronic equipment
CN110928407B (en) Information display method and device
CN110602387B (en) Shooting method and electronic equipment
CN109949809B (en) Voice control method and terminal equipment
CN107729100B (en) Interface display control method and mobile terminal
CN111045769B (en) Background picture switching method and electronic equipment
CN110443752B (en) Image processing method and mobile terminal
CN109858447B (en) Information processing method and terminal
CN109453526B (en) Sound processing method, terminal and computer readable storage medium
CN111145083B (en) Image processing method, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant