CN111402271A - Image processing method and electronic equipment

Info

Publication number
CN111402271A
Authority
CN
China
Prior art keywords
image
area
processing
user
editing operation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010192688.3A
Other languages
Chinese (zh)
Inventor
彭业
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010192688.3A priority Critical patent/CN111402271A/en
Publication of CN111402271A publication Critical patent/CN111402271A/en
Priority to PCT/CN2021/080138 priority patent/WO2021185142A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image

Abstract

The invention provides an image processing method and an electronic device. The method includes: receiving a user's editing operation on a first target image; determining an identification area according to the editing operation; and processing the image of the identification area according to a first processing rule. During image processing, the method is not limited to global processing of the first target image: the first target image can be divided into local regions according to the user's editing operation, and the identification area of the first target image can be processed locally, so that the processed image has better quality.

Description

Image processing method and electronic equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an image processing method and an electronic device.
Background
Current image processing methods only support processing of the whole image and cannot process a local region of the image.
To address the inability to process a local region, an image can be automatically recognized and then segmented. However, for some images with complex scenes, some objects in the image cannot be recognized, so the segmentation is inaccurate, the image cannot be processed quickly and accurately, and the processing effect is not ideal.
Disclosure of Invention
The embodiments of the present invention provide an image processing method and an electronic device, which aim to solve the problems in the prior art that an image cannot be accurately processed after segmentation and that the image processing effect is poor.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, where the method includes: receiving the editing operation of a user on a first target image; determining an identification area according to the editing operation; and processing the image of the identification area according to a first processing rule.
In a second aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes: the first receiving module is used for receiving the editing operation of a user on the first target image; the first determining module is used for determining an identification area according to the editing operation; and the first processing module is used for processing the image of the identification area according to a first processing rule.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the image processing method.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the image processing method.
In this embodiment of the present invention, a user's editing operation on a first target image is received; an identification area is determined according to the editing operation; and the image of the identification area is processed according to a first processing rule. During image processing, the method is not limited to global processing of the first target image: the first target image can be divided into local regions according to the user's editing operation, and the identification area of the first target image can be processed locally, so that the processed image has better quality.
Drawings
FIG. 1 is a flowchart illustrating steps of an image processing method according to a first embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of an image processing method according to a second embodiment of the present invention;
fig. 3 is a block diagram of an electronic device according to a third embodiment of the present invention;
fig. 4 is a block diagram of an electronic device according to a fourth embodiment of the present invention;
fig. 5 is a schematic diagram of a hardware structure of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, a flowchart illustrating steps of an image processing method according to a first embodiment of the present invention is shown.
The image processing method provided by the embodiment of the invention comprises the following steps:
step 101: the electronic equipment receives the editing operation of the user on the first target image.
It should be noted that the first target image may be an image in an album of the electronic device, an image acquired from a server, or an image captured by invoking a camera of the electronic device; this is not particularly limited in this embodiment of the present invention.
The user can manually demarcate the identification area and the non-identification area in the first target image, or manually mark each object in the image as an object to be identified or an object not to be identified.
Step 102: the electronic device determines the identification area according to the editing operation.
When the user inputs sliding tracks on the first target image, the closed area enclosed by each sliding track is determined, and the identification area is determined through the user's setting operation on these areas, where the identification area is the area that needs to be replaced.
Step 103: the electronic equipment processes the image of the identification area according to the first processing rule.
The identification area is processed according to the first processing rule, for example, by performing a replacement operation on the identification area, or by performing an enlargement operation, a reduction operation, or the like on it.
By manually editing the image, the first target image is divided into a non-identification area and an identification area, which avoids problems such as inaccurate segmentation caused by automatic recognition and improper processing of local areas, so the image processing effect is better.
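Purely as an illustration and not as part of the patent disclosure, steps 101 to 103 could be sketched in Python as follows. The polygon-shaped sliding track, the mask representation of the identification area, and the choice of an "enlarge" processing rule are assumptions made for this sketch, not requirements of the embodiment.

```python
# Illustrative sketch of steps 101-103 only; the polygon track, the mask
# representation and the "enlarge" rule are assumptions, not the patent's method.
from PIL import Image, ImageDraw

def determine_identification_area(image_size, sliding_track):
    """Step 102: turn the closed area enclosed by the user's sliding track into a mask."""
    mask = Image.new("L", image_size, 0)                    # 0 = outside the area
    ImageDraw.Draw(mask).polygon(sliding_track, fill=255)   # 255 = identification area
    return mask

def enlarge_identification_area(image, mask, factor=1.5):
    """Step 103 with an assumed 'enlarge' rule: scale up the identification area's
    bounding box and paste it back centred on the original region."""
    left, top, right, bottom = mask.getbbox()
    region = image.crop((left, top, right, bottom))
    enlarged = region.resize((int(region.width * factor), int(region.height * factor)))
    cx, cy = (left + right) // 2, (top + bottom) // 2
    result = image.copy()
    result.paste(enlarged, (cx - enlarged.width // 2, cy - enlarged.height // 2))
    return result

# Step 101: the editing operation arrives as a closed sliding track (pixel coordinates).
first_target = Image.open("first_target.jpg")               # hypothetical file name
track = [(120, 80), (400, 90), (380, 300), (110, 280)]
mask = determine_identification_area(first_target.size, track)
enlarge_identification_area(first_target, mask, factor=1.3).save("processed.jpg")
```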
In this embodiment of the present invention, a user's editing operation on a first target image is received; an identification area is determined according to the editing operation; and the image of the identification area is processed according to a first processing rule. During image processing, the method is not limited to global processing of the first target image: the first target image can be divided into local regions according to the user's editing operation, and the identification area of the first target image can be processed locally, so that the processed image has better quality.
Example two
Referring to fig. 2, a flowchart illustrating steps of an image processing method according to a second embodiment of the present invention is shown.
The image processing method provided by the embodiment of the invention comprises the following steps:
step 201: the electronic equipment receives a user segmentation operation on the first target image.
The segmentation operation may be a sliding operation on the first target image and may be determined according to the sliding track input by the user.
The first target image may be an image in an album of the electronic device, an image acquired from a server, or an image captured by invoking a camera of the electronic device; this is not particularly limited in this embodiment of the present invention.
Step 202: the electronic device segments the first target image into a plurality of sub-regions according to a segmentation operation.
There may be a plurality of sliding tracks on the first target image; each closed sliding track forms a sub-region, and the first target image is segmented into a plurality of sub-regions according to the segmentation operation.
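As a sketch only, step 202 might be read as building one sub-region mask per closed sliding track; representing sub-regions in this way is an assumption of the example, not a requirement of the embodiment.

```python
# Illustrative sketch of step 202: one binary mask per closed sliding track.
from PIL import Image, ImageDraw

def segment_into_subregions(image_size, sliding_tracks):
    """Return one mask per closed sliding track drawn by the user."""
    subregions = []
    for track in sliding_tracks:
        mask = Image.new("L", image_size, 0)
        ImageDraw.Draw(mask).polygon(track, fill=255)
        subregions.append(mask)
    return subregions

# Example: the user draws three closed tracks (say, around a person, a cup and the clothes).
tracks = [
    [(50, 40), (200, 40), (200, 300), (50, 300)],
    [(220, 60), (340, 60), (340, 180), (220, 180)],
    [(60, 320), (300, 320), (300, 420), (60, 420)],
]
subregion_masks = segment_into_subregions((480, 640), tracks)
```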
Step 203: the electronic equipment receives the editing operation of the user on each sub-area.
The user can manually select the identification area and the non-identification area in the first target image, or manually mark each object in the image as an object to be identified or an object not to be identified.
Step 204: The electronic device determines, according to the editing operation, whether each sub-area is an identification area or a non-identification area.
For example, the first target image contains areas such as clothes, a person, a water cup, and the background. The person, the clothes, the water cup, and the background are manually segmented; the user selects the clothes and the water cup as identification areas and the background and the person as non-identification areas. The image of the identification areas is processed according to a first processing rule, and the image of the non-identification areas is processed according to a second processing rule, where the second processing rule is used to enhance image parameters in the non-identification areas.
Alternatively, the first target image is a face image, and the eyes, eyebrows, lips, nose, and ears in the face image are manually edited and segmented; the user can select which of these areas are identification areas, and the remaining areas are non-identification areas.
When the user inputs sliding tracks on the first target image or within each sub-area, the closed area enclosed by each sliding track is determined, and the identification area and the non-identification area are determined through the user's setting operation on these areas, where the identification area is the area that needs to be replaced and the non-identification area is the area that does not need to be replaced.
Step 205: the electronic equipment acquires parameter information of the identification area.
Step 206: the electronic device receives a second target image selected by the user.
The parameter information of the identification area may be shape information, sliding-track information, or other information based on the shape or the sliding track.
Step 207: and the electronic equipment replaces the image of the identification area with a second target image according to the parameter information.
A second target image corresponding to the identification area is determined; the second target image may be an image selected by the user or an image preset in the electronic device.
In some first target images or sub-areas, the identification area needs to be covered, so the second target image is substituted for the image of the identification area to highlight that area.
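A minimal sketch of steps 205 to 207, assuming the parameter information is the bounding box and shape mask derived from the sliding track, and that replacement means fitting the user-selected second target image into that region:

```python
# Illustrative sketch of steps 205-207; treating the mask's bounding box as the
# "parameter information" and the file names below are assumptions of this example.
from PIL import Image

def replace_identification_area(first_target, mask, second_target):
    """Fit the second target image into the identification area described by the mask."""
    left, top, right, bottom = mask.getbbox()               # step 205: parameter information
    fitted = second_target.resize((right - left, bottom - top))
    region_mask = mask.crop((left, top, right, bottom))     # restrict the paste to the area's shape
    result = first_target.copy()
    result.paste(fitted, (left, top), region_mask)          # step 207: replacement
    return result

first_target = Image.open("first_target.jpg")
mask = Image.open("identification_mask.png").convert("L")   # e.g. built from the sliding track
second_target = Image.open("second_target.jpg")             # step 206: user-selected image
replace_identification_area(first_target, mask, second_target).save("replaced.jpg")
```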
In steps 204 to 207, the following steps may alternatively be performed: the electronic device acquires a first area image of the identification area; the electronic device determines a second area image selected by the user; the electronic device adjusts the second area image according to the first area image to obtain an adjusted second area image; and the electronic device replaces the first area image with the adjusted second area image.
The first area image and the second area image may be human images.
The identification area and the non-identification area are determined according to the editing operation input by the user, and image recognition technology is used to recognize the identification area and identify a first portrait in it. After the first portrait of the identification area is identified, the user may select a second portrait, or the candidate second portraits may be output and displayed, where each second portrait is an image pre-stored in the electronic device and the output second portraits are displayed in the form of a list or a nine-square grid for the user to select. The second portrait is then adjusted according to the first portrait. For example, if the first portrait is the left half of a portrait, the size of its area, the size of the figure, and the size of the facial features are determined; the second portrait is also taken as a left half, and its figure size and facial-feature size are adjusted according to the first portrait. Finally, the first portrait is replaced with the adjusted second portrait, so that in the generated image the left half of the portrait is the left half of the second portrait and the right half is the right half of the first portrait.
Specifically, to adjust the second portrait based on the first portrait, the facial features of the first portrait and the position information of each facial feature need to be acquired, and the positions of the facial features of the second portrait are adjusted according to the position information of the facial features of the first portrait, so that the image generated after the first portrait is replaced with the second portrait looks more natural. The user can also adjust the second portrait according to personal preference.
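One hedged way to read this adjustment is to scale and position the second portrait so that its facial-feature landmarks cover the same region as the first portrait's. The landmark coordinates and the simple bounding-box alignment below are assumptions of this sketch; a real implementation could use a face-landmark detector and a proper affine fit.

```python
# Illustrative sketch only: align the second portrait to the first using the bounding
# box of facial-feature landmarks. The landmark coordinates are assumed to come from
# some face-landmark detector; the box-based alignment is a deliberate simplification.
from PIL import Image

def landmark_box(landmarks):
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    return min(xs), min(ys), max(xs), max(ys)

def adjust_second_portrait(first_landmarks, second_portrait, second_landmarks):
    """Scale the second portrait so its facial features match the size of the first
    portrait's, and return the offset at which to paste it over the first portrait."""
    fl, ft, fr, fb = landmark_box(first_landmarks)
    sl, st, sr, sb = landmark_box(second_landmarks)
    scale_x = (fr - fl) / (sr - sl)
    scale_y = (fb - ft) / (sb - st)
    resized = second_portrait.resize((int(second_portrait.width * scale_x),
                                      int(second_portrait.height * scale_y)))
    # Offset that places the scaled landmark box of the second portrait onto the first's.
    paste_at = (int(fl - sl * scale_x), int(ft - st * scale_y))
    return resized, paste_at

# Example with hypothetical eye/nose/mouth coordinates in each image.
first_pts = [(210, 180), (290, 182), (250, 230), (250, 280)]
second_pts = [(95, 90), (150, 92), (122, 125), (122, 160)]
adjusted, offset = adjust_second_portrait(first_pts, Image.open("second_portrait.jpg"), second_pts)
```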
Image recognition technology refers to technology that recognizes objects in an image in order to identify targets and objects of various kinds, and it can be based on the main features of the image. Every image has its own features: for example, the letter A has a sharp point, P has a circle, and the center of Y has an acute angle. Studies of eye movement during image recognition show that the line of sight always concentrates on the main features of an image, that is, on the places where the curvature of the contour is greatest or where the direction of the contour changes abruptly, because these places carry the most information.
In a human image recognition system, complex images are often recognized through information processing at different levels. For a familiar figure, a person grasps its main features and recognizes it as a whole, without paying attention to its details. Such an integral unit composed of isolated elements is called a block, and each block is perceived simultaneously.
Step 208: The electronic device performs enhancement processing on the image parameters in the non-identification area.
The image parameters include any one of: image brightness, image contrast, and image color.
Using image recognition technology, each object image in the non-identification area can be recognized and each object image in the identification area can be highlighted; alternatively, the image parameters of the non-identification area can be enhanced directly to highlight the identification area. The image parameters may be image brightness, image contrast, image color, and the like, which is not particularly limited in this embodiment of the present invention.
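A minimal sketch of step 208, assuming the non-identification area is simply the inverse of the identification-area mask and that enhancement means a brightness, contrast and colour boost:

```python
# Illustrative sketch of step 208: enhance brightness/contrast/colour only in the
# non-identification area, assumed here to be the inverse of the identification mask.
from PIL import Image, ImageEnhance, ImageOps

def enhance_non_identification_area(image, identification_mask,
                                    brightness=1.2, contrast=1.1, color=1.1):
    enhanced = ImageEnhance.Brightness(image).enhance(brightness)
    enhanced = ImageEnhance.Contrast(enhanced).enhance(contrast)
    enhanced = ImageEnhance.Color(enhanced).enhance(color)
    non_identification_mask = ImageOps.invert(identification_mask.convert("L"))
    # Where the mask is white (non-identification area) take the enhanced pixels;
    # elsewhere keep the identification area untouched.
    return Image.composite(enhanced, image, non_identification_mask)

result = enhance_non_identification_area(Image.open("first_target.jpg"),
                                         Image.open("identification_mask.png"))
result.save("enhanced.jpg")
```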
In this embodiment of the present invention, a user's editing operation on a first target image is received; an identification area is determined according to the editing operation; and the image of the identification area is processed according to a first processing rule. During image processing, the method is not limited to global processing of the first target image: the first target image can be divided into local regions according to the user's editing operation, and the identification area of the first target image can be processed locally, so that the processed image has better quality.
Example three
Referring to fig. 3, a block diagram of an electronic device according to a third embodiment of the present invention is shown.
The electronic device provided by the embodiment of the present invention includes: a first receiving module 301, configured to receive a user's editing operation on a first target image; a first determining module 302, configured to determine an identification area according to the editing operation; and a first processing module 303, configured to process the image of the identification area according to a first processing rule.
In this embodiment of the present invention, a user's editing operation on a first target image is received; an identification area is determined according to the editing operation; and the image of the identification area is processed according to a first processing rule. During image processing, the method is not limited to global processing of the first target image: the first target image can be divided into local regions according to the user's editing operation, and the identification area of the first target image can be processed locally, so that the processed image has better quality.
Example four
Referring to fig. 4, a block diagram of an electronic device according to a fourth embodiment of the present invention is shown.
The electronic device provided by the embodiment of the present invention includes: a first receiving module 401, configured to receive a user's editing operation on a first target image; a first determining module 402, configured to determine an identification area according to the editing operation; and a first processing module 403, configured to process the image of the identification area according to a first processing rule.
Preferably, the first processing module 403 includes: a first obtaining sub-module 4031, configured to acquire parameter information of the identification area; a first receiving sub-module 4032, configured to receive a second target image selected by the user; and a first replacing sub-module 4033, configured to replace the image of the identification area with the second target image according to the parameter information.
Preferably, the first processing module 403 includes: a second obtaining sub-module 4034, configured to obtain a first region image of the identification region; a determination sub-module 4035 for determining the second area image selected by the user; an adjusting submodule 4036, configured to adjust the second area image according to the first area image, to obtain an adjusted second area image; a second replacing sub-module 4037, configured to replace the first area image with the adjusted second area image.
Preferably, the electronic device further includes: a second receiving module 404, configured to receive a segmentation operation performed by the user on the first target image before the first receiving module 401 receives an editing operation performed by the user on the first target image; and a segmentation module 405, configured to segment the first target image into a plurality of sub-regions according to the segmentation operation. The first receiving module 401 is specifically configured to receive the user's editing operation on each sub-region; the first determining module 402 is specifically configured to determine, for each sub-area, the identification area of the sub-area according to the editing operation.
Preferably, the electronic device further includes: a second determining module 406, configured to determine, after the first determining module 402 determines an identified region according to the editing operation, a non-identified region according to the editing operation; a second processing module 407, configured to perform enhancement processing on the image parameters in the non-identified region after the first processing module 403 processes the identified region according to a first processing rule, where the image parameters include any one of: image brightness, image contrast, and image color.
The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 2, and is not described herein again to avoid repetition.
In this embodiment of the present invention, a user's editing operation on a first target image is received; an identification area is determined according to the editing operation; and the image of the identification area is processed according to a first processing rule. During image processing, the method is not limited to global processing of the first target image: the first target image can be divided into local regions according to the user's editing operation, and the identification area of the first target image can be processed locally, so that the processed image has better quality.
Example five
Referring to fig. 5, a hardware structure diagram of an electronic device for implementing various embodiments of the present invention is shown.
The electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 5 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A processor 510, configured to receive an editing operation of a first target image by a user; determining an identification area according to the editing operation; and processing the image of the identification area according to a first processing rule.
In this embodiment of the present invention, a user's editing operation on a first target image is received; an identification area is determined according to the editing operation; and the image of the identification area is processed according to a first processing rule. During image processing, the method is not limited to global processing of the first target image: the first target image can be divided into local regions according to the user's editing operation, and the identification area of the first target image can be processed locally, so that the processed image has better quality.
It should be understood that, in this embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during a message sending and receiving process or a call process; specifically, it receives downlink data from a base station and then delivers the downlink data to the processor 510 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 502, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the electronic apparatus 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042. The graphics processor 5041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or another storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 501 and then output.
The electronic device 500 also includes at least one sensor 505, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 5061 and/or a backlight when the electronic device 500 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. Touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 5071 using a finger, stylus, or any suitable object or attachment). The touch panel 5071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 510 to determine the type of the touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 5, the touch panel 5071 and the display panel 5061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the electronic device, and is not limited herein.
The interface unit 508 is an interface for connecting an external device to the electronic apparatus 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic apparatus 500 or may be used to transmit data between the electronic apparatus 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 510 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the electronic device. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The electronic device 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system.
In addition, the electronic device 500 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 510, a memory 509, and a computer program that is stored in the memory 509 and can be run on the processor 510, and when the computer program is executed by the processor 510, the processes of the image processing method embodiment are implemented, and the same technical effect can be achieved, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An image processing method applied to an electronic device, the method comprising:
receiving the editing operation of a user on a first target image;
determining an identification area according to the editing operation;
and processing the image of the identification area according to a first processing rule.
2. The method of claim 1, wherein the step of processing the identified region according to the first processing rule comprises:
acquiring parameter information of the identification area;
receiving a second target image selected by a user;
and replacing the image of the identification area with a second target image according to the parameter information.
3. The method of claim 1, wherein the step of processing the identified region according to the first processing rule comprises:
acquiring a first area image of the identification area;
determining a second area image selected by the user;
adjusting the second area image according to the first area image to obtain the adjusted second area image;
replacing the first area image with the adjusted second area image.
4. The method of claim 1, wherein prior to the step of receiving a user editing operation on the first target image, the method further comprises:
receiving a segmentation operation of a user on a first target image;
segmenting the first target image into a plurality of sub-regions according to the segmentation operation;
the step of receiving the editing operation of the user on the first target image comprises the following steps:
receiving the editing operation of a user on each sub-region;
the step of determining an identification area according to the editing operation includes:
and determining the identification area of the sub-area according to the editing operation aiming at each sub-area.
5. The method of claim 1, wherein after the step of determining an identification region according to the editing operation, the method further comprises:
determining a non-recognition area according to the editing operation;
after the step of processing the identified region according to the first processing rule, the method further comprises:
performing enhancement processing on image parameters in the non-identification area, wherein the image parameters comprise any one of the following items: image brightness, image contrast, and image color.
6. An electronic device, characterized in that the electronic device comprises:
the first receiving module is used for receiving the editing operation of a user on the first target image;
the first determining module is used for determining an identification area according to the editing operation;
and the first processing module is used for processing the image of the identification area according to a first processing rule.
7. The electronic device of claim 6, wherein the first processing module comprises:
the first obtaining submodule is used for obtaining the parameter information of the identification area;
the first receiving submodule is used for receiving a second target image selected by a user;
and the first replacing submodule is used for replacing the image of the identification area with a second target image according to the parameter information.
8. The electronic device of claim 6, wherein the first processing module comprises:
the second acquisition submodule is used for acquiring a first area image of the identification area;
a determination submodule for determining the second region image selected by the user;
the adjusting submodule is used for adjusting the second area image according to the first area image to obtain the adjusted second area image;
a second replacement sub-module, configured to replace the first area image with the adjusted second area image.
9. The electronic device of claim 6, further comprising:
the second receiving module is used for receiving the segmentation operation of the first target image by the user before the first receiving module receives the editing operation of the first target image by the user;
a segmentation module for segmenting the first target image into a plurality of sub-regions according to the segmentation operation;
the first receiving module is specifically configured to: receiving the editing operation of a user on each sub-region;
the first determining module is specifically configured to: determining the identification area of the sub-area according to the editing operation aiming at each sub-area.
10. The electronic device of claim 6, further comprising:
the second determining module is used for determining a non-recognition area according to the editing operation after the first determining module determines the recognition area according to the editing operation;
a second processing module, configured to perform enhancement processing on image parameters in the non-recognition area after the first processing module processes the recognition area according to a first processing rule, where the image parameters include any one of: image brightness, image contrast, and image color.
CN202010192688.3A 2020-03-18 2020-03-18 Image processing method and electronic equipment Pending CN111402271A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010192688.3A CN111402271A (en) 2020-03-18 2020-03-18 Image processing method and electronic equipment
PCT/CN2021/080138 WO2021185142A1 (en) 2020-03-18 2021-03-11 Image processing method, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010192688.3A CN111402271A (en) 2020-03-18 2020-03-18 Image processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN111402271A true CN111402271A (en) 2020-07-10

Family

ID=71432611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010192688.3A Pending CN111402271A (en) 2020-03-18 2020-03-18 Image processing method and electronic equipment

Country Status (2)

Country Link
CN (1) CN111402271A (en)
WO (1) WO2021185142A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021185142A1 (en) * 2020-03-18 2021-09-23 维沃移动通信有限公司 Image processing method, electronic device and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114429493B (en) * 2022-01-26 2023-05-09 数坤(北京)网络科技股份有限公司 Image sequence processing method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104282031A (en) * 2014-09-19 2015-01-14 广州三星通信技术研究有限公司 Method and device for processing picture to be output and terminal
CN109144361A (en) * 2018-07-09 2019-01-04 维沃移动通信有限公司 A kind of image processing method and terminal device
CN109859211A (en) * 2018-12-28 2019-06-07 努比亚技术有限公司 A kind of image processing method, mobile terminal and computer readable storage medium
CN110147805A (en) * 2018-07-23 2019-08-20 腾讯科技(深圳)有限公司 Image processing method, device, terminal and storage medium
CN110335277A (en) * 2019-05-07 2019-10-15 腾讯科技(深圳)有限公司 Image processing method, device, computer readable storage medium and computer equipment
CN110766606A (en) * 2019-10-29 2020-02-07 维沃移动通信有限公司 Image processing method and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1826723B1 (en) * 2006-02-28 2015-03-25 Microsoft Corporation Object-level image editing
JP2014085796A (en) * 2012-10-23 2014-05-12 Sony Corp Information processing device and program
CN111402271A (en) * 2020-03-18 2020-07-10 维沃移动通信有限公司 Image processing method and electronic equipment


Also Published As

Publication number Publication date
WO2021185142A1 (en) 2021-09-23


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination