WO2023108444A1 - Image processing method, intelligent terminal, and storage medium - Google Patents

Image processing method, intelligent terminal, and storage medium Download PDF

Info

Publication number
WO2023108444A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
image area
area
attribute
Prior art date
Application number
PCT/CN2021/138111
Other languages
French (fr)
Chinese (zh)
Inventor
赵伟
Original Assignee
深圳传音控股股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳传音控股股份有限公司
Priority to PCT/CN2021/138111 priority Critical patent/WO2023108444A1/en
Publication of WO2023108444A1 publication Critical patent/WO2023108444A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation

Definitions

  • the present application relates to the technical field of image processing, and in particular to an image processing method, an intelligent terminal and a storage medium.
  • part of the original image can be adjusted through the smart terminal, so that the adjusted original image looks refreshed and more attractive.
  • the inventors found that there are at least the following problems: first, the operator performs matting (cutout) processing on the original image through the smart terminal according to matting experience to obtain a partial image of the original image; secondly, the operator adjusts the partial image through the smart terminal according to image adjustment experience to obtain an adjusted partial image; finally, the operator replaces the partial image in the original image with the adjusted partial image through the smart terminal to obtain the target image corresponding to the original image.
  • the operator performs matting processing on the original image according to matting processing experience to obtain a partial image, which makes the operation of obtaining the target image complicated, resulting in low efficiency in obtaining the target image.
  • the present application provides an image processing method, an intelligent terminal and a storage medium, so as to improve the efficiency of obtaining a target image.
  • the present application provides an image processing method, comprising the following steps: S10: acquiring at least one image area included in an original image; S11: determining a target image area in the at least one image area; S12: adjusting attribute information of the image in the target image area, and confirming or generating a target image corresponding to the original image.
  • step S10 includes: performing pixel detection processing on the original image through a pixel detection model to obtain labels of each pixel in the original image; and obtaining at least one image region from the original image according to the labels of each pixel.
  • obtaining at least one image region from the original image according to the label of each pixel includes: partitioning the original image according to the label of each pixel to obtain at least one second image region, where the pixels included in each second image region have the same label; partitioning the original image through a pixel classification model to obtain at least one first image region; and segmenting the at least one first image region according to the at least one second image region to obtain the at least one image region.
  • performing partition processing on the original image through a pixel classification model to obtain at least one first image region includes: performing image detection processing on the original image through the pixel classification model to obtain at least one image category; and performing partition processing on the original image according to the at least one image category to obtain the at least one first image region, where each first image region is an area corresponding to one image category.
  • step S11 includes: displaying edges of each image area; receiving a touch command, where the touch command includes a target position; and determining an image area corresponding to an edge surrounding the target position in at least one image area as the target image area.
  • step S11 includes: displaying the edges of each image area; receiving a touch instruction on the target edge; determining an image area surrounded by the target edge in at least one image area as the target image area;
  • the target edge is at least one of the following:
  • any one of the edges of each image area; or an edge obtained by adjusting any one of the edges of each image area according to the operation track of a closed-area drawing operation input by the user.
  • displaying the edge of each image area includes: displaying the edge of each image area according to a pre-stored preset format; or receiving a selection instruction for a target format among at least one preset format, and displaying the edge of each image area according to the target format.
  • step S12 includes: displaying at least one style adjustment parameter set, where the style adjustment parameter set includes preset attribute values corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, hue, saturation, highlight, lowlight, and sharpness; receiving a selection instruction for selecting a target style adjustment parameter set from the at least one style adjustment parameter set; and using the target style adjustment parameter set to adjust the attribute information of the image in the target image area, so as to confirm or generate the target image corresponding to the original image.
  • step S12 includes: receiving an attribute adjustment instruction for the image in the target image area, where the attribute adjustment instruction includes a custom attribute value corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, hue, saturation, highlight, lowlight, and sharpness; and setting the initial value corresponding to the at least one attribute included in the attribute information of the image in the target image area to the custom attribute value corresponding to the at least one attribute, so as to confirm or generate the target image corresponding to the original image.
  • the present application provides an image processing method, comprising the following steps: S21: determining an image to be processed in an original image; S22: adjusting attribute information of the image to be processed, and confirming or generating a target image corresponding to the original image.
  • optionally, a step S20 of acquiring the original image may also be included.
  • This application uses S20, S21, and S22 as examples for illustration.
  • step S21 includes: performing image partition processing on the original image to obtain at least one image area; determining a target image area in the at least one image area; and determining the image in the target image area as the image to be processed.
  • image partition processing is performed on the original image to obtain at least one image region, including: performing pixel detection processing on the original image through a pixel detection model to obtain the label of each pixel in the original image; and obtaining at least one image region from the original image according to the label of each pixel.
  • obtaining at least one image region from the original image according to the label of each pixel includes: partitioning the original image according to the label of each pixel to obtain at least one second image region, where the pixels included in each second image region have the same label; partitioning the original image through a pixel classification model to obtain at least one first image region; and segmenting the at least one first image region according to the at least one second image region to obtain the at least one image region.
  • performing partition processing on the original image through a pixel classification model to obtain at least one first image region includes: performing image detection processing on the original image through the pixel classification model to obtain at least one image category; and performing partition processing on the original image according to the at least one image category to obtain the at least one first image region, where each first image region is an area corresponding to one image category.
  • determining the target image area in at least one image area includes: displaying edges of each image area; receiving a touch command, where the touch command includes a target position; and determining the image area corresponding to an edge surrounding the target position in the at least one image area as the target image area.
  • determining the target image area in at least one image area includes: displaying the edges of each image area; receiving a touch instruction on the target edge; determining the image area surrounded by the target edge in at least one image area as the target image area;
  • the target edge is at least one of the following:
  • any one of the edges of each image area; or an edge obtained by adjusting any one of the edges of each image area according to the operation track of a closed-area drawing operation input by the user.
  • displaying the edge of each image area includes: displaying the edge of each image area according to a pre-stored preset format; or receiving a selection instruction for a target format among at least one preset format, and displaying the edge of each image area according to the target format.
  • step S22 includes: displaying at least one style adjustment parameter set, where the style adjustment parameter set includes preset attribute values corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, hue, saturation, highlight, lowlight, and sharpness; receiving a selection instruction for selecting a target style adjustment parameter set from the at least one style adjustment parameter set; and using the target style adjustment parameter set to adjust the attribute information of the image in the target image area, so as to confirm or generate the target image corresponding to the original image.
  • step S22 includes: receiving an attribute adjustment instruction for the image to be processed, where the attribute adjustment instruction includes a custom attribute value corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, hue, saturation, highlight, lowlight, and sharpness; and setting the initial value corresponding to the at least one attribute included in the attribute information of the image to be processed to the custom attribute value corresponding to the at least one attribute, so as to confirm or generate the target image corresponding to the original image.
  • the present application also provides an image processing device, including: a processing module; the processing module is used for:
  • acquire at least one image area included in the original image; determine a target image area in the at least one image area; and adjust the attribute information of the image in the target image area to confirm or generate the target image corresponding to the original image.
  • the processing module is specifically configured to: use a pixel detection model to perform pixel detection processing on the original image to obtain a label of each pixel in the original image; obtain at least one image region from the original image according to the label of each pixel.
  • the processing module is specifically configured to: partition the original image according to the label of each pixel to obtain at least one second image area, where the labels of the pixels included in each second image area are the same; perform partition processing on the original image through the pixel classification model to obtain at least one first image region; and segment the at least one first image region according to the at least one second image region to obtain the at least one image region.
  • the processing module is specifically configured to: perform image detection processing on the original image through a pixel classification model to obtain at least one image category; and perform partition processing on the original image according to the at least one image category to obtain the at least one first image region, where each first image region is an area corresponding to one image category.
  • the processing module is specifically configured to: control the display module to display the edge of each image area; receive a touch command, the touch command includes the target position; determine the image area corresponding to the edge surrounding the target position in at least one image area as target image area.
  • the processing module is specifically configured to: control the display module to display the edges of each image area; receive a touch instruction on the target edge; determine the image area surrounded by the target edge in at least one image area as the target image area;
  • the target edge is at least one of the following:
  • any one of the edges of each image area; or an edge obtained by adjusting any one of the edges of each image area according to the operation track of a closed-area drawing operation input by the user.
  • the processing module is specifically configured to: control the display module to display the edges of each image area according to a pre-stored preset format; or, receive a selection instruction for a target format among at least one preset format, and control the display module to display the edges of each image area according to the target format.
  • the processing module is specifically configured to: display at least one style adjustment parameter set, where the style adjustment parameter set includes preset attribute values corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, hue, saturation, highlight, lowlight, and sharpness; receive a selection instruction for selecting a target style adjustment parameter set from the at least one style adjustment parameter set; and use the target style adjustment parameter set to adjust the attribute information of the image in the target image area, so as to confirm or generate the target image corresponding to the original image.
  • the processing module is specifically configured to: receive an attribute adjustment instruction for the image in the target image area, where the attribute adjustment instruction includes a custom attribute value corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, hue, saturation, highlight, lowlight, and sharpness; and set the initial value corresponding to the at least one attribute included in the attribute information of the image in the target image area to the custom attribute value corresponding to the at least one attribute, so as to confirm or generate the target image corresponding to the original image.
  • the present application also provides an image processing device, including: a processing module; the processing module is used to: acquire an original image; determine an image to be processed in the original image; and adjust the attribute information of the image to be processed to confirm or generate the target image corresponding to the original image.
  • the processing module is specifically configured to: perform image partition processing on the original image to obtain at least one image region; determine a target image area in the at least one image area; and determine the image in the target image area as the image to be processed.
  • the processing module is specifically configured to: use a pixel detection model to perform pixel detection processing on the original image to obtain a label of each pixel in the original image; obtain at least one image region from the original image according to the label of each pixel.
  • the processing module is specifically configured to: perform partition processing on the original image according to the label of each pixel to obtain at least one second image region; perform partition processing on the original image through a pixel classification model to obtain at least one first image region; and perform segmentation processing on the at least one first image region according to the at least one second image region to obtain the at least one image region.
  • the processing module is specifically configured to: perform image detection processing on the original image through a pixel classification model to obtain at least one image category; and perform partition processing on the original image according to the at least one image category to obtain the at least one first image region, where each first image region is an area corresponding to one image category.
  • the processing module is specifically configured to: control the display module to display the edge of each image area; receive a touch command, the touch command includes the target position; determine the image area corresponding to the edge surrounding the target position in at least one image area as target image area.
  • the processing module is specifically configured to: control the display module to display the edges of each image area; receive a touch instruction on the target edge; determine the image area surrounded by the target edge in at least one image area as the target image area;
  • the target edge is at least one of the following:
  • any one of the edges of each image area; or an edge obtained by adjusting any one of the edges of each image area according to the operation track of a closed-area drawing operation input by the user.
  • the processing module is specifically configured to: control the display module to display the edges of each image area according to a pre-stored preset format; or, receive a selection instruction for a target format among at least one preset format, and control the display module to display the edges of each image area according to the target format.
  • the processing module is specifically configured to: display at least one style adjustment parameter set, where the style adjustment parameter set includes preset attribute values corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, hue, saturation, highlight, lowlight, and sharpness; receive a selection instruction for selecting a target style adjustment parameter set from the at least one style adjustment parameter set; and use the target style adjustment parameter set to adjust the attribute information of the image in the target image area, so as to confirm or generate the target image corresponding to the original image.
  • the processing module is specifically configured to: receive an attribute adjustment instruction for the image to be processed, where the attribute adjustment instruction includes a custom attribute value corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, hue, saturation, highlight, lowlight, and sharpness; and set the initial value corresponding to the at least one attribute included in the attribute information of the image to be processed to the custom attribute value corresponding to the at least one attribute, so as to confirm or generate the target image corresponding to the original image.
  • the present application also provides an intelligent terminal, including: a memory and a processor; optionally, an image processing program is stored in the memory, and when the image processing program is executed by the processor, the steps of the image processing method in any one of the above first aspect or second aspect are implemented.
  • the present application further provides a computer storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the image processing method in any one of the above-mentioned first aspect or second aspect are implemented.
  • the present application further provides a computer program product, including a computer program; when the computer program is executed by a processor, the steps of the image processing method in any one of the above-mentioned first aspect or second aspect are implemented.
  • the present application provides an image processing method, an intelligent terminal, and a storage medium, the method comprising: acquiring at least one image area included in an original image; determining a target image area in at least one image area; adjusting attribute information of an image in the target image area , confirm or generate the target image corresponding to the original image.
  • at least one image area included in the original image is acquired, and the target image area is determined in the at least one image area, which can avoid matting processing on the original image, simplifies the operation of obtaining the target image area, and improves the efficiency of generating the target image.
  • FIG. 1 is a schematic diagram of a hardware structure of an intelligent terminal implementing various embodiments of the present application
  • FIG. 2 is a system architecture diagram of a communication network provided by an embodiment of the present application.
  • Fig. 3 is a schematic diagram of the hardware structure of the controller 140 shown according to the first embodiment
  • Fig. 4 is a schematic diagram of a hardware structure of a network node 150 shown according to the first embodiment
  • FIG. 5 is a schematic diagram of a hardware structure of a network node 160 shown according to the first embodiment
  • FIG. 6 is a schematic diagram of the hardware structure of the controller 170 shown according to the second embodiment
  • FIG. 7 is a schematic diagram of a hardware structure of a network node 180 according to a second embodiment
  • FIG. 8 is a schematic diagram of an application scenario provided by this application.
  • FIG. 9 is a flowchart of an image processing method provided by the present application.
  • FIG. 10 is a schematic diagram of obtaining at least one image region according to labels provided by the present application.
  • FIG. 11 is a schematic diagram of obtaining at least one image region according to the image category provided by the present application.
  • FIG. 12 is a schematic diagram of an attribute control corresponding to each attribute provided by the present application.
  • FIG. 13 is a flowchart of a method for acquiring at least one image area provided by the present application.
  • FIG. 14 is a flowchart of another image processing method provided by the present application.
  • FIG. 15 is a schematic structural diagram of an image processing device provided by the present application.
  • FIG. 16 is a schematic diagram of the hardware of the smart terminal provided by the present application.
  • first, second, third, etc. may be used herein to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of this document, first information may also be called second information, and similarly, second information may also be called first information.
  • the singular forms "a”, “an” and “the” are intended to include the plural forms as well, unless the context indicates otherwise.
  • “A, B, C”, “A, B or C” or “A, B and/or C” means “any of the following: A; B; C; A and B; A and C; B and C; A and B and C”. Exceptions to this definition will only arise when combinations of elements, functions, steps or operations are inherently mutually exclusive in some way.
  • the word “if” as used herein may be interpreted as “when” or “upon” or “in response to determining” or “in response to detecting”.
  • the phrases “if determined” or “if (the stated condition or event) is detected” may be interpreted as “when determined” or “in response to the determination” or “when (the stated condition or event) is detected” or “in response to detection of (the stated condition or event)”.
  • step codes such as S601, S602, and S603 are used herein for the purpose of expressing the corresponding content more clearly and concisely, and do not constitute a substantive limitation on the order; for example, it is permissible to execute S603 first and then execute S601 and S602, and such variations should still fall within the scope of protection of this application. It should be understood that the specific embodiments described here are only used to explain the present application, and are not intended to limit the present application.
  • Smart terminals can be implemented in various forms.
  • the intelligent terminal described in this application may include mobile terminals such as smart phones, tablet computers, notebook computers, palmtop computers, PDAs (Personal Digital Assistant), PMPs (Portable Media Player), netbooks, cameras, video cameras, virtual reality equipment, navigation devices, wearable devices, smart bracelets and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
  • a smart terminal will be taken as an example, and those skilled in the art will understand that, in addition to elements specially used for mobile purposes, the configurations according to the embodiments of the present application can also be applied to fixed-type terminals.
  • FIG. 1 is a schematic diagram of the hardware structure of a smart terminal implementing various embodiments of the present application.
  • the smart terminal 100 may include components such as an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111.
  • the radio frequency unit 101 can be used for sending and receiving information or receiving and sending signals during a call. Specifically, after receiving the downlink information of the base station, it is processed by the processor 110; in addition, the uplink data is sent to the base station.
  • the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 101 can also communicate with the network and other devices through wireless communication.
  • the above wireless communication can use any communication standard or protocol, including but not limited to GSM (Global System of Mobile communication, Global System for Mobile Communications), GPRS (General Packet Radio Service, General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000 , Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access, Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access, Time Division Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution, frequency division duplex long-term evolution), TDD-LTE (Time Division Duplexing-Long Term Evolution, time-division duplex long-term evolution) and 5G, etc.
  • WiFi is a short-distance wireless transmission technology.
  • the smart terminal can help users send and receive emails, browse web pages, and access streaming media, etc., and it provides users with wireless broadband Internet access.
  • although FIG. 1 shows the WiFi module 102, it can be understood that it is not an essential component of the smart terminal and can be omitted as required without changing the essence of the present application.
  • the audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound when the smart terminal 100 is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like.
  • the audio output unit 103 may also provide audio output related to specific functions performed by the smart terminal 100 (for example, call signal receiving sound, message receiving sound, etc.).
  • the audio output unit 103 may include a speaker, a buzzer, and the like.
  • the A/V input unit 104 is used to receive audio or video signals.
  • the A/V input unit 104 may include a GPU (Graphics Processing Unit) 1041 and a microphone 1042; the graphics processor 1041 processes the image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
  • the processed image frames may be displayed on the display unit 106 .
  • the image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage media) or sent via the radio frequency unit 101 or the WiFi module 102 .
  • the microphone 1042 can receive sound (audio data) in a phone call mode, a recording mode, a voice recognition mode, and similar operating modes, and can process such sound into audio data.
  • the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 for output in case of a phone call mode.
  • the microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the process of receiving and transmitting audio signals.
  • the smart terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the smart terminal 100 is moved close to the ear.
  • as a kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the mobile phone (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and for vibration-recognition related functions (such as a pedometer or tapping); other sensors that can also be configured on the mobile phone, such as fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, and infrared sensors, will not be described in detail here.
  • the display unit 106 is used to display information input by the user or information provided to the user.
  • the display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of LCD (Liquid Crystal Display, liquid crystal display), OLED (Organic Light-Emitting Diode, organic light-emitting diode), and the like.
  • the user input unit 107 can be used to receive input numbers or character information, and generate key signal input related to user settings and function control of the smart terminal.
  • the user input unit 107 may include a touch panel 1071 and other input devices 1072 .
  • the touch panel 1071, also referred to as a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program.
  • the touch panel 1071 may include two parts, a touch detection device and a touch controller.
  • the touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and can receive commands sent by the processor 110 and execute them.
  • the touch panel 1071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 107 may also include other input devices 1072 .
  • other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons and a switch button), a trackball, a mouse, and a joystick, which are not specifically limited here.
  • the touch panel 1071 may cover the display panel 1061.
  • when the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event.
  • the touch panel 1071 and the display panel 1061 are used as two independent components to realize the input and output functions of the smart terminal, in some embodiments, the touch panel 1071 and the display panel 1061 can be integrated.
  • the implementation of the input and output functions of the smart terminal is not specifically limited here.
  • the interface unit 108 is used as an interface through which at least one external device can be connected with the smart terminal 100 .
  • an external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) ports, video I/O ports, headphone ports, and more.
  • the interface unit 108 can be used to receive input (for example, data information, power, etc.) from an external device and transmit the received input to one or more elements in the smart terminal 100, or it can be used to transfer data between the smart terminal 100 and the external device.
  • the memory 109 can be used to store software programs as well as various data.
  • the memory 109 can mainly include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like; the storage data area can store data (such as audio data and a phone book) created according to the use of the mobile phone, and the like.
  • the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the processor 110 is the control center of the smart terminal, and uses various interfaces and lines to connect various parts of the whole smart terminal, by running or executing software programs and/or modules stored in the memory 109, and calling data stored in the memory 109 , execute various functions of the smart terminal and process data, so as to monitor the smart terminal as a whole.
  • the processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor.
  • the application processor mainly processes operating systems, user interfaces, and application programs, etc.
  • the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may not be integrated into the processor 110.
  • the smart terminal 100 can also include a power supply 111 (such as a battery) for supplying power to various components.
  • the power supply 111 can be logically connected to the processor 110 through a power management system, so as to manage functions such as charging, discharging, and power consumption through the power management system.
  • the smart terminal 100 may also include a Bluetooth module, etc., which will not be repeated here.
  • the following describes the communication network system on which the smart terminal of the present application is based.
  • FIG. 2 is a structure diagram of a communication network system provided by an embodiment of the present application.
  • the communication network system is an LTE system of general mobile communication technology.
  • the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP service 204, which are connected in sequence.
  • the UE 201 may be the above-mentioned terminal 100, which will not be repeated here.
  • E-UTRAN 202 includes eNodeB 2021 and other eNodeB 2022 and so on.
  • the eNodeB 2021 can be connected to other eNodeB 2022 through a backhaul (for example, X2 interface), the eNodeB 2021 is connected to the EPC 203 , and the eNodeB 2021 can provide access from the UE 201 to the EPC 203 .
  • EPC203 may include MME (Mobility Management Entity, Mobility Management Entity) 2031, HSS (Home Subscriber Server, Home Subscriber Server) 2032, other MME2033, SGW (Serving Gate Way, Serving Gateway) 2034, PGW (PDN Gate Way, packet data Network Gateway) 2035 and PCRF (Policy and Charging Rules Function, Policy and Charging Functional Entity) 2036, etc.
  • MME2031 is a control node that processes signaling between UE201 and EPC203, and provides bearer and connection management.
  • HSS 2032 is used to provide registers to manage functions such as the home location register (not shown in the figure), and to save user-specific information about service features and data rates.
  • PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources; it selects and provides available policy and charging control decisions for the policy and charging enforcement functional unit (not shown).
  • the IP service 204 may include Internet, Intranet, IMS (IP Multimedia Subsystem, IP Multimedia Subsystem) or other IP services.
  • although the LTE system is described above as an example, those skilled in the art should know that this application is not only applicable to the LTE system, but is also applicable to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems (such as 5G), which is not limited here.
  • first, the operator performs matting processing on the original image through the smart terminal according to matting experience to obtain a partial image of the original image; secondly, the operator adjusts the partial image through the smart terminal according to image adjustment experience to obtain an adjusted partial image; finally, the operator replaces the partial image in the original image with the adjusted partial image through the smart terminal to obtain the target image corresponding to the original image.
  • the operator performs matting processing on the original image according to matting processing experience to obtain a partial image, which makes the operation of obtaining the target image complicated, resulting in low efficiency in obtaining the target image.
  • in order to improve the efficiency of obtaining the target image, the applicant provides an image processing method in this application, which divides the original image into multiple partitions; the user can select any one of the partitions as needed and adjust the attribute information of that partition to obtain the target image, thereby simplifying the operation of obtaining the target image and improving the efficiency of obtaining the target image.
  • FIG. 3 is a schematic diagram of a hardware structure of a controller 140 provided in the present application.
  • the controller 140 includes: a memory 1401 and a processor 1402, the memory 1401 is used to store program instructions, and the processor 1402 is used to call the program instructions in the memory 1401 to execute the steps performed by the controller in the first method embodiment above, and its implementation principle and beneficial effects are similar, and will not be repeated here.
  • the foregoing controller further includes a communication interface 1403 , and the communication interface 1403 may be connected to the processor 1402 through a bus 1404 .
  • the processor 1402 can control the communication interface 1403 to implement the receiving and sending functions of the controller 140 .
  • FIG. 4 is a schematic diagram of a hardware structure of a network node 150 provided in the present application.
  • the network node 150 includes: a memory 1501 and a processor 1502, the memory 1501 is used to store program instructions, and the processor 1502 is used to call the program instructions in the memory 1501 to execute the steps performed by the first node in the first method embodiment above, and its implementation principle and beneficial effects are similar, and will not be repeated here.
  • the foregoing network node further includes a communication interface 1503, and the communication interface 1503 may be connected to the processor 1502 through a bus 1504.
  • the processor 1502 can control the communication interface 1503 to realize the functions of receiving and sending of the network node 150 .
  • FIG. 5 is a schematic diagram of a hardware structure of a network node 160 provided in the present application.
  • the network node 160 includes: a memory 1601 and a processor 1602, the memory 1601 is used to store program instructions, and the processor 1602 is used to call the program instructions in the memory 1601 to execute the steps performed by the intermediate node and the tail node in the first method embodiment above, The implementation principles and beneficial effects are similar, and will not be repeated here.
  • the foregoing network node further includes a communication interface 1603, and the communication interface 1603 may be connected to the processor 1602 through a bus 1604.
  • the processor 1602 can control the communication interface 1603 to realize the functions of receiving and sending of the network node 160 .
  • FIG. 6 is a schematic diagram of a hardware structure of a controller 170 provided in the present application.
  • the controller 170 includes: a memory 1701 and a processor 1702, the memory 1701 is used to store program instructions, and the processor 1702 is used to call the program instructions in the memory 1701 to execute the steps performed by the controller in the second method embodiment above, and its implementation principle and beneficial effects are similar, and will not be repeated here.
  • FIG. 7 is a schematic diagram of a hardware structure of a network node 180 provided in the present application.
  • the network node 180 includes: a memory 1801 and a processor 1802, the memory 1801 is used to store program instructions, and the processor 1802 is used to invoke the program instructions in the memory 1801 to execute the steps performed by the head node in the second method embodiment above, and its implementation principle and beneficial effects are similar, and will not be repeated here.
  • the above-mentioned integrated modules implemented in the form of software function modules can be stored in a computer-readable storage medium.
  • the above-mentioned software function modules are stored in a storage medium, and include several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute some of the steps of the methods described in the various embodiments of the present application.
  • a computer program product includes one or more computer instructions.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server, a data center, etc. integrated with one or more available media.
  • Available media may be magnetic media (eg, floppy disk, hard disk, magnetic tape), optical media (eg, DVD), or semiconductor media (eg, solid state disk, SSD), etc.
  • FIG. 8 is a schematic diagram of an application scenario provided by this application. As shown in Figure 8, it includes: an original image 11, an original image 12 including at least one image partition, and a target image 13.
  • at least one image partition includes 4 image areas, namely 1, 2, 3, and 4.
  • any image region in at least one image partition may be determined as the target image partition.
  • image region 4 is determined as the target image partition.
  • the attribute information (such as color) of the target image partition (image area 4 ) is adjusted, and the target image 13 is confirmed or generated.
  • the target image area is determined in at least one image area, the attribute information of the target image area is adjusted, and the target image corresponding to the original image is confirmed or generated, which can avoid matting the original image to obtain a part of the original image , which simplifies the operation of obtaining the target image area, further simplifies the operation of obtaining the target image, and improves the efficiency of obtaining the target image.
  • the execution subject of the image processing method provided in the present application may be a smart terminal, or an image processing device installed on the smart terminal, and the image processing device may be realized by a combination of software and/or hardware.
  • the smart terminal may be a wired terminal or a wireless terminal.
  • the wired terminal may be a desktop computer.
  • the wireless terminal may be a device such as a tablet computer, a notebook computer, a personal digital assistant (Personal Digital Assistant, PDA for short), and a mobile phone.
  • the software can be, for example, any kind of image editing software.
  • FIG. 9 is a flowchart of an image processing method provided by the present application. As shown in Figure 9, the method includes:
  • the execution subject of the image processing method provided in this application may be the above-mentioned smart terminal, or an image processing device installed in the smart terminal.
  • the image processing device may be realized by a combination of software and/or hardware.
  • the software includes, but is not limited to, various image (retouching) processing software.
  • At least one image region may be acquired in the following two manners (modes 11 and 12).
  • Mode 11 Perform pixel detection processing on the original image through the pixel detection model to obtain the label of each pixel in the original image; obtain at least one image region from the original image according to the label of each pixel.
  • obtaining at least one image region from the original image according to the label of each pixel includes: performing partition processing on the original image according to the label of each pixel to obtain at least one image region.
  • the labels of the pixels included in each image region are the same.
  • the pixel detection model may be a semantic segmentation (semantic segmentation) algorithm model.
  • the label of the pixel can be, for example, a preset number 1, 2, 3, 4, 5 and so on.
  • FIG. 10 is a schematic diagram of obtaining at least one image region according to tags provided by the present application. As shown in FIG. 10 , it includes: the original image 31 and the labels of each pixel in the original image 31 .
  • the label is 1, 2, 3, 4 or 5.
  • the original image is partitioned to obtain at least one image region.
  • pixels labeled 1 form one image region
  • pixels labeled 2 form another image region.
  • when pixels corresponding to the same label are not adjacent in the original image, that label may form N image areas in the at least one image area, where N is an integer greater than or equal to 1.
  • the label 5 may form two image areas in at least one image area, and the two image areas are respectively image areas bordered by dotted lines 32 and 33 .
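  • As a rough, non-authoritative sketch of Mode 11, the Python snippet below assumes the pixel detection model's output is available as a per-pixel label map and splits each label into connected regions with SciPy; the function name and the use of scipy.ndimage are illustrative assumptions, not part of this application.

```python
import numpy as np
from scipy import ndimage

def regions_from_labels(label_map):
    """Split a per-pixel label map (H x W integer array) into image regions.

    Pixels sharing a label are grouped per connected component, so a label
    whose pixels are not adjacent yields several image areas (cf. label 5
    in FIG. 10). Returns a list of boolean masks, one per image area.
    """
    regions = []
    for label in np.unique(label_map):
        mask = label_map == label
        components, count = ndimage.label(mask)  # connected-component split
        for k in range(1, count + 1):
            regions.append(components == k)
    return regions

# Toy label map: label 5 appears in two disconnected patches -> two areas.
demo = np.array([[1, 1, 5, 5],
                 [1, 1, 2, 2],
                 [5, 5, 2, 2]])
print(len(regions_from_labels(demo)))  # 4 image areas
```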
  • Mode 12 Perform image detection processing on the original image by using a pixel classification model to obtain at least one image category; perform partition processing on the original image according to the at least one image category to obtain at least one image region.
  • the pixel classification model may be an instance segmentation (Instance segmentation) algorithm model.
  • At least one image category may include: a person category, a sky category, an ocean category, a grass category, a road category, a tree category, a building category, a vehicle category, and the like.
  • FIG. 11 is a schematic diagram of obtaining at least one image area according to image categories provided by the present application. As shown in FIG. 11 , it includes: an original image 41 and at least one image category in the original image 41 .
  • At least one image category includes: person category, sky category, road category, tree category, building category, and vehicle category
  • the image categories are distinguished by different colors as shown in FIG. 11 .
  • each image category may correspond to M image areas in at least one image area, where M is an integer greater than or equal to 1.
  • the vehicle class corresponds to 2 image areas in at least one image area.
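  • A corresponding minimal sketch for Mode 12, assuming the pixel classification (instance segmentation) result arrives as a list of boolean masks plus category names rather than any particular model's API; merging all instances of a category into one partition reproduces the behaviour described for FIG. 11.

```python
import numpy as np

def partitions_from_categories(instance_masks, categories):
    """Build one image partition per image category (Mode 12).

    instance_masks: list of boolean H x W masks, one per detected instance.
    categories: category name for each instance, e.g. "vehicle" or "person".
    All instances of a category are merged into one partition, which is why
    two vehicles can fall into the same area, as noted for FIG. 11.
    """
    partitions = {}
    for mask, category in zip(instance_masks, categories):
        if category in partitions:
            partitions[category] = partitions[category] | mask
        else:
            partitions[category] = mask.copy()
    return partitions

# Two vehicle instances merge into a single "vehicle" partition.
m1 = np.zeros((4, 4), dtype=bool); m1[0, 0] = True
m2 = np.zeros((4, 4), dtype=bool); m2[3, 3] = True
print(list(partitions_from_categories([m1, m2], ["vehicle", "vehicle"])))
```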
  • the target image area can be determined in the following three ways (modes 21, 22 and 23).
  • Mode 21 displaying edges of each image area; receiving a touch command, where the touch command includes a target position; determining an image area corresponding to an edge surrounding the target position in at least one image area as the target image area.
  • the image area bordered by the dotted line 32 is determined as the target image area.
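  • Mode 21 then reduces to a point-in-region lookup; the sketch below assumes each image area is available as a boolean mask and is only one possible reading of the touch-command handling.

```python
def target_region_for_touch(regions, target_position):
    """Mode 21: map a touch command's target position to a target image area.

    regions: list of boolean masks, one per image area (same shape as the
    original image). target_position: (row, col) of the touch.
    Returns the index of the image area whose edge surrounds the position,
    or None if the touch lies outside every detected area.
    """
    row, col = target_position
    for index, mask in enumerate(regions):
        if mask[row, col]:
            return index  # this area becomes the target image area
    return None
```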
  • Mode 22 displaying the edges of each image area; receiving a touch instruction on the target edge; determining an image area surrounded by the target edge in at least one image area as the target image area.
  • the target edge is at least one of the following: any one of the edges of each image area; or an edge obtained by adjusting any one of the edges of each image area according to the operation track of a closed-area drawing operation input by the user.
  • the operation track may be input, and the image area in the original image bordered by the operation track is determined as the target image area.
  • Mode 23 displaying the edges of each image area; in response to canceling the display operation, canceling the display of the edges of each image area; in response to the closed area drawing operation performed in the original image, obtaining the operation track of the closed area drawing operation, and converting the original image to The image area with the operation track as the edge is determined as the target image area.
  • the cancel display operation may be a check operation input through a cancel control.
  • the cancel operation may also be a cancel operation on any at least one edge of the edges of each image area.
  • the edge selected by the cancel operation is canceled from display.
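  • For Modes 22 and 23, the closed-area drawing operation can be rasterised into a mask; the sketch below assumes the operation track arrives as a list of (x, y) points and uses OpenCV's fillPoly, which is an implementation choice rather than anything specified by the application.

```python
import numpy as np
import cv2  # OpenCV

def region_from_operation_track(track_points, image_shape):
    """Rasterise the operation track of a closed-area drawing operation.

    track_points: [(x, y), ...] vertices of the closed track drawn by the
    user (Modes 22/23). image_shape: (height, width) of the original image.
    Returns a boolean mask of the image area bordered by the track, which
    can then be used as the target image area.
    """
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    polygon = np.array(track_points, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(mask, [polygon], 255)  # fill the enclosed region
    return mask.astype(bool)

# Example: a triangular track over a 100 x 100 image.
triangle = [(10, 10), (90, 20), (50, 80)]
print(region_from_operation_track(triangle, (100, 100)).sum())
```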
  • edges of the respective image regions in the foregoing modes 21, 22 and 23 may be displayed in the following manner 31 or 32.
  • Mode 31 displaying edges of each image area according to a pre-stored preset format.
  • the preset format may include displaying the line type of the edge, the color and brightness of the line type, and the like.
  • Mode 32: receiving a selection instruction for a target format among at least one preset format, and displaying the edges of each image area according to the target format.
  • the at least one preset format may differ in line type, line color, brightness, and the like.
  • the target format can also be obtained in the following manner: in response to a selection operation of a target line type among at least one line type, a selection operation of a target color among at least one color, and a selection operation of a target brightness among at least one brightness, the target line type, target color, and target brightness are determined as the target format.
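  • Displaying the edges in a preset or target format can be pictured as drawing each region's contour with the format's colour and line parameters; the OpenCV-based sketch below is an assumption about one way to do this, with brightness folded into the colour value.

```python
import numpy as np
import cv2

def draw_region_edges(image, regions, color=(0, 255, 0), thickness=2):
    """Display the edges of each image area in a given format.

    regions: boolean masks for the image areas. color and thickness stand in
    for the preset or target format (line colour / line type); brightness
    could be applied by scaling `color` before drawing.
    """
    shown = image.copy()
    for mask in regions:
        contours, _ = cv2.findContours(mask.astype(np.uint8),
                                       cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        cv2.drawContours(shown, contours, -1, color, thickness)
    return shown
```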
  • the target image can be confirmed or generated in the following two ways (modes 51 and 52).
  • Mode 51: displaying at least one style adjustment parameter set, where the style adjustment parameter set includes preset attribute values corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, hue, saturation, highlight, lowlight, and sharpness; receiving a selection instruction for selecting a target style adjustment parameter set from the at least one style adjustment parameter set; and using the target style adjustment parameter set to adjust the attribute information of the image in the target image area, so as to confirm or generate the target image corresponding to the original image.
  • the style may correspond to the tags or image categories involved in this application.
  • for example, when the image category is a person category, the corresponding style is a person style.
  • Mode 52: receiving an attribute adjustment instruction for the image in the target image area, where the attribute adjustment instruction includes a custom attribute value corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, hue, saturation, highlight, lowlight, and sharpness; and setting the initial value corresponding to the at least one attribute included in the attribute information of the image in the target image area to the custom attribute value corresponding to the at least one attribute, so as to confirm or generate the target image corresponding to the original image.
  • receiving an attribute adjustment instruction for the image in the target image area includes: displaying attribute controls corresponding to the above attributes; receiving an attribute adjustment instruction for the image in the target image area input through each attribute control.
  • optionally, in mode 52 the attribute adjustment instruction including the custom attribute value corresponding to the at least one attribute may be saved and set as one of the style adjustment parameter sets of mode 51, so that it is displayed among the at least one style adjustment parameter set in mode 51.
  • FIG. 12 is a schematic diagram of displaying property controls corresponding to various properties provided by the present application. As shown in Figure 12, it includes: an attribute control 51 corresponding to brightness, an attribute control 52 corresponding to contrast, an attribute control 53 corresponding to hue, an attribute control 54 corresponding to saturation, an attribute control 55 corresponding to high light, and an attribute corresponding to low light Control 56 , property control 57 corresponding to sharpness.
  • the attribute information of the image in the target image area 4 (image 12) in the original image 11 is adjusted to obtain the target image 13; comparing the target image 13 with the original image 11, the attribute information of the image in target image area 4 differs.
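  • Modes 51 and 52 both amount to applying a set of attribute values only inside the target image area. The sketch below is one possible reading: a style adjustment parameter set is held as a plain dictionary (person_style is a made-up example), only brightness, contrast, and saturation are implemented for brevity, and the mapping from attribute values to pixel arithmetic is an assumption rather than anything the application specifies.

```python
import numpy as np
import cv2

# One possible "style adjustment parameter set": preset values per attribute.
person_style = {"brightness": 20, "contrast": 1.1, "saturation": 1.2}

def adjust_target_region(image, target_mask, params):
    """Adjust the attribute information of the image in the target image area.

    image: BGR uint8 original image; target_mask: boolean mask of the area;
    params: either a selected style adjustment parameter set (Mode 51) or the
    custom attribute values of an attribute adjustment instruction (Mode 52).
    """
    out = image.astype(np.float32)
    region = out[target_mask]
    region = region * params.get("contrast", 1.0) + params.get("brightness", 0.0)
    out[target_mask] = region
    out = np.clip(out, 0, 255).astype(np.uint8)

    if "saturation" in params:  # saturation is easiest to adjust in HSV space
        hsv = cv2.cvtColor(out, cv2.COLOR_BGR2HSV).astype(np.float32)
        hsv[..., 1][target_mask] = np.clip(
            hsv[..., 1][target_mask] * params["saturation"], 0, 255)
        out = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
    return out  # the target image corresponding to the original image
```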
  • the image processing method provided by the embodiment in FIG. 9 includes: acquiring at least one image area included in the original image; determining the target image area in at least one image area; adjusting the attribute information of the image in the target image area, confirming or generating the original image corresponding target image.
  • at least one image area included in the original image is acquired, and the target image area is determined in the at least one image area, which can avoid matting processing on the original image, simplifies the operation of obtaining the target image area, and improves the efficiency of generating the target image.
  • At least one image region included in the original image may also be obtained through the following method, specifically, please refer to FIG. 13 .
  • Fig. 13 is a flowchart of a method for acquiring at least one image area provided by the present application. As shown in Figure 13, the method includes:
  • S601. Perform pixel detection processing on the original image through a pixel detection model to obtain the label of each pixel in the original image. S602. Perform partition processing on the original image according to the label of each pixel to obtain at least one second image area; the labels of the pixels included in each second image area are the same.
  • the obtained at least one second image area is similar to the at least one image area shown in FIG. 10 , and the execution methods of S601 to S602 are similar to the execution method of the above-mentioned method 11, which will not be repeated here.
  • S603 specifically includes: performing image detection processing on the original image through a pixel classification model to obtain at least one image category; and performing partition processing on the original image according to the at least one image category to obtain at least one first image region, where each first image region is a region corresponding to one image category.
  • the at least one first image area is similar to the at least one image area shown in FIG. 11 , and the specific execution method of S603 is similar to the above-mentioned manner 12, which will not be repeated here.
  • S604. Perform segmentation processing on the at least one first image area according to the at least one second image area to obtain the at least one image area.
  • when image areas are divided only according to image category, different images belonging to the same image category fall into the same image partition; as shown in FIG. 11, the different vehicle images in the original image are located in the same image partition. Segmenting the first image areas by the second image areas therefore allows images of the same category, such as different vehicles, to be separated into different image areas, as sketched below.
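  • The following sketch illustrates S603 to S604 under the assumption that the pixel classification model yields a per-pixel category map and the pixel detection model yields a per-pixel label map; each first image area (one per category) is split by the second image areas, so two vehicles sharing the "vehicle" category end up in different image areas. The data layout is illustrative only.

```python
import numpy as np

def final_image_areas(category_map, label_map):
    """Split category regions (first image areas) by label regions (second image areas)."""
    areas = []
    for cat in np.unique(category_map):
        first_area = category_map == cat                  # S603: one area per image category
        for lbl in np.unique(label_map[first_area]):
            area = first_area & (label_map == lbl)        # S604: intersect with a label area
            if area.any():
                areas.append({"category": int(cat), "label": int(lbl), "mask": area})
    return areas

# Two vehicles share category 1 but carry different labels, so they become two areas.
category_map = np.array([[1, 1, 0],
                         [1, 1, 0]])
label_map = np.array([[5, 6, 0],
                      [5, 6, 0]])
print(len(final_image_areas(category_map, label_map)))   # 3 areas: two vehicles + background
```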
  • FIG. 14 is a flowchart of another image processing method provided by the present application. As shown in Figure 14, the method includes:
  • the original image may be a target image among multiple images stored in the smart terminal.
  • the target image can be determined by the following methods:
  • an image that carries an identifier among the multiple images is determined as the target image.
  • the image to be processed can be determined through the following two manners (manner 31 and manner 32).
  • Manner 31: display the original image; obtain the operation track of a closed area drawing operation performed in the original image; and determine the image whose edge is the operation track as the image to be processed.
  • Obtaining an operation track of the closed area drawing operation executed in the original image includes: in response to the closed area drawing operation performed in the original image, acquiring an operation track of the closed area drawing operation.
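  • For manner 31, the sketch below turns the operation track of a closed area drawing operation into the image to be processed, assuming the track is a list of (x, y) points forming a closed contour; Pillow's polygon fill is used purely for illustration and is not the method defined by this application.

```python
import numpy as np
from PIL import Image, ImageDraw

def image_from_track(original, track_points):
    """Return the pixels enclosed by the drawn track (the image to be processed)."""
    h, w = original.shape[:2]
    mask_img = Image.new("L", (w, h), 0)
    ImageDraw.Draw(mask_img).polygon(track_points, outline=1, fill=1)
    mask = np.array(mask_img, dtype=bool)
    to_process = np.zeros_like(original)
    to_process[mask] = original[mask]     # keep only the enclosed region
    return to_process, mask

# Hypothetical closed track drawn by the user on a 100x100 image.
original = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
track = [(20, 20), (80, 25), (75, 80), (25, 75)]
to_process, mask = image_from_track(original, track)
```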
  • Manner 32: perform image partition processing on the original image to obtain at least one image area; determine a target image area in the at least one image area; and determine the image in the target image area as the image to be processed.
  • At least one image region may be obtained through the foregoing manner 11, or the foregoing manner 12, or the method shown in the foregoing embodiment in FIG. 13 .
  • the execution process of performing image partition processing on the original image to obtain at least one image area will not be repeated here.
  • the target image area may be determined in at least one image area by using the method shown in the foregoing manner 21, or the foregoing manner 22, or the foregoing manner 23.
  • the execution process of determining the target image area in at least one image area will not be described in detail.
  • S703 specifically includes: receiving an attribute adjustment instruction for the image to be processed, where the attribute adjustment instruction includes a custom attribute value corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, hue, saturation, highlight, low light, and sharpness; and setting the initial value corresponding to the at least one attribute included in the attribute information of the image to be processed to the custom attribute value corresponding to the at least one attribute, to confirm or generate the target image corresponding to the original image.
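  • As a rough illustration of S703, the sketch below maps a few of the listed attributes onto Pillow's enhancement classes and applies the custom attribute values to the image to be processed. The mapping, and the omission of hue, highlight, and low light (which Pillow does not cover directly), are assumptions for illustration only, not the method defined by this application.

```python
from PIL import Image, ImageEnhance

# Assumed mapping from attribute names to Pillow enhancers.
ENHANCERS = {
    "brightness": ImageEnhance.Brightness,
    "contrast":   ImageEnhance.Contrast,
    "saturation": ImageEnhance.Color,
    "sharpness":  ImageEnhance.Sharpness,
}

def apply_attribute_instruction(image: Image.Image, instruction: dict) -> Image.Image:
    """Set the image's attributes to the custom values in the adjustment instruction."""
    result = image
    for attribute, value in instruction.items():
        enhancer_cls = ENHANCERS.get(attribute)
        if enhancer_cls is not None:        # attributes without an enhancer are skipped
            result = enhancer_cls(result).enhance(value)
    return result

# Usage: 1.0 keeps the initial value; other values are the custom attribute values.
to_process = Image.new("RGB", (64, 64), (128, 128, 128))
target = apply_attribute_instruction(to_process, {"brightness": 1.3, "contrast": 0.9})
```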
  • S703 may also be performed by a method similar to the above manners 51 and 52 to confirm or generate the target image corresponding to the original image; the specific process is not repeated here.
  • the method provided by the embodiment in FIG. 14 includes: acquiring an original image; determining an image to be processed in the original image; adjusting attribute information of the image to be processed, and confirming or generating a target image corresponding to the original image.
  • the image to be processed is determined in the original image, which can avoid matting processing on the original image, simplifies the operation of obtaining the target image area, and improves the efficiency of generating the target image.
  • in some related implementations, scene parameters cannot be balanced. For example, in some scenes a good effect can be guaranteed for only one of the person and the background in the image: if the background is adjusted to the desired effect, the rendering of the person shifts along with it, resulting in a poor effect for the person.
  • with the method provided in this application, the person and the background can be divided into different image areas, so that the effect of the background or of the person can be adjusted in a targeted manner, which both ensures accurate restoration of the background color and keeps the effect of the person good.
  • the image processing method provided in this application can also be used in Step 2 (illumination indicator detection) of the F.AWBE standard to realize the positioning and segmentation of the illumination indicator.
  • in that scenario, a 24-color card is usually used as the illumination indicator, and a series of calculations is performed on the data of the four gray blocks at the bottom of the color card to determine whether the image is uniformly illuminated. If the requirements are not met, the image is judged to be a non-uniformly illuminated image and is discarded.
  • this proposal can perform post-adjustment on non-uniformly illuminated images that do not meet the requirements and then perform the detection a second time. If the secondary detection meets the requirements, the adjusted image is kept in the data set, achieving the purpose of expanding the data set.
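  • The exact calculation prescribed by the standard is not given in this application; the sketch below only illustrates the described flow under assumptions: the criterion is taken to be that the mean luminances of the four gray blocks stay within a tolerance, and the post-adjustment is a placeholder horizontal illumination correction before secondary detection.

```python
import numpy as np

def gray_block_means(image, block_boxes):
    """Mean luminance of the four gray blocks at the bottom of the 24-color card."""
    gray = image.astype(np.float32).mean(axis=2)
    return [float(gray[y0:y1, x0:x1].mean()) for (x0, y0, x1, y1) in block_boxes]

def is_uniformly_illuminated(image, block_boxes, tolerance=10.0):
    """Assumed criterion: the spread of the gray-block means stays within a tolerance."""
    means = gray_block_means(image, block_boxes)
    return max(means) - min(means) <= tolerance

def adjust_and_redetect(image, block_boxes, tolerance=10.0):
    """Post-adjust a non-uniform image, then run the detection a second time."""
    if is_uniformly_illuminated(image, block_boxes, tolerance):
        return image, True
    means = gray_block_means(image, block_boxes)
    centers = [(x0 + x1) / 2.0 for (x0, _, x1, _) in block_boxes]
    # Placeholder correction: divide out a horizontal illumination profile
    # interpolated from the gray-block means (the real adjustment is unspecified).
    h, w = image.shape[:2]
    profile = np.interp(np.arange(w), centers, means) / np.mean(means)
    adjusted = np.clip(image / profile[None, :, None], 0, 255).astype(np.uint8)
    return adjusted, is_uniformly_illuminated(adjusted, block_boxes, tolerance)

# Synthetic example: a horizontal illumination gradient that fails the first check.
img = np.tile(np.linspace(80, 160, 120, dtype=np.float32)[None, :, None],
              (80, 1, 3)).astype(np.uint8)
boxes = [(10, 60, 20, 70), (40, 60, 50, 70), (70, 60, 80, 70), (100, 60, 110, 70)]
adjusted, kept = adjust_and_redetect(img, boxes)   # kept -> add to the data set
```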
  • FIG. 15 is a schematic structural diagram of an image processing device provided by the present application.
  • the image processing device 10 includes a processing module 11. The processing module 11 is used to: obtain at least one image area included in the original image; determine a target image area in the at least one image area; and adjust the attribute information of the image in the target image area to confirm or generate the target image corresponding to the original image.
  • the image processing device 10 provided in the present application can execute the methods in the above method embodiments, and the implementation principles and beneficial effects are similar, and will not be repeated here.
  • the processing module 11 is specifically configured to: use the pixel detection model to perform pixel detection processing on the original image to obtain the label of each pixel in the original image; obtain at least one image region from the original image according to the label of each pixel.
  • the processing module 11 is specifically configured to: partition the original image according to the label of each pixel to obtain at least one second image area, where the labels of the pixels included in one second image area are the same; perform partition processing on the original image through a pixel classification model to obtain at least one first image area; and perform segmentation processing on the at least one first image area according to the at least one second image area to obtain the at least one image area.
  • the processing module 11 is specifically configured to: perform image detection processing on the original image through a pixel classification model to obtain at least one image category; and perform partition processing on the original image according to the at least one image category to obtain at least one first image area, where a first image area is an area corresponding to one image category.
  • the processing module 11 is specifically configured to: control the display module 12 to display the edge of each image area, the display module 12 being included in the image processing device 10; receive a touch command, the touch command including a target position; and determine, as the target image area, the image area in the at least one image area whose edge surrounds the target position.
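  • A sketch of the selection logic described for the processing module 11: given the masks of the candidate image areas and the target position carried by the touch command, the area whose edge surrounds that position is chosen. It assumes "surrounds" means the position falls inside the area's mask; the data layout is illustrative.

```python
import numpy as np

def select_target_area(area_masks, target_position):
    """Return the index of the image area whose edge surrounds the touched position."""
    x, y = target_position                      # target position from the touch command
    for idx, mask in enumerate(area_masks):
        if mask[y, x]:                          # position lies inside this area
            return idx
    return None                                 # touch fell outside every image area

masks = [np.zeros((10, 10), dtype=bool) for _ in range(2)]
masks[0][2:5, 2:5] = True
masks[1][6:9, 6:9] = True
print(select_target_area(masks, (3, 3)))        # 0
```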
  • the processing module 11 is specifically configured to: control the display module 12 to display the edge of each image area; receive a touch instruction on the target edge; determine an image area surrounded by the target edge in at least one image area as the target image area.
  • the target edge is at least one of the following: any one of the edges of each image area; or an edge obtained by adjusting any one of the edges of each image area according to the operation track of a closed area drawing operation input by the user.
  • the processing module 11 is specifically configured to: control the display module 12 to display the edges of each image area according to a pre-stored preset format; or receive a selection instruction for a target format among at least one preset format, and control the display module 12 to display the edges of each image area according to the target format.
  • the processing module 11 is specifically configured to: display at least one style adjustment parameter set, the style adjustment parameter set includes preset attribute values corresponding to at least one attribute, at least one attribute includes brightness, contrast, hue, saturation, At least one of high light, low light, and clarity; receiving a selection instruction for selecting a target style adjustment parameter set from at least one style adjustment parameter set; using the target style adjustment parameter set to adjust the attribute information of the image in the target image area , confirm or generate the target image corresponding to the original image.
  • the processing module 11 is specifically configured to: receive an attribute adjustment instruction for the image in the target image area, the attribute adjustment instruction including a custom attribute value corresponding to at least one attribute, where the at least one attribute includes at least one of brightness, contrast, hue, saturation, highlight, low light, and sharpness; and set the initial value corresponding to the at least one attribute included in the attribute information of the image in the target image area to the custom attribute value corresponding to the at least one attribute, to confirm or generate the target image corresponding to the original image.
  • the processing module 11 in the image processing device 10 provided by this application can also be used to: acquire the original image; determine the image to be processed in the original image; adjust the attribute information of the image to be processed, confirm or generate the corresponding target image.
  • the processing module 11 is specifically configured to: perform image partition processing on the original image to obtain at least one image area; determine a target image area in the at least one image area; determine an image in the target image area as an image to be processed.
  • the processing module 11 is specifically configured to: use the pixel detection model to perform pixel detection processing on the original image to obtain the label of each pixel in the original image; obtain at least one image region from the original image according to the label of each pixel.
  • the processing module 11 is specifically configured to: perform partition processing on the original image according to the label of each pixel to obtain at least one second image area; perform partition processing on the original image through a pixel classification model to obtain at least one first image area; and perform segmentation processing on the at least one first image area according to the at least one second image area to obtain the at least one image area.
  • the processing module 11 is specifically configured to: perform image detection processing on the original image through a pixel classification model to obtain at least one image category; and perform partition processing on the original image according to the at least one image category to obtain at least one first image area, where a first image area is an area corresponding to one image category.
  • the processing module 11 is specifically configured to: control the display module 12 to display the edge of each image area; receive a touch command, the touch command including a target position; and determine, as the target image area, the image area in the at least one image area whose edge surrounds the target position.
  • the processing module 11 is specifically configured to: control the display module 12 to display the edges of each image area; receive a touch instruction on the target edge; determine an image area surrounded by the target edge in at least one image area as the target image area.
  • the target edge is at least one of the following: any one of the edges of each image area; or an edge obtained by adjusting any one of the edges of each image area according to the operation track of a closed area drawing operation input by the user.
  • the processing module 11 is specifically configured to: control the display module 12 to display the edges of each image area according to a pre-stored preset format; or receive a selection instruction for a target format among at least one preset format, and control the display module 12 to display the edges of each image area according to the target format.
  • the processing module 11 is specifically configured to: display at least one style adjustment parameter set, the style adjustment parameter set includes preset attribute values corresponding to at least one attribute, at least one attribute includes brightness, contrast, hue, saturation, At least one of high light, low light, and clarity; receiving a selection instruction for selecting a target style adjustment parameter set from at least one style adjustment parameter set; using the target style adjustment parameter set to adjust the attribute information of the image in the target image area , confirm or generate the target image corresponding to the original image.
  • the processing module 11 is specifically configured to: receive an attribute adjustment instruction for the image to be processed, the attribute adjustment instruction including a custom attribute value corresponding to at least one attribute, where the at least one attribute includes at least one of brightness, contrast, hue, saturation, highlight, low light, and sharpness; and set the initial value corresponding to the at least one attribute included in the attribute information of the image to be processed to the custom attribute value corresponding to the at least one attribute, to confirm or generate the target image corresponding to the original image.
  • the image processing device 10 provided in the present application can execute the methods in the above method embodiments, and the implementation principles and beneficial effects are similar, and will not be repeated here.
  • FIG. 16 is a schematic diagram of the hardware of the smart terminal provided by the present application.
  • the smart terminal 20 may include: a transceiver 21 , a memory 22 and a processor 23 .
  • the transceiver 21 may include: a transmitter and/or a receiver.
  • the transmitter may also be referred to as a sender, a transmitting device, a sending port, a sending interface, or by similar descriptions.
  • the receiver may also be referred to as a receiving device, a receiving port, a receiving interface, or by similar descriptions.
  • each part of the transceiver 21, the memory 22, and the processor 23 is connected to each other through a bus.
  • the memory 22 is used to store computer-executable instructions.
  • the processor 23 is configured to execute the computer-executed instructions stored in the memory 22, so that the processor 23 executes the above image processing method.
  • An embodiment of the present application further provides a computer-readable storage medium, on which an image processing program is stored, and when the image processing program is executed by a processor, the steps of the image processing method in any of the foregoing embodiments are implemented.
  • the embodiment of the present application further provides a computer program product, the computer program product includes computer program code, and when the computer program code is run on the computer, the computer is made to execute the image processing method in the above various possible implementation manners.
  • the embodiment of the present application also provides a chip, including a memory and a processor.
  • the memory is used to store a computer program
  • the processor is used to call and run the computer program from the memory, so that the device in which the chip is installed executes the image processing method in the above various possible implementation manners.
  • the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in one of the above storage media (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to make a terminal device (which may be a mobile phone, computer, server, controlled terminal, or network device, etc.) execute the method of each embodiment of the present application.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • when implemented using software, the methods may be implemented in whole or in part in the form of a computer program product.
  • a computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, special purpose computer, a computer network, or other programmable apparatus.
  • computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line) or wireless means (such as infrared, radio, or microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server, a data center, etc. integrated with one or more available media.
  • the usable media may be magnetic media (e.g., a floppy disk, hard disk, or magnetic tape), optical media (e.g., a DVD), or semiconductor media (e.g., a solid state disk (SSD)), among others.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application provides an image processing method, an intelligent terminal, and a storage medium. The method comprises: obtaining at least one image area comprised in an original image; determining a target image area in the at least one image area; and adjusting attribute information of an image in the target image area, and confirming or generating a target image corresponding to the original image. The image processing method, the intelligent terminal, and the storage medium provided by the present application are used for improving the efficiency of obtaining the target image.

Description

图像处理方法、智能终端及存储介质Image processing method, intelligent terminal and storage medium 技术领域technical field
本申请涉及图像处理技术领域,具体涉及一种图像处理方法、智能终端及存储介质。The present application relates to the technical field of image processing, and in particular to an image processing method, an intelligent terminal and a storage medium.
背景技术Background technique
一些实现中,通过智能终端可以对原始图像中的部分图像进行调整,以使调整后的原始图像更新美观。In some implementations, part of the original image can be adjusted through the smart terminal, so that the adjusted original image is updated and beautiful.
在构思及实现本申请过程中,发明人发现至少存在如下问题:操作人员依据抠图处理经验,通过智能终端对原始图像进行抠图处理,得到原始图像中的部分图像;其次,操作人员依据图像调整操作经验,通过智能终端对部分图像进行调整,得到调整后的部分图像;最后,操作人员通过智能终端将原始图像中的部分图像替换为调整后的部分图像,得到原始图像对应的目标图像。During the process of conceiving and implementing this application, the inventors found that there are at least the following problems: the operator cuts out the original image through the smart terminal according to the experience of the cutout processing, and obtains part of the original image; secondly, the operator Adjustment operation experience, adjust part of the image through the smart terminal to obtain the adjusted part of the image; finally, the operator replaces part of the original image with the adjusted part of the image through the smart terminal to obtain the target image corresponding to the original image.
在上述相关技术中,操作人员根据抠图处理经验对原始图像进行抠图处理,得到部分图像,使得得到目标图像的操作复杂,导致得到目标图像的效率较低。In the above-mentioned related technologies, the operator performs matting processing on the original image according to matting processing experience to obtain a partial image, which makes the operation of obtaining the target image complicated, resulting in low efficiency in obtaining the target image.
前面的叙述在于提供一般的背景信息,并不一定构成现有技术。The foregoing description is provided to provide general background information and does not necessarily constitute prior art.
技术解决方案technical solution
针对上述技术问题,本申请提供一种图像处理方法、智能终端及存储介质,提高得到目标图像的效率。In view of the above technical problems, the present application provides an image processing method, an intelligent terminal and a storage medium, so as to improve the efficiency of obtaining a target image.
第一方面,本申请提供一种图像处理方法,包括以下步骤:In a first aspect, the present application provides an image processing method, comprising the following steps:
S10:获取原始图像中包括的至少一个图像区域;S10: Obtain at least one image region included in the original image;
S11:在至少一个图像区域中确定目标图像区域;S11: Determine a target image area in at least one image area;
S12:调整目标图像区域中图像的属性信息,以确认或生成原始图像对应的目标图像。S12: Adjust the attribute information of the image in the target image area to confirm or generate a target image corresponding to the original image.
可选地,S10步骤包括:通过像素检测模型,对原始图像进行像素检测处理,得到原始图像中各像素的标签;根据各像素的标签,从原始图像中获取至少一个图像区域。Optionally, step S10 includes: performing pixel detection processing on the original image through a pixel detection model to obtain labels of each pixel in the original image; and obtaining at least one image region from the original image according to the labels of each pixel.
可选地,根据各像素的标签,从原始图像中获取至少一个图像区域,包括:按照各像素的标签,对原始图像进行分区处理,得到至少一个第二图像区域;第二图像区域中包括的各像素的标签相同;通过像素分类模型,对原始图像进行分区处理,得到至少一个第一图像区域;按照至少一个第二图像分区,对至少一个第一图像区域进行分割处理,得到至少一个图像区域。Optionally, obtaining at least one image region from the original image according to the label of each pixel includes: partitioning the original image according to the label of each pixel to obtain at least one second image region; Each pixel has the same label; the original image is partitioned by a pixel classification model to obtain at least one first image region; at least one first image region is segmented according to at least one second image partition to obtain at least one image region .
可选地,通过像素分类模型,对原始图像进行分区处理,得到至少一个第一图像区域,包括:通过像素分类模型,对原始图像进行图像检测处理,得到至少一个图像类别;根据至少一个图像类别,对原始图像进行分区处理,得到至少第一图像区域;第一图像区域为图像类别对应的区域。Optionally, performing partition processing on the original image through a pixel classification model to obtain at least one first image region includes: performing image detection processing on the original image through a pixel classification model to obtain at least one image category; according to at least one image category , performing partition processing on the original image to obtain at least a first image area; the first image area is an area corresponding to an image category.
可选地,S11步骤包括:显示各图像区域的边缘;接收触控指令,触控指令包括目标位置;将至少一个图像区域中包围目标位置的边缘对应的图像区域,确定为目标图像区域。Optionally, step S11 includes: displaying edges of each image area; receiving a touch command, where the touch command includes a target position; and determining an image area corresponding to an edge surrounding the target position in at least one image area as the target image area.
可选地,S11步骤包括:显示各图像区域的边缘;接收对目标边缘的触控指令;将至少一个图像区域中目标边缘包围的图像区域,确定为目标图像区域;Optionally, step S11 includes: displaying the edges of each image area; receiving a touch instruction on the target edge; determining an image area surrounded by the target edge in at least one image area as the target image area;
目标边缘为如下中的至少一种:The target edge is at least one of the following:
各图像区域的边缘中的任意一个边缘;any one of the edges of the image regions;
根据用户输入的闭合区域绘制操作的操作轨迹,对各图像区域的边缘中的任意一个边缘进行调整后的边缘。According to the operation track of the drawing operation of the closed area input by the user, any one of the edges of each image area is adjusted.
可选地,显示各图像区域的边缘,包括:按照预先存储的预设格式,显示各图像区域的边缘;或者,接收对至少一个预设格式中的目标格式的选择指令,按照目标格式,显示各图像区域的边缘。Optionally, displaying the edge of each image area includes: displaying the edge of each image area according to a pre-stored preset format; or receiving a selection instruction for a target format in at least one preset format, and displaying the edge of each image area according to the target format edges of each image area.
可选地,S12步骤包括:显示至少一个风格调整参数集合,风格调整参数集合包括至少一种属性对应的预设属性值,至少一种属性包括亮度、对比度、色度、饱和度、高光、低光、清晰度中的至少一种;接收用于从至少一个风格调整参数集合中选择目标风格调整参数集合的选择指令;采用目标风格调整参数集合,调整目标图像区域中图像的属性信息,确认或生成原始图像对应的目标图像。Optionally, step S12 includes: displaying at least one style adjustment parameter set, the style adjustment parameter set includes preset attribute values corresponding to at least one attribute, at least one attribute includes brightness, contrast, hue, saturation, highlight, low At least one of light and clarity; receiving a selection instruction for selecting a target style adjustment parameter set from at least one style adjustment parameter set; using the target style adjustment parameter set to adjust the attribute information of the image in the target image area, confirm or Generate the target image corresponding to the original image.
可选地,S12步骤包括:接收对目标图像区域中图像的属性调整指令,属性调整指令包括至少一种属性对应的自定义属性值,至少一种属性包括亮度、对比度、色度、饱和度、高光、低光、清晰度中的至少一种;将目标图像区域中图像的属性信息中包括的至少一种属性对应的初始值,设置为至少一种属性对应的自定义属性值,确认或生成原始图像对应的目标图像。Optionally, step S12 includes: receiving an attribute adjustment instruction for the image in the target image area, the attribute adjustment instruction includes a custom attribute value corresponding to at least one attribute, and at least one attribute includes brightness, contrast, hue, saturation, At least one of high light, low light, and sharpness; set the initial value corresponding to at least one attribute included in the attribute information of the image in the target image area to the custom attribute value corresponding to at least one attribute, confirm or generate The target image corresponding to the original image.
第二方面,本申请提供图像处理方法,包括以下步骤:In a second aspect, the present application provides an image processing method, comprising the following steps:
S21:在原始图像中确定待处理图像;S21: determine the image to be processed in the original image;
S22:调整待处理图像的属性信息,以确认或生成原始图像对应的目标图像。S22: Adjust the attribute information of the image to be processed to confirm or generate a target image corresponding to the original image.
可选地,在S21之前,还可以包括S20:获取原始图像。本申请以S20、S21、S22进行示例说明。Optionally, before S21, S20: acquire an original image may also be included. This application uses S20, S21, and S22 as examples for illustration.
可选地,S21步骤包括:对原始图像进行图像分区处理,得到至少一个图像区域;Optionally, step S21 includes: performing image partition processing on the original image to obtain at least one image area;
在至少一个图像区域中确定目标图像区域;将目标图像区域中的图像,确定为待处理图像。Determine a target image area in at least one image area; determine an image in the target image area as an image to be processed.
可选地,对原始图像进行图像分区处理,得到至少一个图像区域,包括:通过像素检测模型,对原始图像进行像素检测处理,得到原始图像中各像素的标签;根据各像素的标签,从原始图像中获取至少一个图像区域。Optionally, image partition processing is performed on the original image to obtain at least one image region, including: performing pixel detection processing on the original image through a pixel detection model to obtain the label of each pixel in the original image; according to the label of each pixel, from the original Get at least one image region in the image.
可选地,根据各像素的标签,从原始图像中获取至少一个图像区域,包括:按照各像素的标签,对原始图像进行分区处理,得到至少一个第二图像区域;通过像素分类模型,对原始图像进行分区处理,得到至少一个第一图像区域;按照至少一个第二图像分区,对至少一个第一图像区域进行分割处理,得到至少一个图像区域。Optionally, obtaining at least one image region from the original image according to the label of each pixel includes: partitioning the original image according to the label of each pixel to obtain at least one second image region; The image is partitioned to obtain at least one first image area; according to at least one second image partition, the at least one first image area is segmented to obtain at least one image area.
可选地,通过像素分类模型,对原始图像进行分区处理,得到至少一个第一图像区域,包括:通过像素分类模型,对原始图像进行图像检测处理,得到至少一个图像类别;根据至少一个图像类别,对原始图像进行分区处理,得到至少第一图像区域;第一图像区域为图像类别对应的区域。Optionally, performing partition processing on the original image through a pixel classification model to obtain at least one first image region includes: performing image detection processing on the original image through a pixel classification model to obtain at least one image category; according to at least one image category , performing partition processing on the original image to obtain at least a first image area; the first image area is an area corresponding to an image category.
可选地,在至少一个图像区域中确定目标图像区域,包括:显示各图像区域的边缘;接收触控指令,触控指令包括目标位置;将至少一个图像区域中包围目标位置的边缘对应的图像区域,确定为目标图像区域。Optionally, determining the target image area in at least one image area includes: displaying edges of each image area; receiving a touch command, where the touch command includes the target position; area, determined as the target image area.
可选地,在至少一个图像区域中确定目标图像区域,包括:显示各图像区域的边缘;接收对目标边缘的触控指令;将至少一个图像区域中目标边缘包围的图像区域,确定为目标图像区域;Optionally, determining the target image area in at least one image area includes: displaying the edges of each image area; receiving a touch instruction on the target edge; determining the image area surrounded by the target edge in at least one image area as the target image area;
目标边缘为如下中的至少一种:The target edge is at least one of the following:
各图像区域的边缘中的任意一个边缘;any one of the edges of the image regions;
根据用户输入的闭合区域绘制操作的操作轨迹,对各图像区域的边缘中的任意一个边缘进行调整后的边缘。According to the operation track of the drawing operation of the closed area input by the user, any one of the edges of each image area is adjusted.
可选地,显示各图像区域的边缘,包括:按照预先存储的预设格式,显示各图像区域的边缘;或者,接收对至少一个预设格式中的目标格式的选择指令,按照目标格式,显示各图像区域的边缘。Optionally, displaying the edge of each image area includes: displaying the edge of each image area according to a pre-stored preset format; or receiving a selection instruction for a target format in at least one preset format, and displaying the edge of each image area according to the target format edges of each image area.
可选地,S22步骤包括:显示至少一个风格调整参数集合,风格调整参数集合包括至少一种属性对应的预设属性值,至少一种属性包括亮度、对比度、色度、饱和度、高光、低光、清晰度中的至少一种;接收用于从至少一个风格调整参数集合中选择目标风格调整参数集合的选择指令;采用目标风格调整参数集合,调整目标图像区域中图像的属性信息,确认或生成原始图像对应的目标图像。Optionally, step S22 includes: displaying at least one style adjustment parameter set, the style adjustment parameter set includes preset attribute values corresponding to at least one attribute, at least one attribute includes brightness, contrast, hue, saturation, highlight, low At least one of light and clarity; receiving a selection instruction for selecting a target style adjustment parameter set from at least one style adjustment parameter set; using the target style adjustment parameter set to adjust the attribute information of the image in the target image area, confirm or Generate the target image corresponding to the original image.
可选地,S22步骤包括:接收对待处理图像的属性调整指令,属性调整指令包括至少一种属性对应的自定义属性值,至少一种属性包括亮度、对比度、色度、饱和度、高光、低光、清晰度中的至少一种;将待处理图像的属性信息中包括的至少一种属性对应的初始值,设置为至少一种属性对应的自定义属性值,确认或生成原始图像对应的目标图像。Optionally, step S22 includes: receiving an attribute adjustment instruction of the image to be processed, the attribute adjustment instruction includes a custom attribute value corresponding to at least one attribute, and at least one attribute includes brightness, contrast, hue, saturation, highlight, low At least one of light and clarity; the initial value corresponding to at least one attribute included in the attribute information of the image to be processed is set as the custom attribute value corresponding to at least one attribute, and the target corresponding to the original image is confirmed or generated image.
第三方面,本申请还提供一种图像处理装置,包括:处理模块;处理模块用于:In a third aspect, the present application also provides an image processing device, including: a processing module; the processing module is used for:
获取原始图像中包括的至少一个图像区域;Obtain at least one image region included in the original image;
在至少一个图像区域中确定目标图像区域;determining a target image area in at least one image area;
调整目标图像区域中图像的属性信息,确认或生成原始图像对应的目标图像。Adjust the attribute information of the image in the target image area, confirm or generate the target image corresponding to the original image.
可选地,处理模块具体用于:通过像素检测模型,对原始图像进行像素检测处理,得到原始图像中各像素的标签;根据各像素的标签,从原始图像中获取至少一个图像区域。Optionally, the processing module is specifically configured to: use a pixel detection model to perform pixel detection processing on the original image to obtain a label of each pixel in the original image; obtain at least one image region from the original image according to the label of each pixel.
可选地,处理模块具体用于:按照各像素的标签,对原始图像进行分区处理,得到至少一个第二图像区域;第二图像区域中包括的各像素的标签相同;通过像素分类模型,对原始图像进行分区处理,得到至少一个第一图像区域;按照至少一个第二图像分区,对至少一个第一图像区域进行分割处理,得到至少一个图像区域。Optionally, the processing module is specifically configured to: partition the original image according to the label of each pixel to obtain at least one second image area; the labels of the pixels included in the second image area are the same; through the pixel classification model, the Partition processing is performed on the original image to obtain at least one first image region; according to at least one second image partition, at least one first image region is partitioned to obtain at least one image region.
可选地,处理模块具体用于:通过像素分类模型,对原始图像进行图像检测处理,得到至少一个图像类别;根据至少一个图像类别,对原始图像进行分区处理,得到至少第一图像区域;第一图像区域为图像类别对应的区域。Optionally, the processing module is specifically configured to: perform image detection processing on the original image through a pixel classification model to obtain at least one image category; perform partition processing on the original image according to at least one image category to obtain at least the first image region; An image area is an area corresponding to an image category.
可选地,处理模块具体用于:控制显示模块显示各图像区域的边缘;接收触控指令,触控指令包括目标位置;将至少一个图像区域中包围目标位置的边缘对应的图像区域,确定为目标图像区域。Optionally, the processing module is specifically configured to: control the display module to display the edge of each image area; receive a touch command, the touch command includes the target position; determine the image area corresponding to the edge surrounding the target position in at least one image area as target image area.
可选地,处理模块具体用于:控制显示模块显示各图像区域的边缘;接收对目标边缘的触控指令;将至少一个图像区域中目标边缘包围的图像区域,确定为目标图像区域;Optionally, the processing module is specifically configured to: control the display module to display the edges of each image area; receive a touch instruction on the target edge; determine the image area surrounded by the target edge in at least one image area as the target image area;
目标边缘为如下中的至少一种:The target edge is at least one of the following:
各图像区域的边缘中的任意一个边缘;any one of the edges of the image regions;
根据用户输入的闭合区域绘制操作的操作轨迹,对各图像区域的边缘中的任意一个边缘进行调整后的边缘。According to the operation track of the drawing operation of the closed area input by the user, any one of the edges of each image area is adjusted.
可选地,处理模块具体用于:按照预先存储的预设格式,控制显示模块显示各图像区域的边缘;或者,接收对至少一个预设格式中的目标格式的选择指令,按照目标格式,控制显示模块显示各图像区域的边缘。Optionally, the processing module is specifically configured to: control the display module to display the edges of each image area according to a pre-stored preset format; or, receive a selection instruction for a target format in at least one preset format, and control the display module according to the target format. The display module displays edges of each image area.
可选地,处理模块具体用于:显示至少一个风格调整参数集合,风格调整参数集合包括至少一种属性对应的预设属性值,至少一种属性包括亮度、对比度、色度、饱和度、高光、低光、清晰度中的至少一种;接收用于从至少一个风格调整参数集合中选择目标风格调整参数集合的选择指令;采用目标风格调整参数集合,调整目标图像区域中图像的属性信息,确认或生成原始图像对应的目标图像。Optionally, the processing module is specifically configured to: display at least one style adjustment parameter set, the style adjustment parameter set includes preset attribute values corresponding to at least one attribute, at least one attribute includes brightness, contrast, hue, saturation, highlight At least one of , low light, and sharpness; receiving a selection instruction for selecting a target style adjustment parameter set from at least one style adjustment parameter set; using the target style adjustment parameter set to adjust the attribute information of the image in the target image area, Confirm or generate the target image corresponding to the original image.
可选地,处理模块具体用于:接收对目标图像区域中图像的属性调整指令,属性调整指令包括至少一种属性对应的自定义属性值,至少一种属性包括亮度、对比度、色度、饱和度、高光、低光、清晰度中的至少一种;将目标图像区域中图像的属性信息中包括的至少一种属性对应的初始值,设置为至少一种属性对应的自定义属性值,确认或生成原始图像对应的目标图像。Optionally, the processing module is specifically configured to: receive an attribute adjustment instruction for the image in the target image area, the attribute adjustment instruction includes a custom attribute value corresponding to at least one attribute, and at least one attribute includes brightness, contrast, chroma, saturation At least one of brightness, high light, low light, and clarity; set the initial value corresponding to at least one attribute included in the attribute information of the image in the target image area to the custom attribute value corresponding to at least one attribute, and confirm Or generate the target image corresponding to the original image.
第四方面,本申请还提供一种图像处理装置,包括:处理模块;处理模块用于:获取原始图像;在原始图像中确定待处理图像;调整待处理图像的属性信息,确认或生成原始图像对应的目标图像。In the fourth aspect, the present application also provides an image processing device, including: a processing module; the processing module is used to: acquire the original image; determine the image to be processed in the original image; adjust the attribute information of the image to be processed, and confirm or generate the original image the corresponding target image.
可选地,处理模块具体用于:对原始图像进行图像分区处理,得到至少一个图像区域;Optionally, the processing module is specifically configured to: perform image partition processing on the original image to obtain at least one image region;
在至少一个图像区域中确定目标图像区域;将目标图像区域中的图像,确定为待处理图像。Determine a target image area in at least one image area; determine an image in the target image area as an image to be processed.
可选地,处理模块具体用于:通过像素检测模型,对原始图像进行像素检测处理,得到原始图像中各像素的标签;根据各像素的标签,从原始图像中获取至少一个图像区域。Optionally, the processing module is specifically configured to: use a pixel detection model to perform pixel detection processing on the original image to obtain a label of each pixel in the original image; obtain at least one image region from the original image according to the label of each pixel.
可选地,处理模块具体用于:按照各像素的标签,对原始图像进行分区处理,得到至少一个第二图像区域;通过像素分类模型,对原始图像进行分区处理,得到至少一个第一图像区域;按照至少一个第二图像分区,对至少一个第一图像区域进行分割处理,得到至少一个图像区域。Optionally, the processing module is specifically configured to: perform partition processing on the original image according to the label of each pixel to obtain at least one second image region; perform partition processing on the original image through a pixel classification model to obtain at least one first image region ; Perform segmentation processing on at least one first image region according to at least one second image partition to obtain at least one image region.
可选地,处理模块具体用于:通过像素分类模型,对原始图像进行图像检测处理,得到至少一个图像类别;根据至少一个图像类别,对原始图像进行分区处理,得到至少第一图像区域;第一图像区域为图像类别对应的区域。Optionally, the processing module is specifically configured to: perform image detection processing on the original image through a pixel classification model to obtain at least one image category; perform partition processing on the original image according to at least one image category to obtain at least the first image region; An image area is an area corresponding to an image category.
可选地,处理模块具体用于:控制显示模块显示各图像区域的边缘;接收触控指令,触控指令包括目标位置;将至少一个图像区域中包围目标位置的边缘对应的图像区域,确定为目标图像区域。Optionally, the processing module is specifically configured to: control the display module to display the edge of each image area; receive a touch command, the touch command includes the target position; determine the image area corresponding to the edge surrounding the target position in at least one image area as target image area.
可选地,处理模块具体用于:控制显示模块显示各图像区域的边缘;接收对目标边缘的触控指令;将至少一个图像区域中目标边缘包围的图像区域,确定为目标图像区域;Optionally, the processing module is specifically configured to: control the display module to display the edges of each image area; receive a touch instruction on the target edge; determine the image area surrounded by the target edge in at least one image area as the target image area;
目标边缘为如下中的至少一种:The target edge is at least one of the following:
各图像区域的边缘中的任意一个边缘;any one of the edges of the image regions;
根据用户输入的闭合区域绘制操作的操作轨迹,对各图像区域的边缘中的任意一个边缘进行调整后的边缘。According to the operation track of the drawing operation of the closed area input by the user, any one of the edges of each image area is adjusted.
可选地,处理模块具体用于:按照预先存储的预设格式,控制显示模块显示各图像区域的边缘;或者,接收对至少一个预设格式中的目标格式的选择指令,按照目标格式,控制显示模块显示各图像区域的边缘。Optionally, the processing module is specifically configured to: control the display module to display the edges of each image area according to a pre-stored preset format; or, receive a selection instruction for a target format in at least one preset format, and control the display module according to the target format. The display module displays edges of each image area.
可选地,处理模块具体用于:显示至少一个风格调整参数集合,风格调整参数集合包括至少一种属性对应的预设属性值,至少一种属性包括亮度、对比度、色度、饱和度、高光、低光、清晰度中的至少一种;接收用于从至少一个风格调整参数集合中选择目标风格调整参数集合的选择指令;采用目标风格调整参数集合,调整目标图像区域中图像的属性信息,确认或生成原始图像对应的目标图像。Optionally, the processing module is specifically configured to: display at least one style adjustment parameter set, the style adjustment parameter set includes preset attribute values corresponding to at least one attribute, at least one attribute includes brightness, contrast, hue, saturation, highlight At least one of , low light, and sharpness; receiving a selection instruction for selecting a target style adjustment parameter set from at least one style adjustment parameter set; using the target style adjustment parameter set to adjust the attribute information of the image in the target image area, Confirm or generate the target image corresponding to the original image.
可选地,处理模块具体用于:接收对待处理图像的属性调整指令,属性调整指令包括至少一种属性对应的自定义属性值,至少一种属性包括亮度、对比度、色度、饱和度、高光、低光、清晰度中的至少一种;将待处理图像的属性信息中包括的至少一种属性对应的初始值,设置为至少一种属性对应的自定义属性值,确认或生成原始图像对应的目标图像。Optionally, the processing module is specifically configured to: receive an attribute adjustment instruction of the image to be processed, the attribute adjustment instruction includes a custom attribute value corresponding to at least one attribute, at least one attribute includes brightness, contrast, hue, saturation, highlight At least one of , low light, and sharpness; set the initial value corresponding to at least one attribute included in the attribute information of the image to be processed as a custom attribute value corresponding to at least one attribute, and confirm or generate the original image corresponding to target image.
第五方面,本申请还提供一种智能终端,智能终端包括:存储器和处理器;可选地,存储器上存储有图像处理程序,图像处理程序被处理器执行时实现第一方面或者第二方面任一项的图像处理方法的步骤。In the fifth aspect, the present application also provides an intelligent terminal, which includes: a memory and a processor; optionally, an image processing program is stored in the memory, and when the image processing program is executed by the processor, the first aspect or the second aspect is realized The steps of any one of the image processing methods.
第六方面,本申请还提供一种计算机存储介质,计算机程序产品,包括计算机程序,计算机程序被处理器执行时实现上述第一方面或者第二方面中任一项的图像处理方法的步骤。In a sixth aspect, the present application further provides a computer storage medium, a computer program product, including a computer program, and when the computer program is executed by a processor, the steps of the image processing method in any one of the above-mentioned first aspect or the second aspect are realized.
第七方面,本申请还提供一种计算机程序产品,包括计算机程序,计算机程序被处理器执行时实现上述第一方面或者第二方面中任一项的图像处理方法的步骤。In a seventh aspect, the present application further provides a computer program product, including a computer program. When the computer program is executed by a processor, the steps of the image processing method in any one of the above-mentioned first aspect or the second aspect are implemented.
本申请提供一种图像处理方法、智能终端及存储介质,该方法包括:获取原始图像中包括的至少一个图像区域;在至少一个图像区域中确定目标图像区域;调整目标图像区域中图像的属性信息,确认或生成原始图像对应的目标图像。在上述方法中,获取原始图像中包括的至少一个图像区域;在至少一个图像区域中确定目标图像区域,可以避免对原始图像进行抠图处理,简化了得到目标图像区域的操作,提高了生成目标图像的效率。The present application provides an image processing method, an intelligent terminal, and a storage medium, the method comprising: acquiring at least one image area included in an original image; determining a target image area in at least one image area; adjusting attribute information of an image in the target image area , confirm or generate the target image corresponding to the original image. In the above method, at least one image area included in the original image is acquired; the target image area is determined in at least one image area, which can avoid matting processing on the original image, simplifies the operation of obtaining the target image area, and improves the generation target. image efficiency.
附图说明Description of drawings
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。为了更清楚地说明本申请实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,对于本领域普通技术人员而言,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the accompanying drawings that need to be used in the description of the embodiments will be briefly introduced below. Obviously, for those of ordinary skill in the art, the Under the premise, other drawings can also be obtained based on these drawings.
图1为实现本申请各个实施例的一种智能终端的硬件结构示意图;FIG. 1 is a schematic diagram of a hardware structure of an intelligent terminal implementing various embodiments of the present application;
图2为本申请实施例提供的一种通信网络系统架构图;FIG. 2 is a system architecture diagram of a communication network provided by an embodiment of the present application;
图3是根据第一实施例示出的控制器140的硬件结构示意图;Fig. 3 is a schematic diagram of the hardware structure of the controller 140 shown according to the first embodiment;
图4是根据第一实施例示出的网络节点150的硬件结构示意图;Fig. 4 is a schematic diagram of a hardware structure of a network node 150 shown according to the first embodiment;
图5是根据第一实施例示出的网络节点160的硬件结构示意图;FIG. 5 is a schematic diagram of a hardware structure of a network node 160 shown according to the first embodiment;
图6是根据第二实施例示出的控制器170的硬件结构示意图;FIG. 6 is a schematic diagram of the hardware structure of the controller 170 shown according to the second embodiment;
图7是根据第二实施例示出的网络节点180的硬件结构示意图;FIG. 7 is a schematic diagram of a hardware structure of a network node 180 according to a second embodiment;
图8为本申请提供的应用场景示意图;FIG. 8 is a schematic diagram of an application scenario provided by this application;
图9为本申请提供的一种图像处理方法的流程图;FIG. 9 is a flowchart of an image processing method provided by the present application;
图10为本申请提供的根据标签得到至少一个图像区域的示意图;FIG. 10 is a schematic diagram of obtaining at least one image region according to labels provided by the present application;
图11为本申请提供的根据图像类别得到至少一个图像区域的示意图;FIG. 11 is a schematic diagram of obtaining at least one image region according to the image category provided by the present application;
图12为本申请提供的一种显示各属性对应的属性控件的示意图;FIG. 12 is a schematic diagram of an attribute control corresponding to each attribute provided by the present application;
图13为本申请提供的获取至少一个图像区域的方法流程图;FIG. 13 is a flowchart of a method for acquiring at least one image area provided by the present application;
图14为本申请提供的另一种图像处理方法的流程图;FIG. 14 is a flowchart of another image processing method provided by the present application;
图15为本申请提供的图像处理装置的结构示意图;FIG. 15 is a schematic structural diagram of an image processing device provided by the present application;
图16为本申请提供的智能终端的硬件示意图。FIG. 16 is a schematic diagram of the hardware of the smart terminal provided by the present application.
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。通过上述附图,已示出本申请明确的实施例,后文中将有更详细的描述。这些附图和文字描述并不是为了通过任何方式限制本申请构思的范围,而是通过参考特定实施例为本领域技术人员说明本申请的概念。The realization, functional features and advantages of the present application will be further described in conjunction with the embodiments and with reference to the accompanying drawings. By means of the above drawings, specific embodiments of the present application have been shown, which will be described in more detail hereinafter. These drawings and text descriptions are not intended to limit the scope of the concept of the application in any way, but to illustrate the concept of the application for those skilled in the art by referring to specific embodiments.
本申请的实施方式Embodiment of this application
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本申请相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本申请的一些方面相一致的装置和方法的例子。Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numerals in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this application. Rather, they are merely examples of apparatuses and methods consistent with aspects of the present application as recited in the appended claims.
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素,此外,本申请不同实施例中具有同样命名的部件、特征、要素可能具有相同含义,也可能具有不同含义,其具体含义需以其在该具体实施例中的解释或者进一步结合该具体实施例中上下文进行确定。It should be noted that, in this document, the term "comprising", "comprising" or any other variation thereof is intended to cover a non-exclusive inclusion such that a process, method, article or apparatus comprising a set of elements includes not only those elements, It also includes other elements not expressly listed, or elements inherent in the process, method, article, or device. Without further limitations, an element defined by the statement "comprising a..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element. In addition, different implementations of the present application Components, features, and elements with the same name in the example may have the same meaning, or may have different meanings, and the specific meaning shall be determined based on the explanation in the specific embodiment or further combined with the context in the specific embodiment.
应当理解,尽管在本文可能采用术语第一、第二、第三等来描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本文范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。取决于语境,如在此所使用的词语"如果"可以被解释成为"在……时"或"当……时"或"响应于确定"。再者,如同在本文中所使用的,单数形式“一”、“一个”和“该”旨在也包括复数形式,除非上下文中有相反的指示。应当进一步理解,术语“包含”、“包括”表明存在的特征、步骤、操作、元件、组件、项目、种类、和/或组,但不排除一个或多个其他特征、步骤、操作、元件、组件、项目、种类、和/或组的存在、出现或添加。本申请使用的术语“或”、“和/或”、“包括以下至少一个”等可被解释为包括性的,或意味着任一个或任何组合。例如,“包括以下至少一个:A、B、C”意味着“以下任一个:A;B;C;A和B;A和C;B和C;A和B和C”,再如,“A、B或C”或者“A、B和/或C”意味着“以下任一个:A;B;C;A和B;A和C;B和C;A和B和C”。仅当元件、功能、步骤或操作的组合在某些方式下内在地互相排斥时,才会出现该定义的例外。It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of this document, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "at" or "when" or "in response to a determination". Furthermore, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It should be further understood that the terms "comprising", "comprising" indicate the presence of features, steps, operations, elements, components, items, species, and/or groups, but do not exclude one or more other features, steps, operations, elements, The existence, occurrence, or addition of components, items, categories, and/or groups. The terms "or", "and/or", "comprising at least one of" and the like used in this application may be interpreted as inclusive, or mean any one or any combination. For example, "including at least one of the following: A, B, C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C", another example, " A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". Exceptions to this definition will only arise when combinations of elements, functions, steps or operations are inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts of the embodiments of the present application are shown in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict restriction on the order in which these steps are executed, and they may be executed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages; these sub-steps or stages are not necessarily completed at the same moment but may be executed at different times, and they are not necessarily executed sequentially, but may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Depending on the context, the words "if" and "when" as used herein may be interpreted as "upon", "when", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)".
It should be noted that step designations such as S601, S602, and S603 are used herein to express the corresponding content more clearly and concisely; they do not constitute a substantive limitation on the execution order. In specific implementations, those skilled in the art may, for example, execute S603 before S601 and S602, and such variations fall within the protection scope of the present application. It should also be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are employed only to facilitate the description of the present application and have no specific meaning in themselves; therefore, "module", "component", and "unit" may be used interchangeably.
Smart terminals may be implemented in various forms. For example, the smart terminal described in this application may include mobile smart terminals such as smartphones, personal computers (for example, tablet computers, notebook computers, palmtop computers, PDAs (Personal Digital Assistants), PMPs (Portable Media Players), netbooks, and desktop computers), cameras, video cameras, virtual reality devices, navigation devices, wearable devices, smart bracelets, and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
In the subsequent description, a smart terminal is taken as an example. Those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the configurations according to the embodiments of the present application can also be applied to fixed-type terminals.
Referring to FIG. 1, which is a schematic diagram of the hardware structure of a smart terminal implementing various embodiments of the present application, the smart terminal 100 may include components such as an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will understand that the smart terminal structure shown in FIG. 1 does not constitute a limitation on the smart terminal; the smart terminal may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
Each component of the smart terminal is described below with reference to FIG. 1:
The radio frequency unit 101 may be used to receive and send signals during the transmission and reception of information or during a call. Specifically, downlink information from a base station is received and forwarded to the processor 110 for processing, and uplink data is sent to the base station. Generally, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System of Mobile communication), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), TDD-LTE (Time Division Duplexing-Long Term Evolution), 5G, and the like.
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the smart terminal can help the user send and receive e-mails, browse web pages, and access streaming media, providing the user with wireless broadband Internet access. Although FIG. 1 shows the WiFi module 102, it can be understood that it is not an essential component of the smart terminal and may be omitted as required without changing the essence of the present application.
When the smart terminal 100 is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like, the audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 may also provide audio output related to specific functions performed by the smart terminal 100 (for example, a call signal reception sound or a message reception sound). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a GPU (Graphics Processing Unit) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or sent via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operating modes such as a phone call mode, a recording mode, and a voice recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 and output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated in the process of receiving and transmitting audio signals.
The smart terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 1061 and/or the backlight when the smart terminal 100 is moved close to the ear. As one type of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes) and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the posture of a mobile phone (such as switching between landscape and portrait orientation, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Other sensors that may also be configured on a mobile phone, such as a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, and infrared sensor, are not described in detail here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the smart terminal. Optionally, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, can collect touch operations performed by the user on or near it (for example, operations performed on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. Optionally, the touch detection device detects the user's touch position and the signal brought by the touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072. Optionally, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not limited here.
Optionally, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in FIG. 1 the touch panel 1071 and the display panel 1061 are implemented as two independent components to realize the input and output functions of the smart terminal, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to realize the input and output functions of the smart terminal, which is not limited here.
The interface unit 108 serves as an interface through which at least one external device can be connected to the smart terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (for example, data information or power) from an external device and transmit the received input to one or more elements within the smart terminal 100, or may be used to transmit data between the smart terminal 100 and the external device.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area. Optionally, the program storage area may store an operating system, application programs required by at least one function (such as a sound playback function and an image playback function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 109 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The processor 110 is the control center of the smart terminal. It connects all parts of the entire smart terminal through various interfaces and lines, and monitors the smart terminal as a whole by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby executing the various functions of the smart terminal and processing data. The processor 110 may include one or more processing units. Preferably, the processor 110 may integrate an application processor and a modem processor; optionally, the application processor mainly handles the operating system, user interfaces, and application programs, while the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 110.
The smart terminal 100 may also include a power supply 111 (such as a battery) that supplies power to each component. Preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
Although not shown in FIG. 1, the smart terminal 100 may also include a Bluetooth module and the like, which are not described in detail here.
To facilitate understanding of the embodiments of the present application, the communication network system on which the smart terminal of the present application is based is described below.
Referring to FIG. 2, FIG. 2 is an architecture diagram of a communication network system provided by an embodiment of the present application. The communication network system is an LTE system of universal mobile telecommunication technology, which includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, which are communicatively connected in sequence.
Optionally, the UE 201 may be the above-described terminal 100, which is not repeated here.
The E-UTRAN 202 includes an eNodeB 2021, other eNodeBs 2022, and the like. Optionally, the eNodeB 2021 may be connected to the other eNodeBs 2022 through a backhaul (for example, an X2 interface); the eNodeB 2021 is connected to the EPC 203 and can provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. Optionally, the MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 provides registers for managing functions such as a home location register (not shown) and stores user-specific information about service characteristics, data rates, and so on. All user data may be sent through the SGW 2034; the PGW 2035 may provide IP address allocation for the UE 201 and other functions; the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, and it selects and provides available policy and charging control decisions for a policy and charging enforcement function unit (not shown).
The IP services 204 may include the Internet, an intranet, an IMS (IP Multimedia Subsystem), or other IP services.
Although the LTE system is taken as an example above, those skilled in the art should know that the present application is not only applicable to the LTE system, but also applicable to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems (such as 5G), which are not limited here.
Based on the above hardware structure of the smart terminal and the communication network system, the embodiments of the present application are proposed.
In the related art, first, the operator performs matting (cutout) processing on the original image through the smart terminal according to matting experience to obtain a partial image of the original image; second, the operator adjusts the partial image through the smart terminal according to image adjustment experience to obtain an adjusted partial image; finally, the operator replaces the partial image in the original image with the adjusted partial image through the smart terminal to obtain the target image corresponding to the original image. In the above related art, the operator performs matting processing on the original image according to matting experience to obtain the partial image, which makes the operation of obtaining the target image complicated and results in low efficiency in obtaining the target image.
In the present application, in order to improve the efficiency of obtaining the target image, the applicant provides an image processing method that divides the original image into multiple partitions. The user can select any one of the partitions as required and adjust the attribute information of that partition to obtain the target image, thereby simplifying the operation of obtaining the target image and improving the efficiency of obtaining the target image.
FIG. 3 is a schematic diagram of a hardware structure of a controller 140 provided by the present application. The controller 140 includes a memory 1401 and a processor 1402. The memory 1401 is used to store program instructions, and the processor 1402 is used to call the program instructions in the memory 1401 to execute the steps performed by the controller in the first method embodiment described above. The implementation principle and beneficial effects are similar and are not repeated here.
Optionally, the controller further includes a communication interface 1403, which may be connected to the processor 1402 through a bus 1404. The processor 1402 may control the communication interface 1403 to implement the receiving and sending functions of the controller 140.
FIG. 4 is a schematic diagram of a hardware structure of a network node 150 provided by the present application. The network node 150 includes a memory 1501 and a processor 1502. The memory 1501 is used to store program instructions, and the processor 1502 is used to call the program instructions in the memory 1501 to execute the steps performed by the head node in the first method embodiment described above. The implementation principle and beneficial effects are similar and are not repeated here.
Optionally, the network node further includes a communication interface 1503, which may be connected to the processor 1502 through a bus 1504. The processor 1502 may control the communication interface 1503 to implement the receiving and sending functions of the network node 150.
FIG. 5 is a schematic diagram of a hardware structure of a network node 160 provided by the present application. The network node 160 includes a memory 1601 and a processor 1602. The memory 1601 is used to store program instructions, and the processor 1602 is used to call the program instructions in the memory 1601 to execute the steps performed by the intermediate node and the tail node in the first method embodiment described above. The implementation principle and beneficial effects are similar and are not repeated here.
Optionally, the network node further includes a communication interface 1603, which may be connected to the processor 1602 through a bus 1604. The processor 1602 may control the communication interface 1603 to implement the receiving and sending functions of the network node 160.
FIG. 6 is a schematic diagram of a hardware structure of a controller 170 provided by the present application. The controller 170 includes a memory 1701 and a processor 1702. The memory 1701 is used to store program instructions, and the processor 1702 is used to call the program instructions in the memory 1701 to execute the steps performed by the controller in the second method embodiment described above. The implementation principle and beneficial effects are similar and are not repeated here.
FIG. 7 is a schematic diagram of a hardware structure of a network node 180 provided by the present application. The network node 180 includes a memory 1801 and a processor 1802. The memory 1801 is used to store program instructions, and the processor 1802 is used to call the program instructions in the memory 1801 to execute the steps performed by the head node in the second method embodiment described above. The implementation principle and beneficial effects are similar and are not repeated here.
The above integrated modules implemented in the form of software functional modules may be stored in a computer-readable storage medium. The software functional modules are stored in a storage medium and include several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute some of the steps of the methods of the various embodiments of the present application.
In the above embodiments, the implementation may be realized wholly or partly by software, hardware, firmware, or any combination thereof. When software is used, the implementation may be wholly or partly in the form of a computer program product. A computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
The application scenario of the image processing method provided by the present application is described below with reference to FIG. 8.
FIG. 8 is a schematic diagram of an application scenario provided by the present application. As shown in FIG. 8, it includes an original image 11, an original image 12 including at least one image partition, and a target image 13. For example, the at least one image partition includes four image areas, namely 1, 2, 3, and 4.
Optionally, any one of the image areas in the at least one image partition may be determined as the target image partition. For example, image area 4 is determined as the target image partition. Further, the attribute information (for example, color) of the target image partition (image area 4) is adjusted, and the target image 13 is confirmed or generated.
In the present application, the target image area is determined from at least one image area, the attribute information of the target image area is adjusted, and the target image corresponding to the original image is confirmed or generated. This avoids performing matting processing on the original image to obtain a partial image, simplifies the operation of obtaining the target image area and, in turn, the operation of obtaining the target image, and improves the efficiency of obtaining the target image.
The technical solution of the present application and how it solves the above technical problems are described in detail below with specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present application are described below with reference to the accompanying drawings. The execution subject of the image processing method provided by the present application may be a smart terminal, or an image processing apparatus provided on the smart terminal, and the image processing apparatus may be implemented by a combination of software and/or hardware. Optionally, the smart terminal may be a wired terminal or a wireless terminal. For example, the wired terminal may be a desktop computer, and the wireless terminal may be a device such as a tablet computer, a notebook computer, a personal digital assistant (PDA), or a mobile phone. The software may be, for example, any image editing software.
FIG. 9 is a flowchart of an image processing method provided by the present application. As shown in FIG. 9, the method includes:
Optionally, the execution subject of the image processing method provided by the present application may be the above smart terminal, or an image processing apparatus provided in the smart terminal. The image processing apparatus may be implemented by a combination of software and/or hardware, and the software includes but is not limited to various image (retouching) processing software.
S201: Acquire at least one image area included in an original image.
Optionally, the at least one image area may be acquired in the following two ways (way 11 and way 12).
Way 11: Perform pixel detection processing on the original image through a pixel detection model to obtain a label of each pixel in the original image, and acquire at least one image area from the original image according to the labels of the pixels.
Optionally, acquiring at least one image area from the original image according to the labels of the pixels includes: partitioning the original image according to the labels of the pixels to obtain at least one image area. Optionally, the pixels included in each image area have the same label.
Optionally, the pixel detection model may be a semantic segmentation algorithm model. The label of a pixel may be, for example, a preset number such as 1, 2, 3, 4, or 5.
Based on the above way 11, obtaining at least one image area according to the pixel labels is described below with reference to FIG. 10. FIG. 10 is a schematic diagram of obtaining at least one image area according to labels provided by the present application. As shown in FIG. 10, it includes the original image 31 and the label of each pixel in the original image 31; for example, the labels are 1, 2, 3, 4, or 5.
Further, the original image is partitioned according to the labels to obtain at least one image area. For example, the pixels with label 1 form one image area, and the pixels with label 2 form another image area.
Optionally, when the pixels corresponding to the same label are not adjacent in the original image, that label may form N image areas in the at least one image area, where N is an integer greater than or equal to 1. For example, label 5 may form two image areas in the at least one image area, namely the image areas bounded by the dotted lines 32 and 33, as illustrated by the sketch below.
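The following is a minimal sketch of way 11, assuming a semantic segmentation model that returns a per-pixel integer label map; the function name segment_pixels is a placeholder for whatever model is used, and connected-component analysis is used here so that non-adjacent pixels with the same label become separate image areas.

```python
import numpy as np
from scipy import ndimage

def regions_from_label_map(label_map: np.ndarray) -> list:
    """Split a per-pixel label map (H x W, integer labels) into image areas.

    Pixels with the same label form one area; if pixels sharing a label are
    not adjacent, each connected component becomes its own area (e.g. label 5
    yielding the two areas bounded by dotted lines 32 and 33 in FIG. 10).
    """
    areas = []  # one boolean mask per image area
    for lab in np.unique(label_map):
        components, count = ndimage.label(label_map == lab)
        for comp_id in range(1, count + 1):
            areas.append(components == comp_id)
    return areas

# Hypothetical usage, where segment_pixels stands in for any semantic
# segmentation model returning an H x W array of integer labels:
# label_map = segment_pixels(original_image)
# image_areas = regions_from_label_map(label_map)
```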
Way 12: Perform image detection processing on the original image through a pixel classification model to obtain at least one image category, and partition the original image according to the at least one image category to obtain at least one image area.
Optionally, the pixel classification model may be an instance segmentation algorithm model.
Optionally, the at least one image category may include a person category, a sky category, an ocean category, a grass category, a road category, a tree category, a building category, a vehicle category, and the like.
Based on the above way 12, the method of obtaining at least one image area according to image categories is described below with reference to FIG. 11. FIG. 11 is a schematic diagram of obtaining at least one image area according to image categories provided by the present application. As shown in FIG. 11, it includes the original image 41 and at least one image category in the original image 41.
For example, when the at least one image category includes a person category, a sky category, a road category, a tree category, a building category, and a vehicle category, the image categories are distinguished by different colors as shown in FIG. 11.
Optionally, each image category may correspond to M image areas in the at least one image area, where M is an integer greater than or equal to 1. For example, the vehicle category corresponds to two image areas in the at least one image area. A sketch of this category-based partitioning follows.
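The sketch below illustrates way 12 under the assumption that the pixel classification model outputs an H x W array of class indices; classify_pixels and the category list are placeholders, not part of the original disclosure.

```python
import numpy as np

def regions_from_categories(category_map: np.ndarray, categories: list) -> dict:
    """Group pixels by predicted image category into one mask per category.

    `category_map` is an H x W array of integer class indices and `categories`
    lists the class names (e.g. "person", "sky", "vehicle"). All pixels of one
    category land in the same partition here; splitting one category into its
    separate instances is handled later (see the FIG. 13 embodiment).
    """
    return {name: category_map == idx for idx, name in enumerate(categories)}

# Hypothetical usage with a placeholder classifier classify_pixels:
# category_map = classify_pixels(original_image)  # H x W class indices
# masks = regions_from_categories(category_map,
#                                 ["person", "sky", "road", "tree",
#                                  "building", "vehicle"])
```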
S202: Determine a target image area in the at least one image area.
Optionally, the target image area may be determined in the following three ways (ways 21, 22, and 23).
Way 21: Display the edge of each image area; receive a touch instruction, where the touch instruction includes a target position; and determine, as the target image area, the image area corresponding to the edge that surrounds the target position in the at least one image area.
For example, based on FIG. 10, when the edge surrounding the target position is the dotted line 32, the image area bounded by the dotted line 32 is determined as the target image area; a sketch of this hit test is given below.
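A minimal sketch of way 21, assuming the image areas are available as boolean masks (as in the earlier sketches) and that the touch instruction reports the target position as pixel coordinates:

```python
import numpy as np

def pick_target_area(areas: list, target_pos: tuple):
    """Return the image area whose edge encloses the touched target position.

    `areas` is a list of H x W boolean masks and `target_pos` the (row, col)
    of the touch position from the touch instruction; the mask containing that
    pixel is taken as the target image area.
    """
    row, col = target_pos
    for mask in areas:
        if mask[row, col]:
            return mask
    return None  # the touch fell outside every detected image area
```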
Way 22: Display the edge of each image area; receive a touch instruction on a target edge; and determine, as the target image area, the image area surrounded by the target edge in the at least one image area.
The target edge is at least one of the following: any one of the edges of the image areas; or an edge obtained by adjusting any one of the edges of the image areas according to the operation trajectory of a closed-area drawing operation input by the user.
Optionally, in practical applications, if the at least one image area does not contain the image area the user wants, the user may input an operation trajectory, and the image area in the original image bounded by the operation trajectory is determined as the target image area.
Way 23: Display the edge of each image area; in response to a cancel-display operation, cancel the display of the edges of the image areas; and in response to a closed-area drawing operation performed on the original image, acquire the operation trajectory of the closed-area drawing operation and determine the image area in the original image bounded by the operation trajectory as the target image area.
The cancel operation may be a selection operation input through a cancel control.
Optionally, the cancel operation may also be a cancel operation on any one or more of the edges of the image areas. Optionally, in response to the cancel operation, the display of the edge selected by the cancel operation is cancelled.
Optionally, the edges of the image areas in the above ways 21, 22, and 23 may be displayed in the following way 31 or way 32.
Way 31: Display the edge of each image area according to a pre-stored preset format. Optionally, the preset format may include the line type used to display the edge, the color of the line type, the brightness, and the like.
Way 32: Receive a selection instruction for a target format among at least one preset format, and display the edge of each image area according to the target format.
Optionally, the at least one preset format differs in line type, line color, brightness, or the like.
Optionally, the target format may also be obtained in the following way: in response to a selection operation on a target line type among at least one line type, a selection operation on a target color among at least one color, and a selection operation on a target brightness among at least one brightness, determine the target line type, target color, and target brightness as the target format.
S203: Adjust the attribute information of the image in the target image area, and confirm or generate the target image corresponding to the original image.
Optionally, the target image may be confirmed or generated in the following two ways (ways 51 and 52).
Way 51: Display at least one style adjustment parameter set, where each style adjustment parameter set includes preset attribute values corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, hue, saturation, highlight, low light, and sharpness; receive a selection instruction for selecting a target style adjustment parameter set from the at least one style adjustment parameter set; and adjust the attribute information of the image in the target image area by using the target style adjustment parameter set, so as to confirm or generate the target image corresponding to the original image.
Optionally, a style may correspond to a label or image category involved in the present application. For example, when the label is 1 (for example, indicating a person), the style is a person style; when the image category is the person category, the style is a person style. A sketch of such style sets follows.
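The following is a minimal sketch of style adjustment parameter sets, assuming each set is simply a mapping from attribute name to a preset value; the style names and numeric values are illustrative only and not taken from the disclosure.

```python
# Each style adjustment parameter set holds preset attribute values.
STYLE_SETS = {
    "person":  {"brightness": 1.05, "contrast": 1.10, "saturation": 1.00},
    "sky":     {"brightness": 1.00, "contrast": 1.05, "saturation": 1.20},
    "default": {"brightness": 1.00, "contrast": 1.00, "saturation": 1.00},
}

def pick_style_set(label_or_category: str) -> dict:
    """Select the target style adjustment parameter set for a region's label
    or image category, falling back to a neutral set."""
    return STYLE_SETS.get(label_or_category, STYLE_SETS["default"])
```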
Way 52: Receive an attribute adjustment instruction for the image in the target image area, where the attribute adjustment instruction includes a custom attribute value corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, hue, saturation, highlight, low light, and sharpness;
and set the initial value, corresponding to the at least one attribute, included in the attribute information of the image in the target image area to the custom attribute value corresponding to the at least one attribute, so as to confirm or generate the target image corresponding to the original image.
Optionally, receiving the attribute adjustment instruction for the image in the target image area includes: displaying the attribute controls corresponding to the above attributes, and receiving, through the attribute controls, the attribute adjustment instruction for the image in the target image area.
Optionally, in way 52, the custom attribute values corresponding to the at least one attribute included in the attribute adjustment instruction may be saved and set as one style adjustment parameter set of way 51, so that the at least one style adjustment parameter set can be displayed in way 51.
FIG. 12 is a schematic diagram of displaying the attribute controls corresponding to the attributes provided by the present application. As shown in FIG. 12, it includes: the attribute control 51 corresponding to brightness, the attribute control 52 corresponding to contrast, the attribute control 53 corresponding to hue, the attribute control 54 corresponding to saturation, the attribute control 55 corresponding to highlight, the attribute control 56 corresponding to low light, and the attribute control 57 corresponding to sharpness. Exemplarily, as shown in FIG. 8, the attribute information of the image in the target image area 4 of the original image 11 (image 12) is adjusted to obtain the target image 13; compared with the original image 11, the attribute information of the image in the target image area 4 of the target image 13 is different. A sketch of applying such attribute values inside the target image area follows.
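The sketch below shows one way the preset or custom attribute values could be applied only inside the target image area; the specific brightness/contrast/saturation formulas are common image-editing conventions assumed here for illustration, not the patent's prescribed implementation.

```python
import numpy as np

def apply_attributes(image: np.ndarray, mask: np.ndarray, params: dict) -> np.ndarray:
    """Adjust brightness/contrast/saturation of the pixels inside `mask` only.

    `image` is an H x W x 3 uint8 RGB array, `mask` an H x W boolean mask of
    the target image area, and `params` a style set or custom attribute values.
    Only a few attributes are shown; highlight, low light and sharpness would
    be handled analogously.
    """
    out = image.astype(np.float32)
    region = out[mask]                                   # (K, 3) pixels in the area

    # Contrast: scale around mid-grey, then brightness: global gain.
    region = (region - 128.0) * params.get("contrast", 1.0) + 128.0
    region = region * params.get("brightness", 1.0)

    # Saturation: blend each pixel with its own luminance.
    luma = region @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    region = luma[:, None] + (region - luma[:, None]) * params.get("saturation", 1.0)

    out[mask] = region
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical usage, reusing the earlier sketches:
# target_image = apply_attributes(original, target_mask, pick_style_set("person"))
```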
The image processing method provided by the embodiment of FIG. 9 includes: acquiring at least one image area included in an original image; determining a target image area in the at least one image area; and adjusting the attribute information of the image in the target image area to confirm or generate the target image corresponding to the original image. In this method, acquiring the at least one image area included in the original image and determining the target image area in the at least one image area avoids matting processing on the original image, simplifies the operation of obtaining the target image area, and improves the efficiency of generating the target image.
On the basis of the above embodiments, the at least one image area included in the original image may also be acquired by the following method; for details, refer to FIG. 13.
FIG. 13 is a flowchart of a method for acquiring at least one image area provided by the present application. As shown in FIG. 13, the method includes:
S601: Perform pixel detection processing on the original image through a pixel detection model to obtain the label of each pixel in the original image.
S602: Partition the original image according to the labels of the pixels to obtain at least one second image area; the pixels included in each second image area have the same label.
It should be noted that the at least one second image area obtained here is similar to the at least one image area shown in FIG. 10, and the execution of S601 to S602 is similar to that of the above way 11, which is not repeated here.
S603: Partition the original image through a pixel classification model to obtain at least one first image area.
Optionally, S603 specifically includes: performing image detection processing on the original image through the pixel classification model to obtain at least one image category, and partitioning the original image according to the at least one image category to obtain at least one first image area, where each first image area is an area corresponding to one image category.
It should be noted that the at least one first image area is similar to the at least one image area shown in FIG. 11, and the specific execution of S603 is similar to the above way 12, which is not repeated here.
S604: Segment the at least one first image area according to the at least one second image area to obtain the at least one image area.
It should be noted that dividing image areas according to image categories places different images belonging to the same image category into the same image partition. As shown in FIG. 11, different vehicle images in the original image are located in the same image partition.
Therefore, in the present application, the at least one second image area obtained according to the pixel labels is used to segment the at least one first image area obtained according to the image categories, so as to obtain the at least one image area. In this way, different images belonging to the same image category can be separated, which improves the accuracy of obtaining the at least one image area. A sketch of this refinement step follows.
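A minimal sketch of S604, assuming the first image areas are category masks (one per image category) and the second image areas are label-based masks, as in the earlier sketches; the intersection is one plausible way to realise the segmentation described above.

```python
import numpy as np

def split_first_areas(first_areas: dict, second_areas: list) -> list:
    """Refine category-level (first) areas with label-level (second) areas.

    Each first image area (a boolean mask per image category) is intersected
    with every second image area (masks whose pixels all share one label);
    non-empty intersections become the final image areas, so two vehicles
    covered by a single "vehicle" category mask end up as two separate areas.
    """
    final_areas = []
    for category_mask in first_areas.values():
        for label_mask in second_areas:
            combined = category_mask & label_mask
            if combined.any():
                final_areas.append(combined)
    return final_areas

# Hypothetical usage, reusing the earlier placeholder models:
# second = regions_from_label_map(segment_pixels(original_image))
# first = regions_from_categories(classify_pixels(original_image), categories)
# image_areas = split_first_areas(first, second)
```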
FIG. 14 is a flowchart of another image processing method provided by the present application. As shown in FIG. 14, the method includes:
S701: Acquire an original image.
Optionally, the original image may be a target image among multiple images stored in the smart terminal.
Optionally, the target image may be determined by the following method:
displaying the multiple images stored in the smart terminal;
receiving a selection instruction for any one of the multiple images, where the selection instruction includes the identifier of an image; and
determining, as the target image, the image having that identifier among the multiple images.
S702: Determine an image to be processed in the original image.
Optionally, the image to be processed may be determined in the following two ways (ways 31 and 32).
Way 31: Display the original image; acquire the operation trajectory of a closed-area drawing operation performed on the original image; and determine the image bounded by the operation trajectory as the image to be processed.
Acquiring the operation trajectory of the closed-area drawing operation performed on the original image includes: in response to the closed-area drawing operation performed on the original image, acquiring the operation trajectory of the closed-area drawing operation. A sketch of turning such a trajectory into a region mask follows.
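The following is a minimal sketch, assuming the operation trajectory is recorded as a list of (x, y) touch points and that OpenCV is available; filling the drawn polygon is one straightforward way to obtain the mask of the image to be processed.

```python
import numpy as np
import cv2

def mask_from_trajectory(image_shape: tuple, trajectory: list) -> np.ndarray:
    """Rasterise a closed drawing trajectory into a mask of the image to be processed.

    `trajectory` is the list of (x, y) points recorded while the user drew the
    closed area on the original image; the polygon they form is filled to
    obtain a boolean mask over the image of shape `image_shape`.
    """
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    points = np.asarray(trajectory, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(mask, [points], 255)
    return mask.astype(bool)
```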
Way 32: Perform image partition processing on the original image to obtain at least one image area; determine a target image area in the at least one image area; and determine the image in the target image area as the image to be processed.
Optionally, the at least one image area may be obtained through the above way 11, the above way 12, or the method shown in the embodiment of FIG. 13. The process of performing image partition processing on the original image to obtain the at least one image area is not repeated here.
Optionally, the target image area may be determined in the at least one image area through the method shown in the above way 21, way 22, or way 23. The process of determining the target image area in the at least one image area is not repeated here.
S703: Adjust the attribute information of the image to be processed, and confirm or generate the target image corresponding to the original image.
Optionally, S703 specifically includes: receiving an attribute adjustment instruction for the image to be processed, where the attribute adjustment instruction includes a custom attribute value corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, hue, saturation, highlight, low light, and sharpness; and setting the initial value, corresponding to the at least one attribute, included in the attribute information of the image to be processed to the custom attribute value corresponding to the at least one attribute, so as to confirm or generate the target image corresponding to the original image.
Optionally, S703 may also be executed in a manner similar to the above ways 51 and 52 to confirm or generate the target image corresponding to the original image. The specific process of executing S703 in a manner similar to ways 51 and 52 is not repeated here.
The method provided by the embodiment of FIG. 14 includes: acquiring an original image; determining an image to be processed in the original image; and adjusting the attribute information of the image to be processed to confirm or generate the target image corresponding to the original image. In this method, determining the image to be processed in the original image avoids matting processing on the original image, simplifies the operation of obtaining the target image area, and improves the efficiency of generating the target image.
Unlike the prior art, existing camera effect tuning usually suffers from the problem that scene parameters cannot be balanced. For example, in some scenes, only one of the effects of the person and the background in the image can be guaranteed: if the background is adjusted to the desired effect, the effect of the person shifts along with it and becomes worse. In the present application, through the image processing method, the person and the background can be divided into different image areas, so that the effect of the background or the person can be adjusted in a targeted manner, which both ensures the accuracy of restoring the background color and keeps the effect of the person good.
Optionally, the image processing method provided by the present application may be used in Step 2 (illumination indicator detection) of the standard (F.AWBE) to realize the locating and segmentation of the illumination indicator.
In the standard (F.AWBE), a 24-patch color card is usually used as the illumination indicator, and a series of calculations are performed on the data of the four gray patches at the bottom of the color card to judge whether the image is a uniformly illuminated image. If the requirements are not met, the image is judged to be a non-uniformly illuminated picture and discarded. This proposal can perform post-adjustment on non-uniformly illuminated images that do not meet the requirements and carry out a second detection; if the second detection meets the requirements, the adjusted image is kept in the data set, so as to expand the data set.
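Since the standard's exact calculation on the gray patches is not reproduced here, the sketch below only illustrates the idea with an assumed criterion: the mean R/G and B/G ratios of the four gray patches must agree within a tolerance for the illumination to be treated as uniform. Both the criterion and the tolerance are placeholders, not the standard's definition.

```python
import numpy as np

def is_uniform_illumination(gray_patches: list, tol: float = 0.05) -> bool:
    """Illustrative uniform-illumination check on the four bottom gray patches.

    `gray_patches` holds four (h, w, 3) RGB crops of the color card's gray
    blocks. Each patch is reduced to its mean R/G and B/G ratios; if the
    ratios of all four patches agree within `tol`, the illumination is
    treated as uniform. This stands in for the standard's own calculations.
    """
    ratios = []
    for patch in gray_patches:
        r, g, b = patch.reshape(-1, 3).mean(axis=0)
        ratios.append((r / g, b / g))
    ratios = np.asarray(ratios)
    spread = ratios.max(axis=0) - ratios.min(axis=0)
    return bool((spread < tol).all())
```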
图15为本申请提供的图像处理装置的结构示意图。如图15所示,图像处理装置10包括:处理模块11;处理模块11用于:获取原始图像中包括的至少一个图像区域;在至少一个图像区域中确定目标图像区域;调整目标图像区域中图像的属性信息,确认或生成原始图像对应的目标图像。FIG. 15 is a schematic structural diagram of an image processing device provided by the present application. As shown in Figure 15, the image processing device 10 includes: a processing module 11; the processing module 11 is used to: obtain at least one image area included in the original image; determine the target image area in at least one image area; adjust the image in the target image area The attribute information of the original image is confirmed or generated corresponding to the target image.
本申请提供的图像处理装置10可以执行上述方法实施例中的方法,其实现原理以及有益效果类似,此处不再进行赘述。The image processing device 10 provided in the present application can execute the methods in the above method embodiments, and the implementation principles and beneficial effects are similar, and will not be repeated here.
可选地,处理模块11具体用于:通过像素检测模型,对原始图像进行像素检测处理,得到原始图像中各像素的标签;根据各像素的标签,从原始图像中获取至少一个图像区域。Optionally, the processing module 11 is specifically configured to: use the pixel detection model to perform pixel detection processing on the original image to obtain the label of each pixel in the original image; obtain at least one image region from the original image according to the label of each pixel.
可选地,处理模块11具体用于:按照各像素的标签,对原始图像进行分区处理,得到至少一个第二图像区域;第二图像区域中包括的各像素的标签相同;通过像素分类模型,对原始图像进行分区处理,得到至少一个第一图像区域;按照至少一个第二图像分区,对至少一个第一图像区域进行分割处理,得到至少一个图像区域。Optionally, the processing module 11 is specifically configured to: partition the original image according to the label of each pixel to obtain at least one second image area; the labels of the pixels included in the second image area are the same; through the pixel classification model, Partition processing is performed on the original image to obtain at least one first image region; according to at least one second image partition, at least one first image region is partitioned to obtain at least one image region.
Optionally, the processing module 11 is specifically configured to: perform image detection processing on the original image through a pixel classification model to obtain at least one image category; and partition the original image according to the at least one image category to obtain at least one first image area, where a first image area is the area corresponding to an image category.
Optionally, the processing module 11 is specifically configured to: control the display module 12 to display the edge of each image area, where the display module 12 is included in the image processing device 10; receive a touch instruction, the touch instruction including a target position; and determine, as the target image area, the image area whose edge encloses the target position among the at least one image area.
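A minimal sketch of mapping the target position of a touch instruction to the enclosing image area is shown below; representing each area as a boolean mask and looking the point up directly is an illustrative shortcut which, for closed regions, is equivalent to testing the point against the displayed edge.

```python
def pick_target_region(region_masks, touch_xy):
    """Return the index of the image area whose edge encloses the touched position.

    region_masks: list of H x W boolean masks, one per image area;
    touch_xy: (x, y) pixel position carried by the touch instruction.
    """
    x, y = touch_xy
    for index, mask in enumerate(region_masks):
        if mask[y, x]:
            return index        # this image area becomes the target image area
    return None                 # the touch fell outside every detected area
```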
Optionally, the processing module 11 is specifically configured to: control the display module 12 to display the edge of each image area; receive a touch instruction on a target edge; and determine, as the target image area, the image area enclosed by the target edge among the at least one image area.
The target edge is at least one of the following: any one of the edges of the image areas; or an edge obtained by adjusting any one of the edges of the image areas according to the operation track of a closed-area drawing operation input by the user.
Optionally, the processing module 11 is specifically configured to: control the display module 12 to display the edge of each image area in a pre-stored preset format; or receive a selection instruction for a target format among at least one preset format and control the display module 12 to display the edge of each image area in the target format.
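For illustration, the sketch below overlays the edge of every image area in a selected preset format; the two formats shown are invented examples, and the OpenCV 4 contour API is an assumption rather than a requirement of the application.

```python
import cv2
import numpy as np

# Hypothetical preset formats: color and line width of the displayed edges.
PRESET_FORMATS = {
    "thin_white":  {"color": (255, 255, 255), "thickness": 1},
    "thick_green": {"color": (0, 255, 0),     "thickness": 3},
}

def draw_region_edges(image_bgr, region_masks, format_name="thin_white"):
    """Overlay the edge of each image area on a copy of the image in the chosen format."""
    style = PRESET_FORMATS[format_name]
    canvas = image_bgr.copy()
    for mask in region_masks:
        # OpenCV 4.x returns (contours, hierarchy)
        contours, _ = cv2.findContours(mask.astype(np.uint8),
                                       cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        cv2.drawContours(canvas, contours, -1, style["color"], style["thickness"])
    return canvas
```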
Optionally, the processing module 11 is specifically configured to: display at least one style adjustment parameter set, where a style adjustment parameter set includes a preset attribute value corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, chroma, saturation, highlights, shadows and sharpness; receive a selection instruction for selecting a target style adjustment parameter set from the at least one style adjustment parameter set; and adjust the attribute information of the image in the target image area using the target style adjustment parameter set, to confirm or generate the target image corresponding to the original image.
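The sketch below applies one such style adjustment parameter set to the target image area only. The preset names and values are invented, and the brightness, contrast and saturation formulas are common definitions rather than ones mandated by the present application.

```python
import numpy as np

# Illustrative style adjustment parameter sets (names and values are assumptions).
STYLE_SETS = {
    "vivid": {"brightness": 1.10, "contrast": 1.20, "saturation": 1.30},
    "soft":  {"brightness": 1.05, "contrast": 0.90, "saturation": 0.85},
}

def apply_style_to_region(image, target_mask, style_name):
    """Adjust only the pixels of the target image area with the selected preset.

    image: H x W x 3 float array in [0, 1]; target_mask: H x W boolean mask.
    """
    params = STYLE_SETS[style_name]
    region = image[target_mask].astype(np.float64)

    region = region * params["brightness"]                    # brightness: global scale
    region = (region - 0.5) * params["contrast"] + 0.5        # contrast: stretch around mid-gray
    gray = region.mean(axis=-1, keepdims=True)
    region = gray + (region - gray) * params["saturation"]    # saturation: move toward/away from gray

    target_image = image.copy()
    target_image[target_mask] = np.clip(region, 0.0, 1.0)
    return target_image     # target image corresponding to the original image
```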
Optionally, the processing module 11 is specifically configured to: receive an attribute adjustment instruction for the image in the target image area, where the attribute adjustment instruction includes a custom attribute value corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, chroma, saturation, highlights, shadows and sharpness; and set the initial value, corresponding to the at least one attribute, included in the attribute information of the image in the target image area to the custom attribute value corresponding to the at least one attribute, to confirm or generate the target image corresponding to the original image.
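As a small illustration of this alternative (custom values instead of a preset), the attribute information of the target image area can be kept as a key-value mapping whose initial values are overwritten by the values carried in the attribute adjustment instruction; the attribute names and ranges below are assumptions.

```python
def apply_custom_attributes(initial_attributes, adjustment_instruction):
    """Set the initial value of each adjusted attribute to the custom attribute value."""
    updated = dict(initial_attributes)              # attributes not mentioned keep their initial values
    for attribute, custom_value in adjustment_instruction.items():
        updated[attribute] = custom_value           # initial value -> custom attribute value
    return updated

# Usage: merge a user's adjustment instruction into the area's attribute information.
initial = {"brightness": 1.0, "contrast": 1.0, "saturation": 1.0}
print(apply_custom_attributes(initial, {"brightness": 1.2, "saturation": 0.8}))
```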
Optionally, the processing module 11 in the image processing device 10 provided by the present application may also be configured to: acquire an original image; determine an image to be processed in the original image; and adjust attribute information of the image to be processed to confirm or generate a target image corresponding to the original image.
Optionally, the processing module 11 is specifically configured to: perform image partition processing on the original image to obtain at least one image area; determine a target image area among the at least one image area; and determine the image in the target image area as the image to be processed.
Optionally, the processing module 11 is specifically configured to: perform pixel detection processing on the original image through a pixel detection model to obtain a label for each pixel in the original image; and obtain at least one image area from the original image according to the labels of the pixels.
Optionally, the processing module 11 is specifically configured to: partition the original image according to the labels of the pixels to obtain at least one second image area; partition the original image through a pixel classification model to obtain at least one first image area; and segment the at least one first image area according to the at least one second image area to obtain the at least one image area.
Optionally, the processing module 11 is specifically configured to: perform image detection processing on the original image through a pixel classification model to obtain at least one image category; and partition the original image according to the at least one image category to obtain at least one first image area, where a first image area is the area corresponding to an image category.
Optionally, the processing module 11 is specifically configured to: control the display module 12 to display the edge of each image area; receive a touch instruction, the touch instruction including a target position; and determine, as the target image area, the image area whose edge encloses the target position among the at least one image area.
Optionally, the processing module 11 is specifically configured to: control the display module 12 to display the edge of each image area; receive a touch instruction on a target edge; and determine, as the target image area, the image area enclosed by the target edge among the at least one image area.
The target edge is at least one of the following: any one of the edges of the image areas; or an edge obtained by adjusting any one of the edges of the image areas according to the operation track of a closed-area drawing operation input by the user.
Optionally, the processing module 11 is specifically configured to: control the display module 12 to display the edge of each image area in a pre-stored preset format; or receive a selection instruction for a target format among at least one preset format and control the display module 12 to display the edge of each image area in the target format.
Optionally, the processing module 11 is specifically configured to: display at least one style adjustment parameter set, where a style adjustment parameter set includes a preset attribute value corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, chroma, saturation, highlights, shadows and sharpness; receive a selection instruction for selecting a target style adjustment parameter set from the at least one style adjustment parameter set; and adjust the attribute information of the image in the target image area using the target style adjustment parameter set, to confirm or generate the target image corresponding to the original image.
Optionally, the processing module 11 is specifically configured to: receive an attribute adjustment instruction for the image to be processed, where the attribute adjustment instruction includes a custom attribute value corresponding to at least one attribute, and the at least one attribute includes at least one of brightness, contrast, chroma, saturation, highlights, shadows and sharpness; and set the initial value, corresponding to the at least one attribute, included in the attribute information of the image to be processed to the custom attribute value corresponding to the at least one attribute, to confirm or generate the target image corresponding to the original image.
The image processing device 10 provided by the present application can execute the methods in the above method embodiments; its implementation principles and beneficial effects are similar and are not repeated here.
FIG. 16 is a schematic diagram of the hardware of the smart terminal provided by the present application. As shown in FIG. 16, the smart terminal 20 may include a transceiver 21, a memory 22 and a processor 23. Optionally, the transceiver 21 may include a transmitter and/or a receiver. The transmitter may also be referred to as a sender, a transmitting machine, a transmit port, a transmit interface, or the like; the receiver may also be referred to as a receiving machine, a receive port, a receive interface, or the like. Exemplarily, the transceiver 21, the memory 22 and the processor 23 are connected to one another through a bus.
The memory 22 is configured to store computer-executable instructions. The processor 23 is configured to execute the computer-executable instructions stored in the memory 22, so that the processor 23 performs the above image processing method.
An embodiment of the present application further provides a computer-readable storage medium on which an image processing program is stored; when the image processing program is executed by a processor, the steps of the image processing method in any of the above embodiments are implemented.
The embodiments of the smart terminal and the computer-readable storage medium provided by the embodiments of the present application contain all the technical features of the above image processing method embodiments; the extended and explanatory content of the description is substantially the same as that of the above method embodiments and is not repeated here.
An embodiment of the present application further provides a computer program product. The computer program product includes computer program code which, when run on a computer, causes the computer to perform the image processing method in the various possible implementations above.
An embodiment of the present application further provides a chip, including a memory and a processor. The memory is configured to store a computer program, and the processor is configured to call and run the computer program from the memory, so that a device equipped with the chip performs the image processing method in the various possible implementations above.
The serial numbers of the above embodiments of the present application are for description only and do not represent the relative merits of the embodiments. The steps of the methods in the embodiments of the present application may be reordered, combined and deleted according to actual needs. The units of the devices in the embodiments of the present application may be combined, divided and deleted according to actual needs.
In the present application, descriptions of the same or similar terms, concepts, technical solutions and/or application scenarios are generally given in detail only at their first occurrence and, for brevity, are generally not repeated when they occur again later. When understanding the technical solutions and other content of the present application, for the same or similar terms, concepts, technical solutions and/or application scenario descriptions that are not described in detail later, reference may be made to the relevant earlier detailed descriptions.
In the present application, the description of each embodiment has its own emphasis; for parts not detailed or recorded in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
The technical features of the technical solutions of the present application may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be regarded as falling within the scope of this application.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a controlled terminal, a network device or the like) to perform the method of each embodiment of the present application.
The above embodiments may be implemented in whole or in part by software, hardware, firmware or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired means (such as coaxial cable, optical fiber or digital subscriber line) or wireless means (such as infrared, radio or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more usable media. The usable medium may be a magnetic medium (such as a floppy disk, a storage disk or a magnetic tape), an optical medium (such as a DVD), or a semiconductor medium (such as a solid state disk (SSD)), or the like.
The above are only preferred embodiments of the present application and do not thereby limit the patent scope of the present application. Any equivalent structural or process transformation made using the contents of the description and drawings of the present application, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (15)

1. An image processing method, comprising the following steps:
    S10: acquiring at least one image area included in an original image;
    S11: determining a target image area among the at least one image area;
    S12: adjusting attribute information of an image in the target image area to confirm or generate a target image corresponding to the original image.
2. The method according to claim 1, wherein step S10 comprises:
    performing pixel detection processing on the original image to obtain a label for each pixel in the original image;
    obtaining the at least one image area from the original image according to the labels of the pixels.
3. The method according to claim 2, wherein obtaining the at least one image area from the original image according to the labels of the pixels comprises:
    partitioning the original image according to the labels of the pixels to obtain at least one second image area, wherein the pixels included in a second image area have the same label;
    partitioning the original image through a pixel classification model to obtain at least one first image area;
    segmenting the at least one first image area according to the at least one second image area to obtain the at least one image area.
4. The method according to claim 3, wherein partitioning the original image through a pixel classification model to obtain at least one first image area comprises:
    performing image detection processing on the original image through the pixel classification model to obtain at least one image category;
    partitioning the original image according to the at least one image category to obtain the at least one first image area, wherein the first image area is an area corresponding to an image category.
5. The method according to any one of claims 1 to 3, wherein step S11 comprises:
    displaying an edge of each image area;
    receiving a touch instruction, the touch instruction comprising a target position;
    determining, as the target image area, the image area corresponding to the edge enclosing the target position among the at least one image area.
6. The method according to any one of claims 1 to 3, wherein step S11 comprises:
    displaying an edge of each image area;
    receiving a touch instruction on a target edge;
    determining, as the target image area, the image area enclosed by the target edge among the at least one image area;
    wherein the target edge is at least one of the following:
    any one of the edges of the image areas;
    an edge obtained by adjusting any one of the edges of the image areas according to an operation track of a closed-area drawing operation input by a user.
7. The method according to claim 5 or 6, wherein displaying an edge of each image area comprises:
    displaying the edge of each image area in a preset format; or
    receiving a selection instruction for a target format among at least one preset format, and displaying the edge of each image area in the target format.
8. The method according to any one of claims 1 to 3, wherein step S12 comprises:
    receiving an attribute adjustment instruction for the image in the target image area, the attribute adjustment instruction comprising a custom attribute value corresponding to at least one attribute;
    setting an initial value, corresponding to the at least one attribute, included in the attribute information of the image in the target image area to the custom attribute value corresponding to the at least one attribute, to confirm or generate the target image corresponding to the original image.
9. An image processing method, comprising the following steps:
    S21: determining an image to be processed in an original image;
    S22: adjusting attribute information of the image to be processed to confirm or generate a target image corresponding to the original image.
10. The method according to claim 9, wherein step S21 comprises:
    performing image partition processing on the original image to obtain at least one image area;
    determining a target image area among the at least one image area;
    determining an image in the target image area as the image to be processed.
11. The method according to any one of claims 9 to 10, wherein determining a target image area among the at least one image area comprises:
    displaying an edge of each image area;
    receiving a touch instruction, the touch instruction comprising a target position;
    determining, as the target image area, the image area corresponding to the edge enclosing the target position among the at least one image area; and/or
    displaying an edge of each image area;
    receiving a touch instruction on a target edge;
    determining, as the target image area, the image area enclosed by the target edge among the at least one image area;
    wherein the target edge is at least one of the following:
    any one of the edges of the image areas;
    an edge obtained by adjusting any one of the edges of the image areas according to an operation track of a closed-area drawing operation input by a user.
12. The method according to any one of claims 9 to 10, wherein step S22 comprises:
    displaying at least one style adjustment parameter set, the style adjustment parameter set comprising a preset attribute value corresponding to at least one attribute;
    receiving a selection instruction for selecting a target style adjustment parameter set from the at least one style adjustment parameter set;
    adjusting the attribute information of the image in the target image area using the target style adjustment parameter set, to confirm or generate the target image corresponding to the original image.
13. The method according to any one of claims 9 to 10, wherein step S22 comprises:
    receiving an attribute adjustment instruction for the image to be processed, the attribute adjustment instruction comprising a custom attribute value corresponding to at least one attribute;
    setting an initial value, corresponding to the at least one attribute, included in the attribute information of the image to be processed to the custom attribute value corresponding to the at least one attribute, to confirm or generate the target image corresponding to the original image.
14. An intelligent terminal, wherein the intelligent terminal comprises a memory and a processor; optionally, an image processing program is stored on the memory, and when the image processing program is executed by the processor, the steps of the image processing method according to any one of claims 1 to 13 are implemented.
15. A computer-readable storage medium, wherein a computer program is stored on the readable storage medium, and when the computer program is executed by a processor, the steps of the image processing method according to any one of claims 1 to 13 are implemented.
PCT/CN2021/138111 2021-12-14 2021-12-14 Image processing method, intelligent terminal, and storage medium WO2023108444A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/138111 WO2023108444A1 (en) 2021-12-14 2021-12-14 Image processing method, intelligent terminal, and storage medium

Publications (1)

Publication Number Publication Date
WO2023108444A1 true WO2023108444A1 (en) 2023-06-22

Family

ID=86774945

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/138111 WO2023108444A1 (en) 2021-12-14 2021-12-14 Image processing method, intelligent terminal, and storage medium

Country Status (1)

Country Link
WO (1) WO2023108444A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103065317A (en) * 2012-12-28 2013-04-24 中山大学 Partial color transferring method and transferring device based on color classification
US20160155010A1 (en) * 2014-11-28 2016-06-02 Canon Kabushiki Kaisha Image feature extraction method, image feature extraction apparatus, and recording medium storing program therefor
CN106020664A (en) * 2016-05-11 2016-10-12 杨夫春 Image processing method
CN106340023A (en) * 2016-08-22 2017-01-18 腾讯科技(深圳)有限公司 Image segmentation method and image segmentation device
CN107563977A (en) * 2017-08-28 2018-01-09 维沃移动通信有限公司 A kind of image processing method, mobile terminal and computer-readable recording medium
CN111127476A (en) * 2019-12-06 2020-05-08 Oppo广东移动通信有限公司 Image processing method, device, equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116542977A (en) * 2023-07-06 2023-08-04 宁德时代新能源科技股份有限公司 Image processing method, apparatus, device, storage medium, and program product
CN116542977B (en) * 2023-07-06 2024-02-06 宁德时代新能源科技股份有限公司 Image processing method, apparatus, device, storage medium, and program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21967584

Country of ref document: EP

Kind code of ref document: A1