CN108182658B - Image beautifying method and device - Google Patents
- Publication number
- CN108182658B (application CN201810088510.7A)
- Authority
- CN
- China
- Prior art keywords
- color
- target object
- image
- beautified
- beautifying
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T3/04
- G06F18/24 — Pattern recognition; analysing; classification techniques
- G06T7/73 — Image analysis; determining position or orientation of objects or cameras using feature-based methods
- G06T7/90 — Image analysis; determination of colour characteristics
- G06V10/56 — Extraction of image or video features relating to colour
- G06V20/584 — Recognition of traffic objects, e.g. of vehicle lights or traffic lights
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
Abstract
The disclosure relates to an image beautification method and device. The method includes: obtaining the colors of a target object in an image to be beautified and the proportion of each color; and beautifying the image according to those colors and proportions to obtain a beautified image. Because different beautification schemes can be applied to target objects of different colors, the method and device make image beautification more targeted and improve the beautification effect.
Description
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image beautification method and apparatus.
Background
Current image beautification methods do not take the characteristics of the target object in the image into account, so the beautification effect is poor. For example, beautification methods for vehicle images ignore the characteristics of the vehicle in the image. An image beautification method with a better beautification effect is therefore needed.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image beautification method and apparatus.
According to a first aspect of embodiments of the present disclosure, there is provided an image beautification method, including:
acquiring the color of a target object in an image to be beautified and the proportion corresponding to each color;
and beautifying the image to be beautified according to the color of the target object and the proportion corresponding to each color to obtain the beautified image.
In a possible implementation manner, acquiring the colors of a target object in an image to be beautified and the proportion corresponding to each color includes:
inputting the image to be beautified into a detection network to obtain a target area in the image to be beautified;
acquiring the color of a target object in the target area and the proportion corresponding to each color;
wherein the detection network is a faster region-based convolutional neural network (Faster RCNN).
In a possible implementation manner, obtaining the colors of the target object in the target region and the corresponding proportion of each color includes:
inputting the target area into a classification network to obtain the color of a target object in the target area and the proportion corresponding to each color; and the classification network is obtained by training according to the color of the target object in each sample image and the proportion corresponding to each color.
In a possible implementation manner, beautifying the image to be beautified according to the color of the target object and the ratio corresponding to each color to obtain a beautified image, including:
determining a beautifying scheme corresponding to the image to be beautified according to the color of the target object and the proportion corresponding to each color;
and beautifying the image to be beautified according to the beautifying scheme to obtain the beautified image.
According to a second aspect of the embodiments of the present disclosure, there is provided an image beautification device including:
the acquisition module is used for acquiring the color of a target object in the image to be beautified and the proportion corresponding to each color;
and the beautifying module is used for beautifying the image to be beautified according to the color of the target object and the proportion corresponding to each color to obtain the beautified image.
In one possible implementation manner, the obtaining module includes:
the detection submodule is used for inputting the image to be beautified into a detection network to obtain a target area in the image to be beautified;
the obtaining submodule is used for obtaining the color of the target object in the target area and the proportion corresponding to each color;
wherein the detection network is a faster region-based convolutional neural network (Faster RCNN).
In one possible implementation, the obtaining sub-module is configured to:
inputting the target area into a classification network to obtain the color of a target object in the target area and the proportion corresponding to each color; and the classification network is obtained by training according to the color of the target object in each sample image and the proportion corresponding to each color.
In one possible implementation, the beautification module includes:
the determining submodule is used for determining a beautifying scheme corresponding to the image to be beautified according to the color of the target object and the proportion corresponding to each color;
and the beautification submodule is used for beautifying the image to be beautified according to the beautification scheme to obtain a beautified image.
According to a third aspect of the embodiments of the present disclosure, there is provided an image beautification device including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to carry out the above-described method.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects: by obtaining the colors of the target object in the image to be beautified and the proportion corresponding to each color, and beautifying the image accordingly, different beautification schemes can be adopted for target objects of different colors, which increases the specificity of image beautification and improves the beautification effect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a method for image beautification according to an exemplary embodiment.
FIG. 2 is a schematic illustration of a vehicle image shown according to an exemplary embodiment.
Fig. 3 is a block diagram illustrating a classification network according to an example embodiment.
FIG. 4 is a schematic illustration of a vehicle image shown according to an exemplary embodiment.
FIG. 5 is a block diagram illustrating an image beautification apparatus according to an example embodiment.
FIG. 6 is a block diagram illustrating an image beautification apparatus according to an example embodiment.
Fig. 7 is a block diagram illustrating an apparatus 800 for image beautification according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
FIG. 1 is a flow diagram illustrating a method for image beautification according to an exemplary embodiment. The method can be used in a camera, a smart phone, a tablet computer, a PC (Personal Computer), or a wearable device having an image capturing function or an image processing function. As shown in fig. 1, the image beautification method includes steps S11 to S12.
In step S11, the colors of the target object in the image to be beautified and the corresponding proportions of the colors are obtained.
Wherein the target object may refer to an object in the image to be beautified. For example, the image to be beautified is a vehicle image, and the target object may be set as a vehicle. For another example, the image to be beautified is a building image, and the target object may be set as a building.
In a possible implementation manner, acquiring the colors of the target object in the image to be beautified and the corresponding proportions of the colors (step S11) may include: inputting the image to be beautified into a detection network to obtain a target area in the image to be beautified; and acquiring the colors of the target object in the target area and the proportion corresponding to each color; wherein the detection network is a faster region-based convolutional neural network (Faster RCNN).
It should be noted that, although the detection network is described above by taking the fast RCNN as an example, those skilled in the art can understand that the disclosure should not be limited thereto. Those skilled in the art can flexibly set the detection network, such as RCNN or Fast RCNN, according to the actual application scenario.
The target area may refer to the area where a target object is located in the image to be beautified. One or more target areas may be present in the image to be beautified. Each time a target object is detected, it can be enclosed by a rectangular frame, and the framed area is taken as a target area.
It should be noted that, although the target area is described above by taking the area selected by the rectangular box as an example, those skilled in the art can understand that the present disclosure should not be limited thereto. The person skilled in the art can flexibly set the target area according to the actual application scenario, for example, the area selected by the circular box is used as the target area.
As an example, the image to be beautified is a vehicle image, and the target object is a vehicle. The vehicle image is input to the Faster RCNN to determine the vehicle region in the vehicle image. Each time a vehicle in the vehicle image is detected, the vehicle may be selected using the rectangular frame, and the region selected by the rectangular frame may be used as a vehicle region. FIG. 2 is a schematic illustration of a vehicle image shown according to an exemplary embodiment. As shown in fig. 2, two vehicle regions in the vehicle image may be acquired.
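As an illustrative sketch (not the patent's own implementation), assuming the detection network returns axis-aligned boxes as hypothetical `(x1, y1, x2, y2)` pixel coordinates, extracting a rectangular target region from an image could look like:

```python
def crop_target_region(image, box):
    """Crop a rectangular target region from an image.

    image: 2-D list of pixel values (a detector-agnostic stand-in for an image array).
    box: (x1, y1, x2, y2) corner coordinates -- an assumed output format for a
         detection network such as Faster RCNN; far edges are exclusive.
    """
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

# A 4x4 toy "image" of integer pixel labels (row*10 + column).
image = [[r * 10 + c for c in range(4)] for r in range(4)]
region = crop_target_region(image, (1, 1, 3, 3))
```

Each detected rectangle in fig. 2 would yield one such cropped region, which is then passed on to the classification network described below.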
Wherein, the color of the target object may include a single color, such as black, white, gray, red, blue, yellow or brown, etc.; the color of the target object may also include a mixed color, such as red yellow, red black, or blue black, and the like, and the color of the target object is not limited by the present disclosure.
In a possible implementation manner, acquiring the color of the target object in the target region and the corresponding proportion of each color includes: inputting the target area into a classification network to obtain the color of a target object in the target area and the proportion corresponding to each color; and the classification network is obtained by training according to the color of the target object in each sample image and the proportion corresponding to each color.
In one possible implementation, a classification network is designed. The classification network may include a feature extraction network and a color classification network. The feature extraction network extracts feature information from the image and may be a CNN (Convolutional Neural Network), such as VGG16. The color classification network identifies and classifies the colors of the target object in the target area according to the feature information extracted by the feature extraction network, thereby obtaining the colors of the target object in the target area and the proportion corresponding to each color. The color classification network may be an RNN (Recurrent Neural Network).
According to this image beautification method, determining the colors of the target object in the target area through the RNN can largely eliminate the influence of factors such as illumination intensity and shooting angle on the perceived color of the target object. In addition, the RNN overcomes the limitation of related-art classification networks that can only output a single color class: it can output the proportion corresponding to each color, making image beautification more targeted and improving its effect.
Fig. 3 is a block diagram illustrating a classification network according to an example embodiment. As shown in fig. 3, the feature extraction network may include the Conv1-Conv5 layers of VGG16, and the color classification network may include 2 FC (fully connected) layers and n+1 hidden layers, where n is a positive integer. The number of output nodes of each FC layer may be 4096. The n+1 hidden layers can be denoted H0, H1, …, Hn-1, Hn. The input of H0 may be X0<Start> and its output Y0, R0; the input of H1 may be X1<Y0, R0> and its output Y1, R1; and so on, so that the input of Hn-1 may be Xn-1<Yn-2, Rn-2> with output Yn-1, Rn-1, and the input of Hn may be Xn<Yn-1, Rn-1> with output Yn, Rn. Yi denotes the color classification of the vehicle in the vehicle area output by the i-th hidden layer Hi, Ri denotes the corresponding proportion of each color output by Hi, and i is an integer from 0 to n.
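The hidden-layer chain just described is autoregressive: each step consumes the previous step's (color, proportion) output. A toy sketch of that control flow, with a stub standing in for the trained hidden layers (the stub and its stopping convention are assumptions, not the patent's network), could be:

```python
def decode_colors(step_fn, max_steps):
    """Unroll the autoregressive chain H0..Hn described above.

    step_fn(prev_color, prev_ratio) -> (color, ratio), or None when the
    sequence is finished. In a real system step_fn would wrap trained RNN
    cells; here it is a stub for illustration.
    """
    outputs = []
    prev = ("<start>", None)  # X0 carries a start token, as in the description
    for _ in range(max_steps):
        result = step_fn(*prev)
        if result is None:
            break
        outputs.append(result)
        prev = result  # Xi+1 is built from (Yi, Ri)
    return outputs

# Stub emitting a fixed two-colour answer, e.g. red 0.75 / black 0.25.
answers = iter([("red", 0.75), ("black", 0.25)])
step = lambda color, ratio: next(answers, None)
colors = decode_colors(step, max_steps=10)
```

The point of the loop is only the data flow: each hidden layer's (Yi, Ri) output becomes the next layer's input, mirroring the X1<Y0, R0> … Xn<Yn-1, Rn-1> chain in fig. 3.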
In a possible implementation manner, a certain number of sample images are collected and annotated; the annotated content may include the colors of the target object in each sample image and the proportion corresponding to each color. The annotated sample images are input into the classification network for training to obtain a converged classification network, which can then be used to determine the colors of the target object in an image to be beautified and the corresponding proportion of each color. A converged classification network may mean that the classification network has reached a stable state or that the number of training iterations has reached a preset threshold.
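Each annotation pairs colors with proportions. As a small illustration (the dict-based record format is an assumption, not the patent's labeling tool), a sanity check on such annotations could be:

```python
def validate_label(label, tol=1e-6):
    """Check that an annotation maps colors to proportions summing to 1.

    label: dict such as {"red": 0.75, "black": 0.25} -- a hypothetical record
    format for the annotated sample images described above.
    """
    if not label:
        return False
    # Every proportion must be a sensible fraction of the target object.
    if any(p <= 0 or p > 1 for p in label.values()):
        return False
    # Proportions over all colors should account for the whole object.
    return abs(sum(label.values()) - 1.0) <= tol

ok = validate_label({"red": 0.75, "black": 0.25})   # well-formed label
bad = validate_label({"red": 0.5, "black": 0.3})    # proportions don't sum to 1
```

Running such a check before training helps keep mislabeled samples from degrading the classification network's convergence.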
As an example, a certain number of vehicle sample images are collected and labeled for each vehicle sample image, the labeled content may include colors of vehicles in the vehicle sample images and a proportion corresponding to each color, and the labeled vehicle sample images are input to a classification network for training to obtain a converged classification network. The resulting converged classification network may be used to determine the colors of the vehicles in the vehicle image and the corresponding proportions of the respective colors.
It should be noted that, as those skilled in the art can understand, the larger the number of sample images, the better the classification effect of the trained classification network. Therefore, the reliability and stability of the classification network can be improved by increasing the number of sample images.
In step S12, the image to be beautified is beautified according to the colors of the target object and the proportion corresponding to each color, so as to obtain a beautified image.
In a possible implementation manner, beautifying the image to be beautified according to the colors of the target object and the corresponding proportion of each color to obtain a beautified image (step S12) may include: determining a beautification scheme corresponding to the image to be beautified according to the colors of the target object and the proportion corresponding to each color; and beautifying the image to be beautified according to the beautification scheme to obtain the beautified image.
The correspondence between the colors of the target object, the proportion of each color, and the beautification schemes can be set according to statistical experience. For example, the rules may include:
in the case where the color of the target object is a single color, if the color of the target object belongs to the cold color system, a beautification scheme for soft colors may be set; if the color of the target object belongs to the warm color family, a beautification scheme for brightening the color may be set.
In the case where the color of the target object is a mixed color: if the ratio of cool colors to warm colors is greater than or equal to a first threshold (for example, 2), a scheme that softens all of the target object's colors may be set; if the ratio is less than or equal to a second threshold (for example, 0.5), a scheme that brightens all of the colors may be set; and if the ratio is less than the first threshold and greater than the second threshold, a scheme that softens the cool colors of the target object and brightens its warm colors may be set.
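The threshold rules above can be sketched as a small decision function. The partition of colors into cool and warm families and the dict-based input shape are illustrative assumptions; only the thresholds and branch structure come from the rules above:

```python
COOL = {"blue", "green", "gray", "black"}    # assumed cool-color family
WARM = {"red", "yellow", "orange", "brown"}  # assumed warm-color family

def choose_scheme(proportions, t1=2.0, t2=0.5):
    """Map a colors->proportions dict to a beautification scheme per the rules above.

    t1, t2: the first and second thresholds (example values 2 and 0.5).
    """
    if len(proportions) == 1:
        # Single color: soften cool colors, brighten warm ones.
        (color,) = proportions
        return "soften" if color in COOL else "brighten"
    cool = sum(p for c, p in proportions.items() if c in COOL)
    warm = sum(p for c, p in proportions.items() if c in WARM)
    ratio = cool / warm if warm else float("inf")
    if ratio >= t1:
        return "soften all"
    if ratio <= t2:
        return "brighten all"
    return "soften cool colors, brighten warm colors"

# The worked example from the description: red with proportion 3, black with 1
# gives a cool/warm ratio of 1/3 <= 0.5, selecting the brightening scheme.
scheme = choose_scheme({"red": 3, "black": 1})
```

Note that the proportions only enter through the cool/warm ratio, so they need not be normalized to sum to 1 before calling the function.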
The specific degree and manner of softening and brightening can be designed as needed.
It should be noted that, although the beautification schemes are described above with color softening and color brightening as examples, those skilled in the art will appreciate that the present disclosure should not be limited thereto.
FIG. 4 is a schematic illustration of a vehicle image shown according to an exemplary embodiment. As an example, as shown in fig. 4, a vehicle image is input to the detection network, and a vehicle area 1 in the vehicle image is obtained. The vehicle area 1 is input into the classification network, which extracts feature information from it and identifies and classifies the colors of the vehicle based on that information, obtaining the colors of the vehicle in the vehicle area 1 and the corresponding proportion of each color, such as red (3) and black (1). According to this result, the beautification scheme corresponding to the vehicle image is determined to be the color-brightening scheme, and the vehicle image is beautified according to that scheme to obtain a beautified vehicle image.
According to the image beautification method, the colors of the target object in the image to be beautified and the proportion corresponding to each color are obtained, and the image is beautified accordingly to obtain a beautified image, so that different beautification schemes can be adopted for images whose target objects have different colors, increasing the specificity of image beautification and improving its effect.
FIG. 5 is a block diagram illustrating an image beautification apparatus according to an example embodiment. The device is used in a camera, a smart phone, a tablet computer, a PC or wearable equipment with an image shooting function or an image processing function. Referring to fig. 5, the image beautification apparatus includes:
an obtaining module 51, configured to obtain the colors of a target object in an image to be beautified and the proportion corresponding to each color; and a beautifying module 52, configured to beautify the image to be beautified according to the colors of the target object and the proportion corresponding to each color, so as to obtain a beautified image.
FIG. 6 is a block diagram illustrating an image beautification apparatus according to an example embodiment. Referring to fig. 6:
in a possible implementation manner, the obtaining module 51 includes: the detection submodule 511 is configured to input the image to be beautified into a detection network, so as to obtain a target area in the image to be beautified; an obtaining submodule 512, configured to obtain colors of the target object in the target area and a ratio corresponding to each color; wherein the detection network is a Faster regional convolutional neural network, Faster RCNN.
In one possible implementation manner, the obtaining sub-module 512 is configured to: inputting the target area into a classification network to obtain the color of a target object in the target area and the proportion corresponding to each color; and the classification network is obtained by training according to the color of the target object in each sample image and the proportion corresponding to each color.
In one possible implementation, the beautification module 52 includes: the determining submodule 521, configured to determine a beautification scheme corresponding to the image to be beautified according to the colors of the target object and the proportion corresponding to each color; and the beautification submodule 522, configured to beautify the image to be beautified according to the beautification scheme, so as to obtain a beautified image.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
According to the image beautification device, the colors of the target object in the image to be beautified and the proportion corresponding to each color are obtained, and the image is beautified accordingly to obtain a beautified image, so that different beautification schemes can be adopted for images whose target objects have different colors, increasing the specificity of image beautification and improving its effect.
Fig. 7 is a block diagram illustrating an apparatus 800 for image beautification according to an example embodiment. For example, the apparatus 800 may be a camera, a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or other device having an image capture function or an image processing function.
Referring to fig. 7, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; it may also detect a change in the position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact, and a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (8)
1. An image beautification method, comprising:
acquiring colors of a target object in an image to be beautified and a proportion corresponding to each color; and
beautifying the image to be beautified according to the colors of the target object and the proportion corresponding to each color, to obtain a beautified image;
wherein beautifying the image to be beautified according to the colors of the target object and the proportion corresponding to each color to obtain the beautified image comprises:
determining a beautification scheme corresponding to the image to be beautified according to the colors of the target object and the proportion corresponding to each color, wherein:
in a case where the color of the target object is a single color, if the color belongs to a cold color system, a color-softening beautification scheme is set; and if the color belongs to a warm color system, a color-brightening beautification scheme is set;
in a case where the colors of the target object are mixed colors, if a ratio of cold-system colors to warm-system colors of the target object is greater than or equal to a first threshold, a color-softening scheme is adopted for all colors of the target object; if the ratio is less than or equal to a second threshold, a color-brightening scheme is adopted for all colors of the target object; and if the ratio is less than the first threshold and greater than the second threshold, a color-softening scheme is adopted for the cold-system colors of the target object and a color-brightening scheme is adopted for the warm-system colors of the target object; and
beautifying the image to be beautified according to the beautification scheme to obtain the beautified image.
2. The method according to claim 1, wherein acquiring the colors of the target object in the image to be beautified and the proportion corresponding to each color comprises:
inputting the image to be beautified into a detection network to obtain a target area in the image to be beautified; and
acquiring the colors of the target object in the target area and the proportion corresponding to each color;
wherein the detection network is a Faster Region-based Convolutional Neural Network (Faster R-CNN).
3. The method according to claim 2, wherein acquiring the colors of the target object in the target area and the proportion corresponding to each color comprises:
inputting the target area into a classification network to obtain the colors of the target object in the target area and the proportion corresponding to each color, wherein the classification network is trained according to the colors of target objects in sample images and the proportion corresponding to each color.
4. An image beautification apparatus, comprising:
an acquisition module configured to acquire colors of a target object in an image to be beautified and a proportion corresponding to each color; and
a beautification module configured to beautify the image to be beautified according to the colors of the target object and the proportion corresponding to each color, to obtain a beautified image;
wherein the beautification module comprises:
a determining submodule configured to determine a beautification scheme corresponding to the image to be beautified according to the colors of the target object and the proportion corresponding to each color, wherein: in a case where the color of the target object is a single color, if the color belongs to a cold color system, a color-softening beautification scheme is set, and if the color belongs to a warm color system, a color-brightening beautification scheme is set; and in a case where the colors of the target object are mixed colors, if a ratio of cold-system colors to warm-system colors of the target object is greater than or equal to a first threshold, a color-softening scheme is adopted for all colors of the target object; if the ratio is less than or equal to a second threshold, a color-brightening scheme is adopted for all colors of the target object; and if the ratio is less than the first threshold and greater than the second threshold, a color-softening scheme is adopted for the cold-system colors of the target object and a color-brightening scheme is adopted for the warm-system colors of the target object; and
a beautification submodule configured to beautify the image to be beautified according to the beautification scheme to obtain the beautified image.
5. The apparatus according to claim 4, wherein the acquisition module comprises:
a detection submodule configured to input the image to be beautified into a detection network to obtain a target area in the image to be beautified; and
an acquisition submodule configured to acquire the colors of the target object in the target area and the proportion corresponding to each color;
wherein the detection network is a Faster Region-based Convolutional Neural Network (Faster R-CNN).
6. The apparatus according to claim 5, wherein the acquisition submodule is configured to:
input the target area into a classification network to obtain the colors of the target object in the target area and the proportion corresponding to each color, wherein the classification network is trained according to the colors of target objects in sample images and the proportion corresponding to each color.
7. An image beautification apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 3.
8. A non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1 to 3.
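The scheme-selection logic of claim 1 (single color: soften cold / brighten warm; mixed colors: compare the cold-to-warm ratio against two thresholds) can be sketched as plain Python. This is an illustrative sketch, not the patented implementation: the `select_schemes` function name, the tuple input format, and the example threshold values 0.6 and 0.4 are assumptions, since the patent leaves the thresholds unspecified.

```python
def select_schemes(colors, first_threshold=0.6, second_threshold=0.4):
    """Choose a beautification scheme per color, per claim 1.

    colors: list of (name, family, proportion) tuples, where family is
    'cold' or 'warm' and the proportions sum to 1. Thresholds are
    illustrative placeholders (the patent does not specify values).
    Returns a dict mapping color name -> 'soften' or 'brighten'.
    """
    # Single color: soften if cold, brighten if warm.
    if len(colors) == 1:
        name, family, _ = colors[0]
        return {name: 'soften' if family == 'cold' else 'brighten'}

    # Mixed colors: compare the cold-to-warm proportion ratio.
    cold = sum(p for _, f, p in colors if f == 'cold')
    warm = sum(p for _, f, p in colors if f == 'warm')
    ratio = cold / warm if warm else float('inf')

    if ratio >= first_threshold and ratio <= second_threshold:
        raise ValueError('first_threshold must exceed second_threshold')
    if ratio >= first_threshold:          # predominantly cold: soften all
        return {n: 'soften' for n, _, _ in colors}
    if ratio <= second_threshold:         # predominantly warm: brighten all
        return {n: 'brighten' for n, _, _ in colors}
    # In between: treat each color by its own family.
    return {n: ('soften' if f == 'cold' else 'brighten')
            for n, f, _ in colors}
```

Note that the middle branch is what distinguishes this scheme from a global filter: only when neither family dominates does each color get per-family treatment.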
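Claims 2–3 obtain per-color proportions from the detected target area via a Faster R-CNN detector and a trained classification network. As a minimal stand-in sketch, the proportions can be illustrated by a per-pixel hue count over the detected region; the `color_proportions` function, the hue-based cold/warm split, and the HSV boundaries are all assumptions for illustration, since the patent does not define the cold/warm partition and the real pipeline uses trained networks rather than a rule.

```python
import colorsys

def color_proportions(pixels):
    """Toy replacement for the trained classification network of claim 3.

    pixels: iterable of (r, g, b) tuples in 0-255 taken from the target
    area produced by the detector. Classifies each pixel as cold or warm
    by an assumed hue split (hue in [0.25, 0.75) of the HSV circle, i.e.
    roughly green through blue-violet, counts as cold) and returns the
    proportion of each family.
    """
    counts = {'cold': 0, 'warm': 0}
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        family = 'cold' if 0.25 <= h < 0.75 else 'warm'
        counts[family] += 1
    total = sum(counts.values()) or 1  # avoid division by zero on empty input
    return {k: v / total for k, v in counts.items()}
```

The resulting proportion dict is exactly the shape the scheme-selection step of claim 1 consumes; in the patented method the same quantities come from a classification network trained on labeled sample images.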
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810088510.7A CN108182658B (en) | 2018-01-30 | 2018-01-30 | Image beautifying method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108182658A CN108182658A (en) | 2018-06-19 |
CN108182658B true CN108182658B (en) | 2021-10-22 |
Family
ID=62551729
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109903248B (en) * | 2019-02-20 | 2021-04-16 | 厦门美图之家科技有限公司 | Method for generating automatic white balance model and image processing method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103413147A (en) * | 2013-08-28 | 2013-11-27 | 庄浩洋 | Vehicle license plate recognizing method and system |
CN105760868A (en) * | 2016-02-03 | 2016-07-13 | 广东欧珀移动通信有限公司 | Method and device for adjusting color tendency of object in image and mobile terminal |
CN107273836A (en) * | 2017-06-07 | 2017-10-20 | 深圳市深网视界科技有限公司 | A kind of pedestrian detection recognition methods, device, model and medium |
CN107302662A (en) * | 2017-07-06 | 2017-10-27 | 维沃移动通信有限公司 | A kind of method, device and mobile terminal taken pictures |
CN107358242A (en) * | 2017-07-11 | 2017-11-17 | 浙江宇视科技有限公司 | Target area color identification method, device and monitor terminal |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005286625A (en) * | 2004-03-29 | 2005-10-13 | Canon Inc | Image processing apparatus and method thereof |
CN103425989B (en) * | 2013-08-07 | 2017-04-19 | 中山大学 | Vehicle color identification method and system based on significance analysis |
CN104360829B (en) * | 2014-11-07 | 2017-06-30 | 努比亚技术有限公司 | The method and apparatus for adjusting screen color temp |
CN106855797A (en) * | 2015-12-09 | 2017-06-16 | 阿里巴巴集团控股有限公司 | The method to set up and device of a kind of interface element color |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||