CN108200347A - Image processing method, terminal and computer-readable storage medium - Google Patents
Image processing method, terminal and computer-readable storage medium
- Publication number: CN108200347A (application CN201810090432.4A)
- Authority: CN (China)
- Prior art keywords: face region, image data, adjustment, human face, pixel
- Legal status: Pending
Classifications
- H04N23/611 — Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
- H04N23/80 — Camera processing pipelines; Components thereof
- H04N23/88 — Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
- H04N9/643 — Hue control means, e.g. flesh tone control
Abstract
Embodiments of the invention disclose an image processing method, a terminal, and a computer-readable storage medium. The method includes: obtaining first image data containing a face image captured by a camera; performing automatic white balance processing on the first image data to obtain second image data; determining the face region of the second image data and converting the RGB components of each pixel in the face region into YUV components; and, for the second image data, adjusting the chrominance components in the YUV components of each pixel of the face region according to a preset chrominance adjustment method for face regions. In this way, a dedicated chrominance adjustment method is applied to the face region alone, so that colour cast in the face region of the image data can be avoided.
Description
Technical field
The present invention relates to shooting techniques for terminals, and more particularly to an image processing method, a terminal, and a computer-readable storage medium.
Background technology
At present, terminals with a shooting function, such as mobile phones, are widely used. When shooting with the camera of a terminal, the captured image data is first subjected to automatic white balance (AWB) processing, and colour correction (for example, adjustment of hue and saturation) is then applied to the white-balanced image. However, for image data containing a face image, a single colour correction method is usually applied to both the face region and the non-face region: the image captured by the camera is compared with a standard image, a correction matrix is calculated from the comparison, and this matrix serves as the colour correction matrix of the image sensor. During image processing, the matrix is used to correct all images captured by the camera, so as to obtain images closest to the true colours of the objects.
It can be seen that, in the prior art, a single colour correction method (correction matrix) is derived from the whole image data of an image containing a face. Because the chrominance of the face region differs considerably from that of the non-face region, applying the same colour correction to both can cause colour cast in the face region of the image data.
Summary of the invention
To solve the above technical problems, embodiments of the present invention provide an image processing method, a terminal, and a computer-readable storage medium, which apply a dedicated chrominance adjustment method to the face region alone and can thus prevent colour cast in the face region of the image data.
To achieve the above objectives, the technical solutions of the embodiments of the present invention are implemented as follows:
An embodiment of the present invention provides an image processing method, the method including:
obtaining first image data containing a face image captured by a camera;
performing automatic white balance processing on the first image data to obtain second image data;
determining the face region of the second image data, and converting the RGB components of each pixel of the face region into YUV components;
for the second image data, adjusting the chrominance components in the YUV components of each pixel of the face region according to a preset chrominance adjustment method for face regions.
Optionally, the method further includes:
presetting a chrominance adjustment method for the face region corresponding to each of various ambient light sources;
after obtaining the first image data, determining the ambient light source corresponding to the first image data.
Correspondingly, adjusting the chrominance components in the YUV components of each pixel of the face region for the second image data according to the preset chrominance adjustment method for face regions includes:
selecting, from the chrominance adjustment methods for face regions corresponding to the various ambient light sources, the chrominance adjustment method for the face region corresponding to the determined ambient light source; and, for the second image data, adjusting the chrominance components in the YUV components of each pixel of the face region according to the selected chrominance adjustment method for the face region.
Optionally, after the second image data is obtained, the method further includes:
determining the non-face region of the second image data;
for the second image data, adjusting the chrominance components of each pixel of the non-face region according to a preset chrominance adjustment method for non-face regions.
Optionally, the method further includes:
presetting a chrominance adjustment method for the non-face region corresponding to each of various ambient light sources;
after obtaining the first image data, determining the ambient light source corresponding to the first image data.
Correspondingly, adjusting the chrominance components of each pixel of the non-face region for the second image data according to the preset chrominance adjustment method for non-face regions includes:
selecting, from the chrominance adjustment methods for non-face regions corresponding to the various ambient light sources, the chrominance adjustment method for the non-face region corresponding to the determined ambient light source; and, for the second image data, adjusting the chrominance components of each pixel of the non-face region according to the selected chrominance adjustment method for the non-face region.
Optionally, the chrominance components include at least one of the following: the U component and the V component.
An embodiment of the present invention further provides a terminal. The terminal includes a camera, a memory, a processor, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, the following steps are implemented:
obtaining first image data containing a face image captured by the camera;
performing automatic white balance processing on the first image data to obtain second image data;
determining the face region of the second image data, and converting the RGB components of each pixel of the face region into YUV components;
for the second image data, adjusting the chrominance components in the YUV components of each pixel of the face region according to a preset chrominance adjustment method for face regions.
Optionally, the following steps are also implemented when the computer program is executed by the processor:
presetting a chrominance adjustment method for the face region corresponding to each of various ambient light sources;
after obtaining the first image data, determining the ambient light source corresponding to the first image data.
Correspondingly, the following steps are implemented when the computer program is executed by the processor:
selecting, from the chrominance adjustment methods for face regions corresponding to the various ambient light sources, the chrominance adjustment method for the face region corresponding to the determined ambient light source; and, for the second image data, adjusting the chrominance components in the YUV components of each pixel of the face region according to the selected chrominance adjustment method for the face region.
Optionally, the following steps are also implemented when the computer program is executed by the processor:
after obtaining the second image data, determining the non-face region of the second image data;
for the second image data, adjusting the chrominance components of each pixel of the non-face region according to a preset chrominance adjustment method for non-face regions.
Optionally, the following steps are also implemented when the computer program is executed by the processor:
presetting a chrominance adjustment method for the non-face region corresponding to each of various ambient light sources; after obtaining the first image data, determining the ambient light source corresponding to the first image data.
Correspondingly, the following steps are implemented when the computer program is executed by the processor:
selecting, from the chrominance adjustment methods for non-face regions corresponding to the various ambient light sources, the chrominance adjustment method for the non-face region corresponding to the determined ambient light source; and, for the second image data, adjusting the chrominance components of each pixel of the non-face region according to the selected chrominance adjustment method for the non-face region.
An embodiment of the present invention further provides a computer-readable storage medium, applied in a terminal, the computer-readable storage medium storing a computer program which, when executed by at least one processor, causes the at least one processor to perform the steps of any one of the image processing methods described above.
In the image processing method, terminal, and computer-readable storage medium provided by the embodiments of the present invention, first image data containing a face image is first obtained by shooting with a camera; automatic white balance processing is then performed on the first image data to obtain second image data; the face region of the second image data is determined, and the RGB components of each pixel of the face region are converted into YUV components; finally, for the second image data, the chrominance components in the YUV components of each pixel of the face region are adjusted according to a preset chrominance adjustment method for face regions.
In this way, the face region is adjusted using a chrominance adjustment method dedicated to face regions; since this method can be configured according to actual needs, colour cast in the face region of the image data can be avoided when chrominance adjustment is performed.
Description of the drawings
Fig. 1 is a hardware structure diagram of a mobile terminal for implementing various embodiments of the present invention;
Fig. 2 is an architecture diagram of a communication network system provided by an embodiment of the present invention;
Fig. 3 is a block diagram of the electrical structure of the camera of the terminal according to an embodiment of the present invention;
Fig. 4 is a flowchart of the image processing method according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of an interface of the terminal according to an embodiment of the present invention;
Fig. 6 is a structural diagram of the terminal according to an embodiment of the present invention.
Specific embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are employed merely to facilitate the description of the present invention and have no specific meaning in themselves. Therefore, "module", "component", and "unit" may be used interchangeably.
Terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, tablet computers, laptop computers, palmtop computers, personal digital assistants (Personal Digital Assistant, PDA), portable media players (Portable Media Player, PMP), navigation devices, wearable devices, smart bracelets, and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
In the following description, a mobile terminal is taken as an example. Those skilled in the art will appreciate that, except for elements specifically used for mobile purposes, the construction according to the embodiments of the present invention can also be applied to fixed-type terminals.
Referring to Fig. 1, which is a hardware structure diagram of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include components such as an RF (Radio Frequency) unit 101, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a first memory 109, a first processor 110, and a power supply 111. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not constitute a limitation on the mobile terminal: a mobile terminal may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
The components of the mobile terminal are described in detail below with reference to Fig. 1:
The radio frequency unit 101 may be used for receiving and sending signals during transmission and reception of information or during a call; specifically, after receiving downlink information from a base station, it forwards the information to the first processor 110 for processing, and it sends uplink data to the base station. In general, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, and a duplexer. In addition, the radio frequency unit 101 may also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System of Mobile communication), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), and TDD-LTE (Time Division Duplexing-Long Term Evolution).
The audio output unit 103 may, when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a speech recognition mode, a broadcast reception mode, or a similar mode, convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the first memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call signal reception sound or a message reception sound). The audio output unit 103 may include a loudspeaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processing unit 1041 may be stored in the first memory 109 (or another storage medium) or sent via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operating modes such as a phone call mode, a recording mode, and a speech recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 for output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated in the course of receiving and sending audio signals.
The mobile terminal 100 further includes at least one sensor 105, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved close to the ear. As a kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally along three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognise the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer pose calibration) and for vibration-recognition-related functions (such as pedometer and tapping). The mobile phone may also be provided with other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described in detail here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, collects touch operations performed by the user on or near it (for example, operations performed on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connection device according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user and the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the first processor 110, and can receive and execute commands sent by the first processor 110. In addition, the touch panel 1071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not specifically limited here.
Further, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the first processor 110 to determine the type of the touch event, and the first processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are implemented as two independent components to realise the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to realise the input and output functions of the mobile terminal, which is not specifically limited here.
The interface unit 108 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (for example, data information or electric power) from an external device and to transfer the received input to one or more elements within the mobile terminal 100, or may be used to transmit data between the mobile terminal 100 and an external device.
The first memory 109 may be used to store software programs and various data. The first memory 109 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs required for at least one function (such as a sound-playing function and an image-playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the first memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The first processor 110 is the control centre of the mobile terminal. It connects the various parts of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the first memory 109 and by invoking the data stored in the first memory 109, thereby monitoring the mobile terminal as a whole. The first processor 110 may include one or more processing units; preferably, the first processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It will be understood that the modem processor may also not be integrated into the first processor 110.
The mobile terminal 100 may further include a power supply 111 (such as a battery) for supplying power to the various components. Preferably, the power supply 111 may be logically connected to the first processor 110 through a power management system, so as to implement functions such as charging management, discharging management, and power consumption management through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 may also include a Bluetooth module and the like, which are not described in detail here.
To facilitate understanding of the embodiments of the present invention, the communication network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communication network system provided by an embodiment of the present invention. The communication network system is an LTE system of universal mobile communication technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, which are communicatively connected in sequence.
Specifically, the UE 201 may be the terminal 100 described above, which is not described again here.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022. The eNodeB 2021 may be connected to the other eNodeBs 2022 through a backhaul (for example, an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME 2031 is a control node that handles signalling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 provides registers for managing functions such as the home location register (not shown) and stores user-specific information about service features, data rates, and the like. All user data may be sent through the SGW 2034, the PGW 2035 may provide IP address allocation and other functions for the UE 201, and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, selecting and providing available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP services 204 may include the Internet, an intranet, an IMS (IP Multimedia Subsystem), or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should understand that the present invention is applicable not only to the LTE system but also to other wireless communication systems, such as GSM (Global System of Mobile communication), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), and future new network systems, which are not limited here.
Based on the above mobile terminal hardware structure and communication network system, the method embodiments of the present invention are proposed below.
First embodiment
The first embodiment of the present invention proposes an image processing method, which can be applied to a terminal having a shooting function; for example, the terminal may use a front camera or a rear camera to shoot the current scene.
Fig. 3 is a block diagram of the electrical structure of the camera of the terminal according to an embodiment of the present invention. As shown in Fig. 3, a photographic lens 1211 is composed of a plurality of optical lenses for forming an image of a subject and may be a single-focus lens or a zoom lens. The photographic lens 1211 can be moved in the direction of the optical axis under the control of a lens driver 1221; the lens driver 1221 controls the focal position of the photographic lens 1211 according to a control signal from a lens driving control circuit 1222 and, in the case of a zoom lens, may also control the focal length. The lens driving control circuit 1222 performs drive control of the lens driver 1221 according to control commands from a microcomputer 1217.
A photographing element 1212 is arranged on the optical axis of the photographic lens 1211, near the position where the subject image is formed by the photographic lens 1211. The photographing element 1212 is used to capture the subject image and obtain image data. Photodiodes constituting the individual pixels are arranged two-dimensionally in a matrix on the photographing element 1212. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, and this current is accumulated as charge by a capacitor connected to each photodiode. The front surface of each pixel is provided with RGB colour filters in a Bayer arrangement.
The photographing element 1212 is connected to an imaging circuit 1213, which performs charge accumulation control and image signal readout control in the photographing element 1212, reduces reset noise in the read image signal (analogue image signal), performs waveform shaping, and then raises the gain to obtain an appropriate signal level. The imaging circuit 1213 is connected to an A/D converter 1214, which performs analogue-to-digital conversion on the analogue image signal and outputs a digital image signal (hereinafter referred to as image data) to a bus 1227.
The bus 1227 is a transmission path for transmitting the various data read or generated inside the camera. Connected to the bus 1227 are the above-mentioned A/D converter 1214, as well as an image processor 1215, a JPEG processor 1216, the microcomputer 1217, an SDRAM (Synchronous Dynamic Random Access Memory) 1218, a memory interface (hereinafter referred to as memory I/F) 1219, and an LCD (Liquid Crystal Display) driver 1220.
The image processor 1215 performs various kinds of image processing on the image data output from the photographing element 1212, such as OB subtraction, white balance adjustment, colour matrix operation, gamma conversion, colour difference signal processing, noise removal, simultaneous processing, and edge processing. The JPEG processor 1216 compresses the image data read from the SDRAM 1218 according to the JPEG compression method when recording the image data on a recording medium 1225. In addition, the JPEG processor 1216 decompresses JPEG image data for image reproduction and display: the file recorded on the recording medium 1225 is read, decompressed in the JPEG processor 1216, temporarily stored in the SDRAM 1218, and displayed on the LCD 1226. In the present embodiment, the JPEG method is adopted for image compression and decompression; however, the compression/decompression method is not limited thereto, and other methods such as MPEG, TIFF, and H.264 may of course be used.
The microcomputer 1217 functions as the control unit of the camera as a whole and collectively controls the various processing sequences of the camera. The microcomputer 1217 is connected to an operating unit 1223 and a flash memory 1224.
The operating unit 1223 includes, but is not limited to, physical buttons or virtual keys; these physical or virtual keys may be operational controls such as a power button, a camera button, an edit button, a dynamic image button, a reproduction button, a menu button, a cross key, an OK button, a delete button, an enlarge button, and various other input buttons and keys. The operating unit 1223 detects the operating states of these controls and outputs the detection results to the microcomputer 1217. In addition, a touch panel is provided on the front surface of the LCD 1226 serving as the display; it detects the touch position of the user and outputs the touch position to the microcomputer 1217. The microcomputer 1217 executes the processing sequences corresponding to the user's operation according to the detection results of the operating positions from the operating unit 1223.
The flash memory 1224 stores programs for executing the various processing sequences of the microcomputer 1217, and the microcomputer 1217 controls the camera as a whole according to these programs. In addition, the flash memory 1224 stores various adjustment values of the camera; the microcomputer 1217 reads the adjustment values and controls the camera according to them.
The SDRAM 1218 is an electrically rewritable volatile memory for temporarily storing image data and the like. The SDRAM 1218 temporarily stores the image data output from the A/D converter 1214 and the image data processed in the image processor 1215, the JPEG processor 1216, and the like.
The memory interface 1219 is connected to the recording medium 1225 and controls the writing of image data and the data, such as file headers, attached to the image data onto the recording medium 1225, as well as the reading of such data from the recording medium 1225. The recording medium 1225 is, for example, a recording medium such as a memory card that can be freely attached to and detached from the camera body; however, it is not limited thereto and may also be a hard disk or the like built into the camera body.
The LCD driver 1220 is connected to the LCD 1226. Image data processed by the image processor 1215 is stored in the SDRAM 1218, and when display is required, the stored image data is read and displayed on the LCD 1226; alternatively, image data compressed by the JPEG processor 1216 is stored in the SDRAM 1218, and when display is required, the JPEG processor 1216 reads the compressed image data from the SDRAM 1218 and decompresses it, and the decompressed image data is displayed on the LCD 1226.
The LCD 1226 is arranged on the back of the camera body and performs image display. The LCD 1226 is an LCD; however, it is not limited thereto, and various other display panels such as organic EL panels may also be used.
In the embodiment of the present invention, a camera application may be provided in the terminal; when the camera application is controlled to run, the camera of the terminal can be controlled to start at the same time. It will be understood that, when it is determined that the camera of the terminal has started, the current display interface of the terminal is controlled to switch to a shooting preview screen.
Optionally, the terminal described above may be a mobile terminal; the mobile terminal may be connected to the Internet, where the connection may be established via the mobile Internet provided by an operator or via a wireless access point.
Here, if the mobile terminal has an operating system, the operating system may be UNIX, Linux, Windows, Android, Windows Phone, or the like.
It should be noted that the type, shape, size, and the like of the display screen on the mobile terminal are not limited; for example, the display screen on the mobile terminal may be a liquid crystal display or the like.
The implementation of the embodiments of the present invention is described in detail below.
Fig. 4 is a flowchart of the image processing method according to an embodiment of the present invention. As shown in Fig. 4, the flow may include the following steps:
Step 401: obtain first image data containing a face image captured by a camera.
In practical applications, when the processor of the terminal receives a shooting instruction from the user, it controls the camera to shoot the current scene.
Here, as to how to judge whether a face image is contained in the image data: in one example, after the image data is obtained, a face recognition algorithm is applied to the image data to determine whether it contains a face image; in another example, before shooting with the camera of the terminal, the terminal may receive instruction information input by the user, and when the instruction information indicates that the image currently to be captured contains a face image, it is determined that the image data captured with the camera contains a face image.
Step 402: perform automatic white balance processing on the first image data to obtain second image data.
It will be understood that the human visual system has the property of colour constancy, so human observation of objects is not affected by the colour of the light source. An image sensor, however, does not have this colour constancy property; images taken under different lighting are therefore affected by the colour of the light source and change accordingly. For example, an image taken under a clear sky may have a bluish cast, while objects photographed under candlelight may appear reddish. To eliminate the influence of the light source colour on the imaging of the image sensor, the automatic white balance function simulates the colour constancy of the human visual system to remove the influence of the light source colour on the image.
In actual implementation, the terminal can store optimised automatic white balance adjustment schemes for various common light sources; the camera of the terminal can automatically detect the colour temperature of the subject according to the lighting conditions sensed through its lens and white balance sensor, judge the imaging conditions accordingly, select the closest tone setting, and automatically adjust the white balance to an appropriate position.
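The embodiment relies on preset per-light-source schemes and a white balance sensor; purely as a self-contained illustration of what automatic white balance does to the image data, the following minimal sketch uses the common grey-world assumption instead of the sensor-based scheme (the function name, array layout, and the grey-world choice are all assumptions, not part of the patent).

```python
import numpy as np

def gray_world_awb(rgb):
    """Minimal grey-world white balance sketch.

    rgb: H x W x 3 float array in [0, 1], channels in R, G, B order.
    Scales the R and B channels so their averages match the G average,
    which removes a global colour cast from the frame.
    """
    means = rgb.reshape(-1, 3).mean(axis=0)        # per-channel averages
    gains = means[1] / np.maximum(means, 1e-6)     # G gain is 1 by construction
    return np.clip(rgb * gains, 0.0, 1.0)
```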
Step 403: determine the face region of the second image data, and convert the RGB components of each pixel of the face region into YUV components.
Illustratively, for the second image data, a face detection algorithm may be used to determine the face region of the second image data.
In practice, face detection algorithms are mainly used as pre-processing for face recognition, i.e. they accurately locate the position and size of the face in the image. The pattern features contained in a face image are very rich, such as histogram features, colour features, template features, structural features, and Haar features. Face detection picks out the useful information among these features and uses it to detect faces. Illustratively, the face detection method uses the Adaboost learning algorithm based on the above features; Adaboost is a classification method that combines several weaker classifiers into a new, much stronger classifier.
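The text only states that a Haar-feature/Adaboost detector locates the face region. As one readily available implementation of that same cascade idea, the sketch below uses OpenCV's pretrained frontal-face cascade; it is an illustration under that assumption, not the detector used by the embodiment.

```python
import cv2

def detect_face_regions(bgr_image):
    """Locate face rectangles with a pretrained Haar/Adaboost cascade.

    bgr_image: H x W x 3 uint8 image (as returned by cv2.imread).
    Returns a list of (x, y, w, h) rectangles, one per detected face.
    """
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(face) for face in faces]
```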
It will be understood that, when image processing is performed, the components of the RGB colour space of the pixels of the image data can first be obtained; here, the components of the RGB colour space include the R component, the G component, and the B component.
In the embodiment of the present invention, in order to adjust the chrominance of the image, the components of the RGB colour space of the pixels of the image data can be converted into the components of the YUV colour space. The components of the YUV colour space include the Y component, the U component, and the V component, where the Y component is the luminance component, and the U and V components are the chrominance components, used to describe the colour and saturation of the image.
Illustratively, the conversion from the RGB colour space to the YUV colour space can be expressed by the following formulas:
Y = 0.299*R + 0.587*G + 0.114*B
U = -0.147*R - 0.289*G + 0.436*B
V = 0.615*R - 0.515*G - 0.100*B
where R, G, and B respectively denote the values of the R, G, and B components of the RGB colour space, and Y, U, and V respectively denote the values of the Y, U, and V components of the YUV colour space.
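A direct per-pixel implementation of the conversion formulas above might look as follows (a sketch; the array layout and the floating-point value range are assumptions):

```python
import numpy as np

# Coefficient matrix taken from the formulas above (rows: Y, U, V).
RGB_TO_YUV = np.array([
    [ 0.299,  0.587,  0.114],
    [-0.147, -0.289,  0.436],
    [ 0.615, -0.515, -0.100],
])

def rgb_to_yuv(rgb):
    """Convert an H x W x 3 float RGB array to YUV using the matrix above."""
    return rgb @ RGB_TO_YUV.T
```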
Step 404: for the second image data, adjust the chrominance components in the YUV components of each pixel of the face region according to a preset chrominance adjustment method for face regions.
Illustratively, the preset chrominance adjustment method for the face region can include: presetting a value range for the chrominance components of the pixels of the face region; when a chrominance component in the YUV components of a pixel of the face region of the second image data is within the preset value range, keeping that chrominance component unchanged; when a chrominance component in the YUV components of a pixel of the face region of the second image data is not within the preset value range, adjusting that chrominance component so that, after adjustment, it falls within the preset value range.
Here, the chrominance components described above can include at least one of the following: the U component and the V component.
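A minimal sketch of the preset-range adjustment just described: U and V values of face-region pixels that already lie inside the preset range are left unchanged, and values outside it are clamped to the nearest boundary. Clamping is only one simple way of bringing a component back into the range, and the range values shown are placeholders; the patent leaves both configurable.

```python
import numpy as np

def adjust_face_chroma(yuv, face_mask, u_range=(-0.05, 0.20), v_range=(0.02, 0.25)):
    """Clamp the U and V components of face-region pixels into preset ranges.

    yuv:       H x W x 3 array (Y, U, V), e.g. produced by rgb_to_yuv above.
    face_mask: H x W boolean array, True for pixels inside the face region.
    u_range, v_range: preset value ranges (placeholder numbers, configurable).
    """
    out = yuv.copy()
    out[..., 1][face_mask] = np.clip(out[..., 1][face_mask], *u_range)
    out[..., 2][face_mask] = np.clip(out[..., 2][face_mask], *v_range)
    return out
```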
In actual implementation, the terminal user may input the value ranges of the U and V components of the YUV components of the pixels of the face region into the terminal in advance. Fig. 5 is a schematic diagram of an interface of the terminal according to an embodiment of the present invention. As shown in Fig. 5, the terminal displays "Please input the value ranges of the U and V components" and shows four input boxes, used respectively to input the maximum and minimum values of the U component and the maximum and minimum values of the V component. After the user inputs the value ranges of the U and V components and clicks the confirm button, the terminal knows the value ranges of the chrominance components of the pixels of the face region.
Optionally, the value ranges of the chrominance components of the pixels of the face region may be set according to the chrominance components in the YUV components of historical image data whose face regions show no colour cast; in a specific implementation, whether the face region of the historical image data shows colour cast may be judged by the terminal user.
It should be noted that existing image processing methods for image data captured by a camera may, in addition to automatic white balance processing and colour correction, include steps such as test chart processing, black level correction, lens shading correction, dead pixel correction, green balance, noise removal, colour interpolation, and sharpening. The embodiments of the present invention describe automatic white balance processing and colour correction, but this is not intended to exclude steps such as test chart processing, black level correction, lens shading correction, dead pixel correction, green balance, noise removal, colour interpolation, and sharpening from the image processing flow.
It can be seen that the second image data includes a face region and a non-face region. In the embodiment of the present invention, the face region is adjusted using the preset chrominance adjustment method for face regions; since the chrominance adjustment method for the face region can be configured according to actual needs, colour cast in the face region of the image data can be avoided when chrominance adjustment is performed.
Second embodiment
To better illustrate the purpose of the present invention, further description is given on the basis of the first embodiment of the present invention.
In the second embodiment of the present invention, the flow of the image processing method may include the following steps:
Step A1: preset the chrominance adjustment method for the face region corresponding to each of various ambient light sources.
Here, the ambient light source may be a standard light source; standard light sources include, but are not limited to, the D75, D65, D50, TL84, A, and H light sources.
After image data is obtained by shooting with the camera, the chrominance adjustment method applied to the image data needs to be chosen according to the ambient light source. Therefore, in the embodiment of the present invention, the chrominance adjustment methods for the face region corresponding to various ambient light sources can be preset, so that, when chrominance adjustment is performed, the chrominance adjustment method matching the ambient light source corresponding to the image data can be determined.
Step A2: obtain the first image data containing a face image captured by the camera, and determine the ambient light source corresponding to the first image data.
Here, the implementation of obtaining the first image data has been explained in step 401 and is not repeated here.
As to the implementation of determining the ambient light source corresponding to the first image data, in one example a white balance algorithm can be used. Specifically, the white balance algorithm first collects statistics on the white points in the first image data: when the r/g and b/g of a pixel satisfy the grey-zone condition, the pixel is determined to be a white point falling in the grey zone, where r/g denotes the ratio of the R component of the pixel to its G component, and b/g denotes the ratio of the B component of the pixel to its G component. The average value of a specific parameter of the white points falling in the grey zone is then calculated, the difference between this average value and the value of the specific parameter at the reference point of each standard light source is computed, and the standard light source corresponding to the smallest difference is determined to be the ambient light source corresponding to the image. Here, the specific parameter may be a pixel value, a luminance component, a chrominance component, or the like.
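A sketch of the white-point statistics just described, assuming the "specific parameter" is the (r/g, b/g) chromaticity itself; the grey-zone bounds and the reference points for the standard light sources are made-up placeholder values, since the patent does not specify them.

```python
import numpy as np

# Hypothetical (r/g, b/g) reference points for a few standard light sources.
LIGHT_SOURCE_REFS = {
    "D65":  (1.00, 1.00),
    "D50":  (1.10, 0.85),
    "TL84": (1.05, 0.80),
    "A":    (1.45, 0.55),
}

def estimate_light_source(rgb, low=0.7, high=1.4):
    """Return the standard light source whose reference point is closest to the
    average (r/g, b/g) of the pixels falling inside the grey zone."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g = np.maximum(g, 1e-6)
    rg, bg = r / g, b / g
    grey = (rg > low) & (rg < high) & (bg > low) & (bg < high)   # grey-zone condition
    if not grey.any():
        return None
    mean_rg, mean_bg = rg[grey].mean(), bg[grey].mean()
    return min(LIGHT_SOURCE_REFS,
               key=lambda name: abs(LIGHT_SOURCE_REFS[name][0] - mean_rg)
                              + abs(LIGHT_SOURCE_REFS[name][1] - mean_bg))
```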
Step A3: the implementation of this step is the same as that of step 402 and is not repeated here.
Step A4: the implementation of this step is the same as that of step 403 and is not repeated here.
Step A5: select, from the chrominance adjustment methods for face regions corresponding to the various ambient light sources, the chrominance adjustment method for the face region corresponding to the determined ambient light source; for the second image data, adjust the chrominance components in the YUV components of each pixel of the face region according to the selected chrominance adjustment method for the face region.
It can be seen that, in the embodiment of the present invention, the chrominance adjustment methods for the face region corresponding to various ambient light sources can be preset; in this way, when chrominance adjustment is performed on the face region of the second image data, the chrominance adjustment method for the face region corresponding to the determined ambient light source can be selected, which improves the effect of the chrominance adjustment.
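One way the per-light-source presets described in this embodiment might be organised is a simple lookup table keyed by the determined light source. The table contents below are placeholders, and the sketch reuses the hypothetical adjust_face_chroma helper sketched in the first embodiment.

```python
# Hypothetical preset table: ambient light source -> face-region U/V ranges.
FACE_CHROMA_PRESETS = {
    "D65":  {"u_range": (-0.05, 0.18), "v_range": (0.03, 0.22)},
    "D50":  {"u_range": (-0.04, 0.20), "v_range": (0.04, 0.24)},
    "TL84": {"u_range": (-0.06, 0.16), "v_range": (0.02, 0.20)},
    "A":    {"u_range": (-0.03, 0.22), "v_range": (0.05, 0.28)},
}

def adjust_face_for_light_source(yuv, face_mask, light_source):
    """Select the preset matching the determined light source and apply it."""
    preset = FACE_CHROMA_PRESETS.get(light_source, FACE_CHROMA_PRESETS["D65"])
    return adjust_face_chroma(yuv, face_mask, **preset)
```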
Third embodiment
To better illustrate the purpose of the present invention, further description is given on the basis of the first and second embodiments of the present invention.
In the third embodiment of the present invention, the flow of the image processing method may include the following steps:
Step B1: the implementation of this step is the same as that of step 401 and is not repeated here.
Step B2: perform automatic white balance processing on the first image data to obtain second image data, and determine the non-face region of the second image data.
Here, the implementation of the automatic white balance processing on the first image data has been explained in step 402 and is not repeated here.
In practical applications, the face region in the second image data can be obtained according to the face detection algorithm described above, and the non-face region of the second image data is then obtained.
Step B3: the implementation of this step is the same as that of step 403 and is not repeated here.
Step B4: for the second image data, adjust the chrominance components in the YUV components of each pixel of the face region according to the preset chrominance adjustment method for face regions; and, for the second image data, adjust the chrominance components of each pixel of the non-face region according to the preset chrominance adjustment method for non-face regions.
Here, the implementation of the chrominance adjustment method for the face region of the second image data has been explained in step 404 and is not repeated here.
Optionally, the preset chrominance adjustment method for the non-face region may perform chrominance adjustment directly on the RGB components of the pixels of the non-face region, or may first convert the RGB components of the pixels of the non-face region into YUV components and then perform chrominance adjustment on the YUV components of the pixels of the non-face region.
Optionally, the chrominance adjustment methods for the non-face region corresponding to various ambient light sources may be preset; after the first image data is obtained, the ambient light source corresponding to the first image data is determined.
The implementation of presetting the chrominance adjustment methods for the non-face region corresponding to various ambient light sources may refer to the implementation, described in the second embodiment of the present invention, of presetting the chrominance adjustment methods for the face region corresponding to various ambient light sources, and is not repeated here.
In addition, the implementation of determining the ambient light source corresponding to the first image data has been explained in the second embodiment of the present invention.
Correspondingly, the chrominance adjustment method for the non-face region corresponding to the determined ambient light source may be selected from the chrominance adjustment methods for non-face regions corresponding to the various ambient light sources; for the second image data, the chrominance components of each pixel of the non-face region are adjusted according to the selected chrominance adjustment method for the non-face region.
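Putting the third embodiment together, the sketch below performs one full pass over the second image data, clamping face-region pixels to one preset U/V range and non-face pixels to a different (here wider) range. Operating on YUV for the non-face region is only one of the two options the text allows (the adjustment could equally be done on RGB), and all range values are placeholders.

```python
import numpy as np

def process_second_image(yuv, face_mask,
                         face_u=(-0.05, 0.20), face_v=(0.02, 0.25),
                         other_u=(-0.40, 0.40), other_v=(-0.40, 0.40)):
    """Apply separate preset chroma ranges to face and non-face pixels."""
    out = yuv.copy()
    non_face = ~face_mask
    out[..., 1][face_mask] = np.clip(out[..., 1][face_mask], *face_u)
    out[..., 2][face_mask] = np.clip(out[..., 2][face_mask], *face_v)
    out[..., 1][non_face] = np.clip(out[..., 1][non_face], *other_u)
    out[..., 2][non_face] = np.clip(out[..., 2][non_face], *other_v)
    return out
```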
Fourth embodiment
Based on the same technical concept as the preceding embodiments, a fourth embodiment of the present invention provides a terminal having a shooting function; for example, the terminal may use a front camera or a rear camera to shoot the current scene.
In the embodiment of the present invention, a camera application may be provided in the terminal; when the camera application is controlled to run, the camera of the terminal can be controlled to start at the same time. It will be understood that, when it is determined that the camera of the terminal has started, the current display interface of the terminal is controlled to switch to a shooting preview screen.
Optionally, the terminal described above may be a mobile terminal; the mobile terminal may be connected to the Internet, where the connection may be established via the mobile Internet provided by an operator or via a wireless access point.
Here, if the mobile terminal has an operating system, the operating system may be UNIX, Linux, Windows, Android, Windows Phone, or the like.
It should be noted that the type, shape, size, and the like of the display screen on the mobile terminal are not limited; for example, the display screen on the mobile terminal may be a liquid crystal display or the like.
Referring to Fig. 6, which illustrates a terminal 60 provided by an embodiment of the present invention, the terminal may include a second memory 601, a second processor 602, a camera 603, and a computer program stored on the second memory and executable on the second processor, where
the computer program, when executed by the second processor, implements the following steps:
obtaining first image data containing a face image captured by the camera;
performing automatic white balance processing on the first image data to obtain second image data;
determining the face region of the second image data, and converting the RGB components of each pixel of the face region into YUV components;
for the second image data, adjusting the chrominance components in the YUV components of each pixel of the face region according to a preset chrominance adjustment method for face regions.
In practical applications, the second memory 601 may be a volatile memory, such as a random access memory (RAM, Random-Access Memory); or a non-volatile memory, such as a read-only memory (ROM, Read-Only Memory), a flash memory, a hard disk drive (HDD, Hard Disk Drive), or a solid-state drive (SSD, Solid-State Drive); or a combination of memories of the above kinds, and it provides instructions and data to the second processor 602.
The second processor 602 may be at least one of an application-specific integrated circuit (ASIC, Application Specific Integrated Circuit), a digital signal processor (DSP, Digital Signal Processor), a digital signal processing device (DSPD, Digital Signal Processing Device), a programmable logic device (PLD, Programmable Logic Device), a field programmable gate array (FPGA, Field Programmable Gate Array), a central processing unit (CPU, Central Processing Unit), a controller, a microcontroller, and a microprocessor. It will be understood that, for different devices, the electronic device used to implement the functions of the second processor may also be something else, which is not specifically limited in the embodiment of the present invention.
In practical applications, when the processor of terminal receives the shooting instruction of user, control camera is to working as front court
Scape is shot.
Here, for how to judge whether comprising facial image in a kind of example, obtaining to scheme in image data
As after data, being detected using face recognition algorithms to image data, and then whether judge in image data comprising face figure
Picture;In another example, it can receive instruction input by user before using the shooting of the camera of terminal using terminal and believe
Breath when the current image to be captured of instruction information instruction input by user includes facial image, determines the use of camera shooting
Image data include facial image.
It can be understood that the human visual system has the property of color constancy, so human observation of objects is largely unaffected by the color of the light source. An image sensor, however, has no such color constancy, and images captured under different illuminants therefore shift with the color of the light source. For example, an image shot on a sunny day may appear bluish, while an object shot under candlelight may appear reddish. The automatic white balance function exists precisely to eliminate the influence of the light-source color on image sensor imaging by simulating the color constancy of the human visual system.
In actual implementation, the terminal may store optimized automatic white balance adjustment schemes for various common light sources. According to the lighting conditions seen through its lens and white balance sensor, the camera of the terminal can automatically detect the color temperature of the subject, judge the shooting conditions accordingly, select the closest tone setting, and thereby adjust the white balance to a suitable position automatically.
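As a purely illustrative, non-authoritative sketch (the document does not specify which automatic white balance algorithm is used), the following Python snippet shows a simple gray-world style white balance; the function name and the gray-world assumption are illustrative additions, not the scheme described above.

```python
import numpy as np

def gray_world_awb(rgb):
    """Gray-world AWB sketch. rgb: float array of shape (H, W, 3), values in [0, 1]."""
    means = rgb.reshape(-1, 3).mean(axis=0)      # per-channel means over the frame
    gains = means[1] / np.maximum(means, 1e-6)   # normalise each channel to the G mean
    return np.clip(rgb * gains, 0.0, 1.0)        # apply per-channel gains
```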
Illustratively, for the second image data, a face detection algorithm may be used to determine the human face region of the second image data.
In practice, face detection is mainly used as pre-processing for face recognition, that is, to accurately locate the position and size of the face in the image. Facial images contain very rich pattern features, such as histogram features, color features, template features, structural features and Haar features. Face detection picks out the useful information among these and uses these features to detect faces. Illustratively, a face detection method may apply the AdaBoost learning algorithm on top of the above features; AdaBoost is a classification method that combines several weak classifiers into a new, much stronger classifier.
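As a hedged illustration of such feature-based detection, the sketch below uses OpenCV's pretrained Haar/AdaBoost frontal-face cascade to obtain candidate face regions; the cascade file and detection parameters are OpenCV defaults, not values taken from this document.

```python
import cv2

def detect_face_regions(bgr_image):
    """Return detected face rectangles (possibly empty) for a BGR frame."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Each detection is an (x, y, w, h) rectangle in pixel coordinates.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```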
It can be understood that, when performing image processing, the components of each pixel of the image data in the RGB color space are obtained first; the components of the RGB color space include the R component, the G component and the B component.
In the embodiment of the present invention, in order to adjust the chrominance of the image, the RGB components of each pixel of the image data may be converted into the components of the YUV color space. The components of the YUV color space include the Y component, the U component and the V component, where the Y component is the luminance component, and the U and V components are the chrominance components, describing the color and saturation of the image.
Illustratively, the conversion from the RGB color space to the YUV color space can be expressed by the following formulas:
Y=0.299*R+0.587*G+0.114*B
U=-0.147*R-0.289*G+0.436*B
V=0.615*R-0.515*G-0.100*B
where R, G and B respectively denote the values of the R, G and B components in the RGB color space, and Y, U and V respectively denote the values of the Y, U and V components in the YUV color space.
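The conversion above can be written as a single matrix multiplication; the following NumPy sketch uses the coefficients exactly as given in the formulas, with only the function and variable names added for illustration.

```python
import numpy as np

# Coefficient matrix taken directly from the formulas above.
RGB_TO_YUV = np.array([
    [ 0.299,  0.587,  0.114],   # Y
    [-0.147, -0.289,  0.436],   # U
    [ 0.615, -0.515, -0.100],   # V
])

def rgb_to_yuv(rgb):
    """rgb: array of shape (..., 3); returns YUV components with the same shape."""
    return rgb @ RGB_TO_YUV.T
```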
Illustratively, the preset chrominance adjustment method for the human face region may include:
presetting a value range for the chrominance components of the pixels of the human face region; when a chrominance component in the YUV components of a pixel of the human face region of the second image data is within the preset value range, keeping that chrominance component unchanged; when a chrominance component in the YUV components of a pixel of the human face region of the second image data is outside the preset value range, adjusting that chrominance component so that, after adjustment, the chrominance components in the YUV components of the pixels of the human face region of the second image data fall within the preset value range.
Here, the chrominance components mentioned above may include at least one of the following: the U component and the V component.
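A minimal sketch of this range check is given below, assuming the face region is available as a boolean mask; the concrete U and V range values are hypothetical placeholders, since the document does not give numeric values.

```python
import numpy as np

# Hypothetical preset ranges; not values from this document.
U_RANGE = (-30.0, 10.0)
V_RANGE = (5.0, 40.0)

def adjust_face_chroma(yuv, face_mask):
    """yuv: (H, W, 3) array; face_mask: (H, W) boolean mask of face pixels."""
    out = yuv.copy()
    u, v = out[..., 1], out[..., 2]                   # views into the copy
    u[face_mask] = np.clip(u[face_mask], *U_RANGE)    # in-range values stay unchanged
    v[face_mask] = np.clip(v[face_mask], *V_RANGE)
    return out
```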
In actual implementation, the terminal user may input the value ranges of the U and V components of the pixels of the human face region into the terminal in advance. Fig. 5 is a schematic diagram of an interface of the terminal according to the embodiment of the present invention. As shown in Fig. 5, the terminal displays "please input the value ranges of the U component and the V component" together with four input boxes, used respectively to input the maximum and minimum values of the U component and the maximum and minimum values of the V component. After the user inputs the value ranges of the U and V components and taps the confirm button, the terminal knows the value ranges of the chrominance components of the pixels of the human face region.
Optionally, the value range of the chrominance components of the pixels of the human face region may be set according to the chrominance components in the YUV components of historical image data whose human face region shows no color cast; in a specific implementation, whether the human face region of the historical image data shows a color cast may be judged by the terminal user.
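One possible way to derive such a range from history images is sketched below; taking low/high percentiles of the reference U and V values is an assumption of this sketch, not a method stated in the document.

```python
import numpy as np

def derive_chroma_range(reference_faces_yuv, lo=1.0, hi=99.0):
    """reference_faces_yuv: list of (N_i, 3) arrays of face-region YUV pixels
    taken from history images judged to have no color cast."""
    pixels = np.concatenate(reference_faces_yuv, axis=0)
    u_range = tuple(np.percentile(pixels[:, 1], [lo, hi]))
    v_range = tuple(np.percentile(pixels[:, 2], [lo, hi]))
    return u_range, v_range
```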
It should be noted that, in existing image processing methods for image data captured by a camera, besides automatic white balance processing and color correction, steps such as resolution-chart calibration, black-level correction, lens shading correction, dead-pixel correction, green balance, noise removal, color interpolation and sharpening may also be included. The embodiment of the present invention is described with respect to automatic white balance processing and color correction, and this is not intended to exclude the above steps from the image processing procedure.
Optionally, the computer program, when executed by the processor, further implements the following steps:
presetting chrominance adjustment methods for the human face region corresponding to various ambient light sources;
after the first image data is obtained, determining the ambient light source corresponding to the first image data;
correspondingly, the computer program, when executed by the processor, specifically implements the following steps:
selecting, from the chrominance adjustment methods for the human face region corresponding to the various ambient light sources, the chrominance adjustment method for the human face region corresponding to the determined ambient light source; and, for the second image data, adjusting the chrominance components in the YUV components of each pixel of the human face region according to the selected chrominance adjustment method for the human face region.
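A small sketch of this per-light-source selection is given below; the light-source labels and range values are hypothetical placeholders, not presets specified by this document.

```python
# Hypothetical per-light-source presets for the face region.
FACE_CHROMA_METHODS = {
    "daylight":     {"u_range": (-25.0, 10.0), "v_range": (8.0, 35.0)},
    "incandescent": {"u_range": (-35.0,  5.0), "v_range": (12.0, 45.0)},
    "fluorescent":  {"u_range": (-30.0,  8.0), "v_range": (5.0, 38.0)},
}

def select_face_method(light_source):
    # Fall back to the daylight preset when the detected light source is unknown.
    return FACE_CHROMA_METHODS.get(light_source, FACE_CHROMA_METHODS["daylight"])
```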
Optionally, the computer program, when executed by the processor, further implements the following steps:
after the second image data is obtained, determining the non-face region of the second image data;
for the second image data, adjusting the chrominance components of each pixel of the non-face region according to a preset chrominance adjustment method for the non-face region.
Optionally, the computer program, when executed by the processor, further implements the following steps:
presetting chrominance adjustment methods for the non-face region corresponding to various ambient light sources; after the first image data is obtained, determining the ambient light source corresponding to the first image data;
correspondingly, the computer program, when executed by the processor, specifically implements the following steps:
selecting, from the chrominance adjustment methods for the non-face region corresponding to the various ambient light sources, the chrominance adjustment method for the non-face region corresponding to the determined ambient light source; and, for the second image data, adjusting the chrominance components of each pixel of the non-face region according to the selected chrominance adjustment method for the non-face region.
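Putting the two regions together, the following sketch applies a separate (hypothetical) method to the face and non-face pixels, reusing the helpers sketched above; it is an illustration of the face / non-face split, not the document's implementation.

```python
import numpy as np

def adjust_frame(yuv, face_mask, face_method, background_method):
    """Clamp U/V per region; each method is a dict with "u_range" and "v_range"."""
    out = yuv.copy()
    for mask, method in ((face_mask, face_method), (~face_mask, background_method)):
        u, v = out[..., 1], out[..., 2]
        u[mask] = np.clip(u[mask], *method["u_range"])
        v[mask] = np.clip(v[mask], *method["v_range"])
    return out
```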
Optionally, the chrominance components include at least one of the following: the U component and the V component.
Fifth embodiment
Based on the same technical concept as the preceding embodiments, a fifth embodiment of the present invention provides a computer-readable storage medium, which can be applied in a terminal. The technical solution of the preceding embodiments, in essence, or the part thereof that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a computer-readable storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) or a processor to perform all or part of the steps of the method of this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disc.
Specifically, the computer program instructions corresponding to the image processing method in this embodiment may be stored on a storage medium such as an optical disc, a hard disk or a USB flash drive. When the computer program instructions corresponding to the image processing method are read from the storage medium or executed by an electronic device, at least one processor is caused to perform the steps of any one of the image processing methods described in the foregoing embodiments.
The above embodiments of the present invention are described for illustration only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus the necessary general-purpose hardware platform, and may of course also be implemented by hardware, but in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present invention, in essence, or the part thereof that contributes to the prior art, may be embodied in the form of a software product. The software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device or the like) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above specific implementations. The above specific implementations are merely illustrative rather than restrictive; under the inspiration of the present invention, those of ordinary skill in the art can devise many further forms without departing from the concept of the present invention and the scope of protection of the claims, and all of these fall within the protection of the present invention.
Claims (10)
1. An image processing method, characterized in that the method comprises:
obtaining, by means of a camera, first image data containing a facial image;
performing automatic white balance processing on the first image data to obtain second image data;
determining a human face region of the second image data, and converting RGB components of each pixel of the human face region into YUV components;
for the second image data, adjusting chrominance components in the YUV components of each pixel of the human face region according to a preset chrominance adjustment method for the human face region.
2. The method according to claim 1, characterized in that the method further comprises:
presetting chrominance adjustment methods for the human face region corresponding to various ambient light sources;
after obtaining the first image data, determining the ambient light source corresponding to the first image data;
correspondingly, said adjusting, for the second image data, the chrominance components in the YUV components of each pixel of the human face region according to the preset chrominance adjustment method for the human face region comprises:
selecting, from the chrominance adjustment methods for the human face region corresponding to the various ambient light sources, the chrominance adjustment method for the human face region corresponding to the determined ambient light source; and, for the second image data, adjusting the chrominance components in the YUV components of each pixel of the human face region according to the selected chrominance adjustment method for the human face region.
3. The method according to claim 1, characterized in that, after the second image data is obtained, the method further comprises:
determining a non-face region of the second image data;
for the second image data, adjusting chrominance components of each pixel of the non-face region according to a preset chrominance adjustment method for the non-face region.
4. The method according to claim 3, characterized in that the method further comprises:
presetting chrominance adjustment methods for the non-face region corresponding to various ambient light sources;
after obtaining the first image data, determining the ambient light source corresponding to the first image data;
correspondingly, said adjusting, for the second image data, the chrominance components of each pixel of the non-face region according to the preset chrominance adjustment method for the non-face region comprises:
selecting, from the chrominance adjustment methods for the non-face region corresponding to the various ambient light sources, the chrominance adjustment method for the non-face region corresponding to the determined ambient light source; and, for the second image data, adjusting the chrominance components of each pixel of the non-face region according to the selected chrominance adjustment method for the non-face region.
5. The method according to claim 1, characterized in that the chrominance components comprise at least one of the following: a U component and a V component.
6. A terminal, characterized in that the terminal comprises a camera, a memory, a processor, and a computer program stored on the memory and runnable on the processor; wherein the computer program, when executed by the processor, implements the following steps:
obtaining, by means of the camera, first image data containing a facial image;
performing automatic white balance processing on the first image data to obtain second image data;
determining a human face region of the second image data, and converting RGB components of each pixel of the human face region into YUV components;
for the second image data, adjusting chrominance components in the YUV components of each pixel of the human face region according to a preset chrominance adjustment method for the human face region.
7. The terminal according to claim 6, characterized in that the computer program, when executed by the processor, further implements the following steps:
presetting chrominance adjustment methods for the human face region corresponding to various ambient light sources;
after obtaining the first image data, determining the ambient light source corresponding to the first image data;
correspondingly, the computer program, when executed by the processor, specifically implements the following steps:
selecting, from the chrominance adjustment methods for the human face region corresponding to the various ambient light sources, the chrominance adjustment method for the human face region corresponding to the determined ambient light source; and, for the second image data, adjusting the chrominance components in the YUV components of each pixel of the human face region according to the selected chrominance adjustment method for the human face region.
8. The terminal according to claim 6, characterized in that the computer program, when executed by the processor, further implements the following steps:
after the second image data is obtained, determining a non-face region of the second image data;
for the second image data, adjusting chrominance components of each pixel of the non-face region according to a preset chrominance adjustment method for the non-face region.
9. The terminal according to claim 8, characterized in that the computer program, when executed by the processor, further implements the following steps:
presetting chrominance adjustment methods for the non-face region corresponding to various ambient light sources; after obtaining the first image data, determining the ambient light source corresponding to the first image data;
correspondingly, the computer program, when executed by the processor, specifically implements the following steps:
selecting, from the chrominance adjustment methods for the non-face region corresponding to the various ambient light sources, the chrominance adjustment method for the non-face region corresponding to the determined ambient light source; and, for the second image data, adjusting the chrominance components of each pixel of the non-face region according to the selected chrominance adjustment method for the non-face region.
10. A computer-readable storage medium, applied in a terminal, characterized in that the computer-readable storage medium stores a computer program which, when executed by at least one processor, causes the at least one processor to perform the steps of the method according to any one of claims 1 to 5.