CN106791365A - Facial image preview processing method and processing device - Google Patents

Facial image preview processing method and processing device

Info

Publication number
CN106791365A
CN106791365A (application CN201611051275.3A)
Authority
CN
China
Prior art keywords
face
facial image
facial symmetry
rotation amplitude
beauty level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611051275.3A
Other languages
Chinese (zh)
Inventor
邱情
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201611051275.3A
Publication of CN106791365A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • G06T5/77
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image

Abstract

The invention discloses a facial image preview processing device applied to a mobile terminal with a camera function. The device includes: a symmetry detection module, which detects in real time the facial image in the photographing preview interface and calculates the user's current rotation amplitude level from the facial symmetry axis; a beauty adjustment module, which selects a corresponding beauty level according to the rotation amplitude level; and a beauty display module, which performs a beauty operation on the detected facial image according to the selected beauty level and displays the facial image after the beauty operation. By calculating the user's rotation amplitude level and selecting a corresponding beauty level, the beauty effect of the facial image within the preview range changes gradually as the rotation amplitude changes, which avoids abrupt jumps in the beauty effect of faces shown in the preview range and makes the display more natural.

Description

Facial image preview processing method and processing device
Technical field
The present invention relates to the field of communication technology, and in particular to a facial image preview processing method and device for a mobile terminal capable of taking pictures.
Background technology
Mobile terminals such as mobile phones are owned by more and more users, and people rely on them ever more heavily, which places ever higher requirements on the diversity and intelligence of mobile terminal functions. The camera function of mobile terminals is increasingly favored by users, and self-portrait (selfie) shooting in particular is loved by many, so users expect ever smarter self-portrait functions. At present, the self-portrait interface of a mobile terminal usually carries a beauty (face-beautification) function, but this function can behave unstably. For example, the beauty effect may differ noticeably between successive pictures, or the user can still see a face in the preview interface while, because of system delay or a face-recognition failure caused by an algorithm error, the preview is displayed without any beauty effect, so the interface suddenly looks darker (i.e., the skin tone appears darker). Such sudden changes in the beauty effect make the experience uncomfortable for the user.
Therefore, it is necessary to provide a facial image preview processing method and device that avoid the above situations and improve the user experience.
Summary of the invention
The main object of the present invention is to provide a facial image preview processing method and device, aiming to solve the problem in the prior art of abrupt changes in the beauty effect when a mobile terminal takes pictures.
To achieve the above object, the present invention provides a facial image preview processing device applied to a mobile terminal capable of taking pictures, the device including:
a symmetry detection module, configured to detect in real time the facial image in the photographing preview interface and calculate the user's current rotation amplitude level from the facial symmetry axis;
a beauty adjustment module, configured to select a corresponding beauty level according to the rotation amplitude level;
a beauty display module, configured to perform a beauty operation on the detected facial image according to the selected beauty level and to display the facial image after the beauty operation.
Optionally, the symmetry detection module is further configured to:
determine the facial symmetry axis from the currently detected facial image when the face is detected to be facing the camera directly.
Optionally, the symmetry detection module specifically includes:
a face detection unit, configured to detect in real time the facial image in the photographing preview interface and to determine each facial local region in the facial image;
a symmetry calculation unit, configured to determine the rotation amplitude level according to the change in distance between the center of each facial local region and the facial symmetry axis.
Optionally, the facial image preview processing device further includes:
a beauty preset module, configured to preset the correspondence between the rotation amplitude levels and the beauty levels, wherein the higher the rotation amplitude level, the lower the corresponding beauty level.
Optionally, the facial image preview processing device further includes:
a preview correction module, configured to, when no face can be detected in the current frame but a face was detected in the previous frame, compare the current frame with the previous frame and correct the image data of the current frame according to the comparison result.
In addition, to achieve the above object, the present invention also provides a facial image preview processing method applied to a mobile terminal capable of taking pictures, the method including:
detecting in real time the facial image in the photographing preview interface and calculating the user's current rotation amplitude level from the facial symmetry axis;
selecting a corresponding beauty level according to the rotation amplitude level;
performing a beauty operation on the detected facial image according to the selected beauty level and displaying the facial image after the beauty operation.
Optionally, the method further includes:
determining the facial symmetry axis from the currently detected facial image when the face is detected to be facing the camera directly.
Optionally, detecting in real time the facial image in the photographing preview interface and calculating the user's current rotation amplitude level from the facial symmetry axis specifically includes:
detecting in real time the facial image in the photographing preview interface and determining each facial local region in the facial image;
determining the rotation amplitude level according to the change in distance between the center of each facial local region and the facial symmetry axis.
Optionally, before detecting in real time the facial image in the photographing preview interface, the method further includes:
presetting the correspondence between the rotation amplitude levels and the beauty levels, wherein the higher the rotation amplitude level, the lower the corresponding beauty level.
Optionally, the method further includes:
when no face can be detected in the current frame but a face can be detected in the previous frame, comparing the current frame with the previous frame and correcting the image data of the current frame according to the comparison result.
With the facial image preview processing method and device proposed by the present invention, the mobile terminal detects in real time the facial image in the photographing preview interface and calculates the user's current rotation amplitude level from the facial symmetry axis; selects a corresponding beauty level according to the rotation amplitude level; and performs a beauty operation on the detected facial image according to the selected beauty level and displays the facial image after the beauty operation. By calculating the user's rotation amplitude level and selecting a corresponding beauty level, the beauty effect of the facial image within the preview range changes gradually as the rotation amplitude changes, which avoids abrupt jumps in the beauty effect of faces shown in the preview range and makes the display more natural.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for implementing the embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system of the mobile terminal shown in Fig. 1;
Fig. 3 is a module diagram of the facial image preview processing device provided by the first embodiment of the present invention;
Fig. 4 is another module diagram of the facial image preview processing device provided by the first embodiment of the present invention;
Fig. 5 is a further module diagram of the facial image preview processing device provided by the first embodiment of the present invention;
Fig. 6 is an exemplary diagram of the beauty preset interface of the mobile terminal camera in the present invention;
Fig. 7 is another exemplary diagram of the beauty preset interface of the mobile terminal camera in the present invention;
Fig. 8 is a schematic flowchart of the facial image preview processing method provided by the second embodiment of the present invention;
Fig. 9 is a detailed flowchart of step 800 in the second embodiment of the present invention.
The realization of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
A mobile terminal implementing the embodiments of the present invention will now be described with reference to the drawings. In the following description, suffixes such as "module", "part" or "unit" used to denote elements are adopted only to facilitate the description of the invention and have no specific meaning in themselves. Therefore, "module" and "part" can be used interchangeably.
A mobile terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following it is assumed that the terminal is a mobile terminal. However, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the construction according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for implementing the embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190 and the like. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114 and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and transmits them to the terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal and the like, and may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast-related information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast-related information may exist in various forms, for example in the form of an electronic program guide (EPG) of digital multimedia broadcasting (DMB) or an electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H). The broadcast receiving module 111 may receive broadcasts by using various types of broadcast systems; in particular, it may receive digital broadcasts by using digital broadcasting systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the data broadcasting system of the forward link media (MediaFLO), integrated services digital broadcasting-terrestrial (ISDB-T) and the like. The broadcast receiving module 111 may be constructed to be suitable for the various broadcast systems providing broadcast signals as well as the above-mentioned digital broadcasting systems. The broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access for the mobile terminal and may be internally or externally coupled to the terminal. The wireless Internet access technologies involved in this module may include WLAN (Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high speed downlink packet access) and the like.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™ and the like.
The location information module 115 is a module for checking or obtaining the position information of the mobile terminal. A typical example of the location information module is GPS (global positioning system). According to current technology, the GPS module 115 calculates distance information from three or more satellites together with accurate time information and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current position information in terms of longitude, latitude and altitude. Currently, the method for calculating position and time information uses three satellites and corrects the errors of the calculated position and time information by using another satellite. In addition, the GPS module 115 can calculate speed information by continuously calculating the current position information in real time.
The A/V input unit 120 is used to receive audio or video signals. The A/V input unit 120 may include a microphone 122, which can receive sound (audio data) in an operating mode such as a phone call mode, a recording mode or a voice recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format that can be transmitted to a mobile communication base station via the mobile communication module 112 and output. The microphone 122 may implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) the noise or interference produced while receiving and transmitting audio signals.
The user input unit 130 may generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (for example, a touch-sensitive component that detects changes in resistance, pressure, capacitance and the like caused by a touch), a jog wheel, a jog switch and the like. In particular, when the touch pad is superposed on the display unit 151 in the form of a layer, a touch screen may be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (for example, the open or closed state of the mobile terminal 100), the position of the mobile terminal 100, the presence or absence of contact by the user with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the acceleration or deceleration movement and direction of the mobile terminal 100 and the like, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 can sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power and whether the interface unit 170 is coupled to an external device.
The interface unit 170 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and the like. The identification module may store various information for verifying the user of the mobile terminal 100 and may include a user identification module (UIM), a subscriber identification module (SIM), a universal subscriber identification module (USIM) and the like. In addition, the device having the identification module (hereinafter referred to as an "identification device") may take the form of a smart card, so the identification device can be connected with the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data information, electric power, etc.) from an external device and to transfer the received input to one or more elements within the mobile terminal 100, or may be used to transfer data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. The various command signals or the power input from the cradle may serve as signals for recognizing whether the mobile terminal is correctly mounted on the cradle. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153 and the like.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a user interface (UI) or a graphical user interface (GUI) related to the call or other communication (such as text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 may display captured and/or received images, a UI or GUI showing the video or image and related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are superposed on each other in the form of a layer to form a touch screen, the display unit 151 can serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display and the like. Some of these displays may be constructed to be transparent to allow the user to view through them from the outside; these may be called transparent displays, and a typical transparent display may be, for example, a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 152 may, when the mobile terminal is in a mode such as a call signal receiving mode, a call mode, a recording mode, a voice recognition mode or a broadcast receiving mode, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal receiving sound, a message receiving sound, etc.). The audio output module 152 may include a loudspeaker, a buzzer and the like.
The alarm unit 153 may provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input and so on. In addition to audio or video output, the alarm unit 153 may provide output in a different manner to notify the occurrence of an event. For example, the alarm unit 153 may provide output in the form of vibration: when a call, a message or some other incoming communication is received, the alarm unit 153 may provide a tactile output (i.e., vibration) to notify the user. By providing such a tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 may also provide the output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 may store software programs for the processing and control operations performed by the controller 180 and the like, or may temporarily store data that has been output or is to be output (for example, a phone book, messages, still images, videos, etc.). Moreover, the memory 160 may store data on the various modes of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk and the like. Moreover, the mobile terminal 100 may cooperate, through a network connection, with a network storage device that performs the storage function of the memory 160.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls and the like. In addition, the controller 180 may include a multimedia module 1810 for reproducing (or playing back) multimedia data; the multimedia module 1810 may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 may also perform pattern recognition processing to recognize handwriting input or picture drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides the appropriate power required to operate each element and component.
The various implementations described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the implementations described herein may be implemented by using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such an implementation may be realized in the controller 180. For a software implementation, an implementation such as a process or a function may be realized with a separate software module that allows at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any appropriate programming language, and the software code may be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. In the following, for the sake of brevity, a slide-type mobile terminal among the various types of mobile terminals such as folder-type, bar-type, swing-type and slide-type mobile terminals will be used as an example. The present invention can nevertheless be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
The mobile terminal 100 shown in Fig. 1 may be constructed to operate with wired and wireless communication systems that transmit data via frames or packets as well as with satellite-based communication systems.
A communication system in which the mobile terminal according to the present invention can operate is now described with reference to Fig. 2.
Such a communication system may use different air interfaces and/or physical layers. For example, the air interfaces used by the communication system include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), the universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), the global system for mobile communications (GSM) and so on. As a non-limiting example, the following description relates to a CDMA communication system, but such teaching applies equally to other types of systems.
Referring to Fig. 2, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, base station controllers (BSC) 275 and a mobile switching center (MSC) 280. The MSC 280 is configured to form an interface with a public switched telephone network (PSTN) 290. The MSC 280 is also configured to form an interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul links. The backhaul links may be constructed according to any of several known interfaces, including, for example, E1/T1, ATM, IP, PPP, frame relay, HDSL, ADSL or xDSL. It will be understood that the system shown in Fig. 2 may include a plurality of BSCs 275.
Each BS 270 may serve one or more sectors (or regions), each sector covered by an omnidirectional antenna or an antenna pointing in a specific direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be constructed to support a plurality of frequency assignments, each frequency assignment having a specific spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BS 270 may also be referred to as a base transceiver subsystem (BTS) or by other equivalent terms. In this case, the term "base station" may be used to broadly denote a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site". Alternatively, each sector of a specific BS 270 may be referred to as a plurality of cell sites.
As shown in Fig. 2, a broadcast transmitter (BT) 295 transmits broadcast signals to the mobile terminals 100 operating within the system. The broadcast receiving module 111 shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signals transmitted by the BT 295. Fig. 2 also shows several global positioning system (GPS) satellites 300, which help to locate at least one of the plurality of mobile terminals 100.
Although a plurality of satellites 300 are depicted in Fig. 2, useful positioning information may be obtained with any number of satellites. The GPS module 115 shown in Fig. 1 is typically constructed to cooperate with the satellites 300 to obtain the desired positioning information. Instead of or in addition to GPS tracking techniques, other techniques that can track the position of the mobile terminal may be used. In addition, at least one GPS satellite 300 may selectively or additionally handle satellite DMB transmission.
As a typical operation of the wireless communication system, the BS 270 receives reverse link signals from various mobile terminals 100. The mobile terminals 100 typically engage in calls, messaging and other types of communication. Each reverse link signal received by a specific base station 270 is processed within that BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions, including the coordination of soft handoff procedures between the BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for forming an interface with the PSTN 290. Similarly, the PSTN 290 forms an interface with the MSC 280, the MSC forms an interface with the BSCs 275, and the BSCs 275 correspondingly control the BSs 270 to transmit forward link signals to the mobile terminals 100.
Based on the above mobile terminal hardware structure and communication system, the embodiments of the present invention are proposed.
As shown in Fig. 3, the first embodiment of the present invention provides a facial image preview processing device applied to a mobile terminal capable of taking pictures, the device including:
a symmetry detection module 300, configured to detect in real time the facial image in the photographing preview interface and calculate the user's current rotation amplitude level from the facial symmetry axis;
Further, the symmetry detection module 300 is also configured to determine the facial symmetry axis from the currently detected facial image when the face is detected to be facing the camera directly.
Specifically, in general, when the user starts the camera function of the mobile terminal, the face usually faces the camera directly, and a human face is generally symmetrical within an error range. After the camera is started, the symmetry detection module 300 first determines whether the currently detected face is facing the camera directly; when it is, the facial symmetry axis can be quickly determined from the currently detected facial image. After the facial symmetry axis is determined, the symmetry detection module 300 continues to detect the facial image in the photographing preview interface, calculates the left-right symmetry degree of each detected facial image with respect to the facial symmetry axis, determines from this symmetry degree whether the currently displayed facial image is a frontal face or a turned (side) face, and, in the case of a turned face, further determines the rotation amplitude level from the symmetry degree. Here, the rotation amplitude levels may be set by system default or modified by the user. The rotation amplitude is 0% for a frontal face and 100% for a face turned 90 degrees to the left or right. For example, by default a rotation amplitude within 5% may be regarded as no rotation (i.e., rotation amplitude level 0), a rotation amplitude of 5%-15% as rotation amplitude level 1, a rotation amplitude of 15%-25% as rotation amplitude level 2, and so on.
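As a non-limiting illustration, the default quantization just described (within 5% regarded as level 0, 5%-15% as level 1, 15%-25% as level 2, and so on) can be sketched as follows; the function name and the fixed 10% step are assumptions made only for the example.

```python
def rotation_amplitude_level(amplitude_pct, dead_zone=5.0, step=10.0):
    """Quantize a rotation amplitude (0% = frontal face, 100% = a face turned
    90 degrees) into a rotation amplitude level: amplitudes within the dead
    zone count as level 0, then each further `step` percent adds one level
    (5%-15% -> 1, 15%-25% -> 2, ...)."""
    if amplitude_pct <= dead_zone:
        return 0
    return int((amplitude_pct - dead_zone) // step) + 1
```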
Further, in this embodiment, the symmetry detection module 300 specifically includes:
a face detection unit 301, configured to detect in real time the facial image in the photographing preview interface and to determine each facial local region in the facial image;
a symmetry calculation unit 302, configured to determine the rotation amplitude level according to the distance between the center of each facial local region and the facial symmetry axis.
Specifically, when the face detection unit 301 detects a facial image in the photographing preview interface, it further determines each facial local region in the facial image, for example, the eye regions, the nose region, the lip region, the cheek regions and the face contour. The symmetry calculation unit determines the center of each determined facial local region and calculates the distance between that region and the facial symmetry axis. For example, the distance from the centers of the eye regions to the facial symmetry axis comprises the distance L1 from the center of the left eye to the facial symmetry axis and the distance L2 from the center of the right eye to the facial symmetry axis. When L1 = L2, the face can be regarded as symmetrical, i.e., a frontal face; when L1 ≠ L2, it is a turned face. In the continuous detection process, the symmetry calculation unit 302 can determine the current rotation trend of the user's face from the trend of change of the distances between the centers of the facial local regions and the facial symmetry axis; for example, if the distance L1 from the left eye center to the facial symmetry axis gradually increases while the distance L2 from the right eye center to the facial symmetry axis gradually decreases, it can be assumed that the face is turning from left to right. Different rotation amplitudes necessarily produce different distances between the centers of the facial local regions and the facial symmetry axis; therefore, the symmetry calculation unit can determine the rotation amplitude from these distances and then obtain the rotation amplitude level.
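The following sketch illustrates one possible way to derive a rotation amplitude from the eye-center distances L1 and L2 to the symmetry axis. The |L1 − L2| / (L1 + L2) normalization and the helper names are assumptions made for illustration only; the embodiment merely requires that the amplitude be derived from how these distances change.

```python
import numpy as np

def distance_to_axis(point, axis_point, axis_dir):
    """Perpendicular distance from a 2-D point to the facial symmetry axis,
    described by a point on the axis and a unit direction vector."""
    d = np.asarray(point, dtype=float) - np.asarray(axis_point, dtype=float)
    # magnitude of the component of d orthogonal to the axis direction
    return abs(d[0] * axis_dir[1] - d[1] * axis_dir[0])

def estimate_rotation_amplitude(left_eye, right_eye, axis_point, axis_dir):
    """Rough rotation-amplitude estimate (assumed formula): L1 == L2 gives 0%
    (frontal face), a fully one-sided configuration approaches 100%."""
    l1 = distance_to_axis(left_eye, axis_point, axis_dir)
    l2 = distance_to_axis(right_eye, axis_point, axis_dir)
    if l1 + l2 == 0:
        return 0.0
    return 100.0 * abs(l1 - l2) / (l1 + l2)
```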
In other embodiments, the symmetry calculation unit 302 may also determine the rotation amplitude in other ways. For example, after the face detection unit 301 has determined each facial local region in the facial image, the symmetry calculation unit 302 may judge whether the face is symmetrical according to the degree of coincidence of the organs on the two sides of the facial symmetry axis, and determine the rotation amplitude directly from the symmetry degree. As another example, the symmetry calculation unit 302 may directly compare the pixels at symmetrical positions on the two sides: two pixels whose difference is smaller than a preset threshold are considered coincident, the number of coincident pixels of the two half-face pictures on either side of the facial symmetry axis is counted, and the left-right half-face symmetry degree is judged from the proportion of coincident pixels to the total pixels of the face region.
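A minimal sketch of the mirrored-pixel comparison just described, assuming a cropped grayscale face image whose symmetry axis is the vertical line at column `axis_col`; the simplified vertical-axis assumption and the NumPy form are illustrative only.

```python
import numpy as np

def half_face_symmetry(face_gray, axis_col, diff_thresh=5):
    """Fraction of mirrored pixel pairs (about a vertical symmetry axis at
    column `axis_col`) whose grey-level difference is below `diff_thresh`;
    a value near 1.0 indicates a nearly frontal, symmetrical face."""
    h, w = face_gray.shape
    half = min(axis_col, w - axis_col - 1)
    if half <= 0:
        return 0.0
    left = face_gray[:, axis_col - half:axis_col].astype(int)
    right = face_gray[:, axis_col + 1:axis_col + 1 + half].astype(int)
    right_mirrored = right[:, ::-1]           # mirror the right half
    coincident = np.abs(left - right_mirrored) < diff_thresh
    return float(coincident.mean())
```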
a beauty adjustment module 310, configured to select a corresponding beauty level according to the rotation amplitude level;
a beauty display module 320, configured to perform a beauty operation on the detected facial image according to the selected beauty level and to display the facial image after the beauty operation.
Specifically, after the rotation amplitude level has been obtained, the beauty adjustment module 310 can, according to the preset correspondence between rotation amplitude levels and beauty levels, automatically select the beauty level corresponding to the rotation amplitude level; the higher the rotation amplitude level, the lower the beauty level selected. For example, every increase of two rotation amplitude levels may lower the beauty level by one, until the beauty level reaches 0 (no beauty processing). The beauty display module 320 performs beauty operations, such as whitening, face thinning, skin smoothing and eye enlarging, on the detected facial image according to the selected beauty level, and then displays the facial image after the beauty operation in the photographing preview interface.
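The example rule just described (every two additional rotation amplitude levels lower the beauty level by one, down to 0) can be written in a single line; the maximum beauty level of 5 used here is an assumed default.

```python
def beauty_level_for_rotation(rotation_level, max_beauty_level=5):
    """Every two extra rotation amplitude levels lower the beauty level by
    one, never going below 0 (no beautification)."""
    return max(max_beauty_level - rotation_level // 2, 0)

# e.g. rotation level 0 or 1 -> beauty level 5 (full effect),
#      rotation level 4 or 5 -> beauty level 3,
#      rotation level 10 or more -> beauty level 0 (no beautification)
```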
Further, referring to Fig. 4, in this embodiment the facial image preview processing device also includes:
a beauty preset module 330, configured to preset the correspondence between the rotation amplitude levels and the beauty levels, wherein the higher the rotation amplitude level, the lower the corresponding beauty level.
Specifically, referring also to Fig. 6, which shows the beauty preset interface of the mobile terminal camera, the user can select through this interface the preview mode of the photographing preview interface, including an intelligent gradual-change mode and an ordinary mode. The intelligent gradual-change mode is the intelligent mode in which the facial image preview processing device of the present invention is applied; the ordinary mode does not apply it, so abrupt changes in the beauty effect may occur. After selecting the intelligent gradual-change mode, the user can further set the correspondence between the rotation amplitude levels and the beauty levels by entering the beauty preset interface of the present invention shown in Fig. 7 and selecting different rotation amplitude level ranges and beauty level ranges; different selections give different correspondences. For example, if the rotation amplitude level range is fixed at 5 levels and the beauty level range is 5 levels, each increase of one rotation amplitude level lowers the beauty level by one; if the rotation amplitude level range is fixed at 5 levels and the beauty level range is 10 levels, each increase of one rotation amplitude level lowers the beauty level by two. It should be understood that when a different rotation amplitude level range is selected, the correspondence with the rotation amplitude changes accordingly, but the actual range of the rotation amplitude does not change: it is still 0% for a frontal face and 100% for a face turned 90 degrees to the left or right. For example, taking the default that a rotation amplitude within 5% is regarded as no rotation (i.e., rotation amplitude level 0): with 5 rotation amplitude levels, a rotation amplitude of 5%-24% corresponds to level 1, 24%-43% to level 2, and so on; with 10 rotation amplitude levels, a rotation amplitude of 5%-14.5% corresponds to level 1, 14.5%-24% to level 2, and so on. The beauty preset module 330 can determine and store the correspondence between the rotation amplitude levels and the beauty levels according to the above user input, or, in the absence of user input, according to the default correspondence given by the factory-default rotation amplitude level range and beauty level range.
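A sketch of how the preset correspondence and the implied quantization bands could be generated from the ranges chosen on the preset interface; the uniform division of the 5%-100% span and the rounding rule are assumptions consistent with the numerical examples above (5 levels: 5%-24%, 24%-43%, ...; 10 levels: 5%-14.5%, 14.5%-24%, ...).

```python
def build_beauty_mapping(n_rotation_levels=5, n_beauty_levels=5):
    """Preset correspondence between rotation amplitude levels and beauty
    levels: with equal ranges each extra rotation level lowers the beauty
    level by one; with a beauty range twice as large, by two."""
    step = n_beauty_levels / n_rotation_levels
    return {r: max(int(round(n_beauty_levels - r * step)), 0)
            for r in range(n_rotation_levels + 1)}

def rotation_level_boundaries(n_levels, dead_zone=5.0):
    """Quantization bands implied by the examples: the 0-5% dead zone maps to
    level 0 and the remaining 95% is split evenly among the chosen number of
    levels (5 levels -> 5%-24%, 24%-43%, ...; 10 levels -> 5%-14.5%, ...)."""
    width = (100.0 - dead_zone) / n_levels
    return [(dead_zone + i * width, dead_zone + (i + 1) * width)
            for i in range(n_levels)]
```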
Further, referring to Fig. 5, in this embodiment the facial image preview processing device also includes:
a preview correction module 340, configured to, when no face can be detected in the current frame but a face was detected in the previous frame, compare the current frame with the previous frame and correct the image data of the current frame according to the comparison result.
When the face can be recognized, the facial image preview processing device can adjust the beauty level correspondingly according to the rotation amplitude level. However, short-term face recognition failures caused by defects in the face recognition algorithm or by delays also occur. In such cases, the user can still see a face in the preview interface, but the mobile terminal does not show the beauty effect because no face has been detected; at this point the preview correction module 340 can correct the preview display effect.
Specifically, when the previous frame still allowed the face detection unit 301 to detect a face but the current frame does not, the face may be unrecognizable due to an algorithm error, or the user may have moved suddenly. In this case the raw image data of the two successive frames needs to be compared: the un-beautified image data of the previous frame is compared with the image data of the current frame, and if the comparison finds the two pictures essentially unchanged, it is considered that beauty processing should be applied to the current frame. Since the face region of the current frame cannot be recognized at this moment, the image data of the current frame can be corrected according to the beautified image data of the previous frame, so that it carries the beauty effect. In detail, the region data after beautification of the previous frame's face region can be extracted and combined with the face data of the co-located region of the current frame, by region matching or pixel matching with a preset weight (e.g., 100% or 50%), to synthesize new face data. For example, the eye region after beautification of the previous frame can be matched to its position in the current frame, and the block data of the previous frame's eye region can be fused with the block data of the current frame according to the preset ratio to obtain the new data for that position. Pixel matching works in the same way, the only difference being that the data fusion is performed pixel by pixel rather than region by region.
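A sketch of the correction step under the assumption that the previous frame's beautified face region is available as a rectangular bounding box; the blending weight corresponds to the preset ratio mentioned above (1.0 simply copies the previous beautified region, 0.5 blends the two halves equally), and the pixel-matching variant differs only in applying the weight per pixel.

```python
import numpy as np

def correct_with_previous_beauty(cur_frame, prev_beautified, region, weight=0.5):
    """Blend the beautified face region of the previous frame into the
    co-located region of the current frame with a preset weight.
    `region` is an assumed (y0, y1, x0, x1) bounding box of the face area
    recorded for the previous frame."""
    y0, y1, x0, x1 = region
    out = cur_frame.astype(np.float32).copy()
    out[y0:y1, x0:x1] = (weight * prev_beautified[y0:y1, x0:x1]
                         + (1.0 - weight) * out[y0:y1, x0:x1])
    return out.astype(cur_frame.dtype)
```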
The ways of comparing the raw image data of the two successive frames include, but are not limited to, the following. (1) Overall comparison: the difference between each pixel of the previous frame and the pixel at the corresponding position of the current frame is computed, and a difference below a preset threshold (e.g., a pixel difference of 5, or a pixel count difference percentage of 1%, etc.) is regarded as the same pixel; the percentage of identical pixels among all pixels is counted, and if this value exceeds a preset threshold (e.g., 95%), it is concluded that no data change has occurred between the previous frame and the current frame, and the corresponding positions of the currently displayed frame are corrected using the data after beautification of the previous frame. (2) Region comparison: since face information was detected in the previous frame, the distinguishing pixel regions of the face information (such as the face region and part of its surroundings) and their positions can be recorded directly, and these pixels are matched one by one with the pixels of the corresponding regions at the corresponding positions of the current frame. If the difference between corresponding pixels is below a preset threshold (e.g., a pixel value difference of 5, or a pixel count difference percentage of 1%, etc.), they are regarded as the same pixel; the percentage of identical pixels among all pixels is counted, and if this value exceeds a preset threshold (e.g., 95%), it is concluded that no data change has occurred between the previous frame and the current frame, and the corresponding positions of the currently displayed frame are corrected using the data after beautification of the previous frame.
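Both comparison strategies can be expressed as the check sketched below, assuming the example thresholds given above (pixel difference 5, identical-pixel ratio 95%); restricting the check to the recorded face region yields the region-comparison variant. In use, the current frame would be corrected with the previous beautified data only when this check reports that the scene is unchanged.

```python
import numpy as np

def frames_unchanged(prev_raw, cur_raw, region=None,
                     pixel_diff_thresh=5, same_ratio_thresh=0.95):
    """Return True when the previous and current raw frames are essentially
    identical: pixels differing by less than `pixel_diff_thresh` count as the
    same, and the frames are 'unchanged' when the fraction of same pixels is
    at least `same_ratio_thresh`.  Passing a (y0, y1, x0, x1) `region`
    restricts the check to the face area recorded for the previous frame."""
    a, b = prev_raw.astype(int), cur_raw.astype(int)
    if region is not None:
        y0, y1, x0, x1 = region
        a, b = a[y0:y1, x0:x1], b[y0:y1, x0:x1]
    same = np.abs(a - b) < pixel_diff_thresh
    return float(same.mean()) >= same_ratio_thresh
```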
With the facial image preview processing device proposed by the present invention, the symmetry detection module 300 detects in real time the facial image in the photographing preview interface and calculates the user's current rotation amplitude level from the facial symmetry axis; the beauty adjustment module 310 selects a corresponding beauty level according to the rotation amplitude level; and the beauty display module 320 performs a beauty operation on the detected facial image according to the selected beauty level and displays the facial image after the beauty operation. By calculating the user's rotation amplitude level and selecting a corresponding beauty level, the beauty effect of the facial image within the preview range changes gradually as the rotation amplitude changes, which avoids abrupt jumps in the beauty effect of faces shown in the preview range and makes the display more natural.
As shown in Fig. 8, the second embodiment of the present invention further provides a facial image preview processing method applied to a mobile terminal capable of taking pictures, the method including:
Step 800: detecting in real time the facial image in the photographing preview interface and calculating the user's current rotation amplitude level from the facial symmetry axis;
Further, before calculating the user's current rotation amplitude level from the facial symmetry axis, the method also includes:
determining the facial symmetry axis from the currently detected facial image when the face is detected to be facing the camera directly.
Specifically, in general, when the user starts the camera function of the mobile terminal, the face usually faces the camera directly, and a human face is generally symmetrical within an error range. After the camera is started, the mobile terminal first determines whether the currently detected face is facing the camera directly; when it is, the facial symmetry axis can be quickly determined from the currently detected facial image. After the facial symmetry axis is determined, the mobile terminal continues to detect the facial image in the photographing preview interface, calculates the left-right symmetry degree of each detected facial image with respect to the facial symmetry axis, determines from this symmetry degree whether the currently displayed facial image is a frontal face or a turned face, and, in the case of a turned face, further determines the rotation amplitude level from the symmetry degree. Here, the rotation amplitude levels may be set by system default or modified by the user. The rotation amplitude is 0% for a frontal face and 100% for a face turned 90 degrees to the left or right. For example, by default a rotation amplitude within 5% may be regarded as no rotation (i.e., rotation amplitude level 0), a rotation amplitude of 5%-15% as rotation amplitude level 1, a rotation amplitude of 15%-25% as rotation amplitude level 2, and so on.
Further, referring to Fig. 9, in this embodiment step 800 specifically includes:
Step 900: detecting in real time the facial image in the photographing preview interface and determining each facial local region in the facial image;
Step 910: determining the rotation amplitude level according to the distances between the centers of the facial local regions and the facial symmetry axis.
Specifically, when the mobile terminal detects a facial image in the photographing preview interface, it further determines each facial local region in the facial image, for example, the eye regions, the nose region, the lip region, the cheek regions and the face contour. The center of each determined facial local region is found, and the distance between that region and the facial symmetry axis is calculated. For example, the distance from the centers of the eye regions to the facial symmetry axis comprises the distance L1 from the center of the left eye to the facial symmetry axis and the distance L2 from the center of the right eye to the facial symmetry axis. When L1 = L2, the face can be regarded as symmetrical, i.e., a frontal face; when L1 ≠ L2, it is a turned face. In the continuous detection process, the mobile terminal can determine the current rotation trend of the user's face from the trend of change of the distances between the centers of the facial local regions and the facial symmetry axis; for example, if the distance L1 from the left eye center to the facial symmetry axis gradually increases while the distance L2 from the right eye center gradually decreases, it can be assumed that the face is turning from left to right. Different rotation amplitudes necessarily produce different distances between the centers of the facial local regions and the facial symmetry axis; therefore, the rotation amplitude can be determined from these distances, and the rotation amplitude level can then be obtained.
In other embodiments, the mobile terminal may also determine the rotation amplitude in other ways. For example, after determining each facial local region in the facial image, the mobile terminal may judge whether the face is symmetrical according to the degree of coincidence of the organs on the two sides of the facial symmetry axis, and determine the rotation amplitude directly from the symmetry degree. As another example, the mobile terminal may directly compare the pixels at symmetrical positions on the two sides: two pixels whose difference is smaller than a preset threshold are considered coincident, the number of coincident pixels of the two half-face pictures on either side of the facial symmetry axis is counted, and the left-right half-face symmetry degree is judged from the proportion of coincident pixels to the total pixels of the face region.
Step 810: selecting a corresponding beauty level according to the rotation amplitude level;
Step 820: performing a beauty operation on the detected facial image according to the selected beauty level and displaying the facial image after the beauty operation.
Specifically, after the rotation amplitude level has been obtained, the mobile terminal can, according to the preset correspondence between rotation amplitude levels and beauty levels, automatically select the beauty level corresponding to the rotation amplitude level; the higher the rotation amplitude level, the lower the beauty level selected. For example, every increase of two rotation amplitude levels may lower the beauty level by one, until the beauty level reaches 0 (i.e., no beauty processing). According to the selected beauty level, the mobile terminal performs beauty operations, such as whitening, face thinning, skin smoothing and eye enlarging, on the detected facial image, and then displays the facial image after the beauty operation in the photographing preview interface.
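As a purely illustrative stand-in for step 820, the sketch below scales a simple brightness gain with the selected beauty level; the actual beauty operations (whitening, face thinning, skin smoothing, eye enlarging) are not specified by the embodiment, so this linear gain is only an assumed example of an effect whose strength follows the beauty level.

```python
import numpy as np

def apply_whitening(face_region, beauty_level, max_level=5, max_gain=0.3):
    """Toy whitening step whose strength scales linearly with the selected
    beauty level: level 0 leaves the region untouched, the maximum level
    brightens it by `max_gain`."""
    gain = 1.0 + max_gain * (beauty_level / float(max_level))
    brightened = face_region.astype(np.float32) * gain
    return np.clip(brightened, 0, 255).astype(np.uint8)
```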
Further, in this embodiment, the facial image preview processing method also includes the following step:
presetting the correspondence between the rotation amplitude levels and the beauty levels, wherein the higher the rotation amplitude level, the lower the corresponding beauty level.
Specifically, referring also to Fig. 6, which shows the beauty preset interface of the mobile terminal camera, the user can select through this interface the preview mode of the photographing preview interface, including an intelligent gradual-change mode and an ordinary mode. The intelligent gradual-change mode is the intelligent mode in which the facial image preview processing of the present invention is applied; the ordinary mode does not apply it, so abrupt changes in the beauty effect may occur. After selecting the intelligent gradual-change mode, the user can further set the correspondence between the rotation amplitude levels and the beauty levels by entering the beauty preset interface of the present invention shown in Fig. 7 and selecting different rotation amplitude level ranges and beauty level ranges; different selections give different correspondences. For example, if the rotation amplitude level range is fixed at 5 levels and the beauty level range is 5 levels, each increase of one rotation amplitude level lowers the beauty level by one; if the rotation amplitude level range is fixed at 5 levels and the beauty level range is 10 levels, each increase of one rotation amplitude level lowers the beauty level by two. It should be understood that when a different rotation amplitude level range is selected, the correspondence with the rotation amplitude changes accordingly, but the actual range of the rotation amplitude does not change: it is still 0% for a frontal face and 100% for a face turned 90 degrees to the left or right. For example, taking the default that a rotation amplitude within 5% is regarded as no rotation (i.e., rotation amplitude level 0): with 5 rotation amplitude levels, a rotation amplitude of 5%-24% corresponds to level 1, 24%-43% to level 2, and so on; with 10 rotation amplitude levels, a rotation amplitude of 5%-14.5% corresponds to level 1, 14.5%-24% to level 2, and so on. The mobile terminal can determine and store the correspondence between the rotation amplitude levels and the beauty levels according to the above user input, or, in the absence of user input, according to the default correspondence given by the factory-default rotation amplitude level range and beauty level range.
Further, in the present embodiment, the facial image preview processing method further comprises the following step:
when no face can be detected in the current frame but a face was detected in the previous frame, comparing the current frame with the previous frame, and correcting the image data of the current frame according to the comparison result.
When the face is recognizable, the facial image preview processing apparatus can adjust the beautification level according to the corresponding face-turning amplitude level. However, current face recognition algorithms may momentarily fail or lag because of defects or delays in the algorithm. In such cases the user can still see a face in the preview interface, but because no face is detected the mobile terminal does not show the beautification effect, and the preview display then needs to be corrected.
Specifically, when a face can still be detected in the previous frame picture but cannot be detected in the current frame picture, the failure of face recognition may be caused by an algorithm error, or the user may have moved suddenly. In this case the raw image data of the two frames needs to be compared: the un-beautified image data of the previous frame is compared with the image data of the current frame. If the comparison finds that the two pictures are unchanged, the current frame is considered to still require beautification. Because the face region of the current frame cannot be identified at this moment, the image data of the current frame can be corrected with the beautified image data of the previous frame so that it carries the beautification effect. In detail, the region data of the previous frame's face region after beautification can be extracted, and region matching or pixel matching can be performed, using a preset ratio (for example 100% or 50%) as the weight, against the face data of the co-located region of the current frame to synthesize new face data. For example, the eye region of the previous frame after beautification can be matched to its position in the current frame, and the block data of the previous frame's eye region and the block data of the current frame can be fused according to the preset ratio to obtain the new data for that position. Pixel matching works in the same way, the only difference being that the data are fused pixel by pixel instead of region by region.
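A minimal sketch of the region-level fusion described above, assuming frames are NumPy image arrays and that the face region is known from the previous frame's last successful detection; the 50% blend ratio and the bounding-box representation are illustrative assumptions rather than the patent's stated implementation:

```python
import numpy as np

def fuse_face_region(prev_beautified: np.ndarray, current: np.ndarray,
                     face_box: tuple, ratio: float = 0.5) -> np.ndarray:
    """Blend the beautified face region of the previous frame into the
    co-located region of the current frame at a preset ratio.

    face_box is (x, y, w, h) from the previous frame's face detection.
    ratio = 1.0 copies the previous beautified region outright;
    ratio = 0.5 averages the two frames.
    """
    x, y, w, h = face_box
    out = current.copy()
    prev_region = prev_beautified[y:y + h, x:x + w].astype(np.float32)
    cur_region = current[y:y + h, x:x + w].astype(np.float32)
    blended = ratio * prev_region + (1.0 - ratio) * cur_region
    out[y:y + h, x:x + w] = blended.astype(current.dtype)
    return out
```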
The ways of comparing the raw image data of the two frames include, but are not limited to, the following. 1. Overall comparison. Each pixel of the previous frame is compared with the pixel at the corresponding position of the current frame; if the difference is below a preset threshold (for example a pixel value difference of 5, or a pixel difference percentage of 1%), the two are regarded as the same pixel. The percentage of identical pixels among all pixels is then counted; if this value exceeds a preset threshold (for example 95%), it is concluded that no data change has occurred between the previous frame and the current frame, and the corresponding position of the currently displayed frame is corrected using the beautified data of the previous frame. 2. Region comparison. Since face information was detected in the previous frame, the distinguishing pixel regions of the face information (for example the face region and part of its periphery) and their positions can be recorded directly and matched one by one against the pixels of the corresponding regions at the corresponding positions of the current frame. If the difference between corresponding pixels is below a preset threshold (for example a pixel value difference of 5, or a pixel difference percentage of 1%), they are regarded as the same pixel. The percentage of identical pixels among all pixels is counted; if this value exceeds a preset threshold (for example 95%), it is concluded that no data change has occurred between the previous frame and the current frame, and the corresponding position of the currently displayed frame is corrected using the beautified data of the previous frame.
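A sketch of the overall-comparison check, assuming grayscale NumPy frames and the example thresholds quoted above (per-pixel difference of 5, 95% identical pixels); the helper name and threshold values are illustrative:

```python
import numpy as np

def frames_unchanged(prev_raw: np.ndarray, cur_raw: np.ndarray,
                     pixel_threshold: int = 5,
                     same_ratio_threshold: float = 0.95) -> bool:
    """Overall comparison: decide whether the current frame shows
    essentially the same picture as the previous frame.

    Pixels whose absolute difference is at most pixel_threshold count as
    identical; if identical pixels exceed same_ratio_threshold of all
    pixels, the previous frame's beautified data may be reused.
    """
    diff = np.abs(prev_raw.astype(np.int16) - cur_raw.astype(np.int16))
    same_ratio = np.mean(diff <= pixel_threshold)
    return same_ratio >= same_ratio_threshold

# Usage: if frames_unchanged(prev_raw, cur_raw) is True and the current
# frame has no face detection, correct it with the previous beautified
# frame (e.g. via fuse_face_region above).
```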
With the facial image preview processing method proposed by the present invention, the mobile terminal detects the facial image in the photographing preview interface in real time and calculates the user's current face-turning amplitude level according to the facial symmetry axis; selects the beautification level corresponding to the face-turning amplitude level; and performs a beautification operation on the detected facial image according to the selected beautification level and displays the facial image after the beautification operation. Thus, by calculating the user's face-turning amplitude level and selecting the corresponding beautification level, the beautification effect of the facial image within the preview range changes gradually with the change of the face-turning amplitude, abrupt jumps of the beautification effect shown on the face in the preview range are avoided, and the display is made more user-friendly.
It should be noted that, herein, the terms "comprising", "including" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or apparatus comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or apparatus that comprises that element.
The above embodiments of the present invention are described for illustration only and do not represent the merits of the embodiments; where no conflict arises, the features in the embodiments of the present invention may be combined with one another in implementation.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk or an optical disc) and includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the scope of the claims of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A facial image preview processing apparatus, applied to a mobile terminal capable of taking photographs, characterized in that the apparatus comprises:
a symmetry detection module, configured to detect in real time the facial image in the photographing preview interface and calculate the user's current face-turning amplitude level according to the facial symmetry axis;
a beautification adjustment module, configured to select the beautification level corresponding to the face-turning amplitude level;
a beautification display module, configured to perform a beautification operation on the detected facial image according to the selected beautification level and display the facial image after the beautification operation.
2. The facial image preview processing apparatus according to claim 1, characterized in that the symmetry detection module is further configured to:
when a face facing the camera directly is detected, determine the facial symmetry axis according to the currently detected facial image.
3. The facial image preview processing apparatus according to claim 1, characterized in that the symmetry detection module specifically comprises:
a face detection unit, configured to detect in real time the facial image in the photographing preview interface and determine each facial local region in the facial image;
a symmetry calculation unit, configured to determine the face-turning amplitude level according to the distance between the center of each facial local region and the facial symmetry axis.
4. The facial image preview processing apparatus according to claim 1, characterized in that the facial image preview processing apparatus further comprises:
a beautification presetting module, configured to preset the correspondence between the face-turning amplitude levels and the beautification levels, wherein the higher the face-turning amplitude level, the lower its corresponding beautification level.
5. The facial image preview processing apparatus according to claim 1, characterized in that the facial image preview processing apparatus further comprises:
a preview correction module, configured to, when no face can be detected in the current frame and a face can be detected in the previous frame, compare the current frame with the previous frame and correct the image data of the current frame according to the comparison result.
6. A facial image preview processing method, applied to a mobile terminal capable of taking photographs, characterized in that the method comprises:
detecting in real time the facial image in the photographing preview interface, and calculating the user's current face-turning amplitude level according to the facial symmetry axis;
selecting the beautification level corresponding to the face-turning amplitude level;
performing a beautification operation on the detected facial image according to the selected beautification level, and displaying the facial image after the beautification operation.
7. The facial image preview processing method according to claim 6, characterized in that, before calculating the user's current face-turning amplitude level according to the facial symmetry axis, the method further comprises:
when a face facing the camera directly is detected, determining the facial symmetry axis according to the currently detected facial image.
8. The facial image preview processing method according to claim 6, characterized in that detecting in real time the facial image in the photographing preview interface and calculating the user's current face-turning amplitude level according to the facial symmetry axis specifically comprises:
detecting in real time the facial image in the photographing preview interface, and determining each facial local region in the facial image;
determining the face-turning amplitude level according to the distance between the center of each facial local region and the facial symmetry axis.
9. The facial image preview processing method according to claim 6, characterized in that, before detecting in real time the facial image in the photographing preview interface, the method further comprises:
presetting the correspondence between the face-turning amplitude levels and the beautification levels, wherein the higher the face-turning amplitude level, the lower its corresponding beautification level.
10. The facial image preview processing method according to claim 6, characterized in that the method further comprises:
when no face can be detected in the current frame and a face can be detected in the previous frame, comparing the current frame with the previous frame, and correcting the image data of the current frame according to the comparison result.
CN201611051275.3A 2016-11-25 2016-11-25 Facial image preview processing method and processing device Pending CN106791365A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611051275.3A CN106791365A (en) 2016-11-25 2016-11-25 Facial image preview processing method and processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611051275.3A CN106791365A (en) 2016-11-25 2016-11-25 Facial image preview processing method and processing device

Publications (1)

Publication Number Publication Date
CN106791365A true CN106791365A (en) 2017-05-31

Family

ID=58911179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611051275.3A Pending CN106791365A (en) 2016-11-25 2016-11-25 Facial image preview processing method and processing device

Country Status (1)

Country Link
CN (1) CN106791365A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070183638A1 (en) * 2000-06-14 2007-08-09 Minolta Co., Ltd. Image extracting apparatus and image extracting method
US20090115864A1 (en) * 2007-11-02 2009-05-07 Sony Corporation Imaging apparatus, method for controlling the same, and program
CN103413270A (en) * 2013-08-15 2013-11-27 北京小米科技有限责任公司 Method and device for image processing and terminal device
CN104715236A (en) * 2015-03-06 2015-06-17 广东欧珀移动通信有限公司 Face beautifying photographing method and device
CN105046660A (en) * 2015-07-02 2015-11-11 广东欧珀移动通信有限公司 Image beautifying method and device
CN106470315A (en) * 2015-08-20 2017-03-01 卡西欧计算机株式会社 Image processing apparatus and image processing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
赵淼: "面向人机交互的人脸识别方法研究", 《中国优秀硕士学位论文全文数据库(电子期刊)信息科技辑》 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107566736A (en) * 2017-09-30 2018-01-09 努比亚技术有限公司 A kind of grasp shoot method and mobile terminal
US10929646B2 (en) 2017-10-31 2021-02-23 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and apparatus for image processing, and computer-readable storage medium
CN109191410A (en) * 2018-08-06 2019-01-11 腾讯科技(深圳)有限公司 A kind of facial image fusion method, device and storage medium
CN110096958A (en) * 2019-03-27 2019-08-06 深圳和而泰家居在线网络科技有限公司 A kind of method, apparatus and calculating equipment of identification face image
CN110598648A (en) * 2019-09-17 2019-12-20 江苏慧眼数据科技股份有限公司 Video face detection method, video face detection unit and system
CN110598648B (en) * 2019-09-17 2023-05-09 无锡慧眼人工智能科技有限公司 Video face detection method, video face detection unit and system
CN112135047A (en) * 2020-09-23 2020-12-25 努比亚技术有限公司 Image processing method, mobile terminal and computer storage medium
CN114845048A (en) * 2022-04-06 2022-08-02 福建天创信息科技有限公司 Photographing method and system based on intelligent terminal
CN114845048B (en) * 2022-04-06 2024-01-19 福建天创信息科技有限公司 Photographing method and system based on intelligent terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20170531)