CN105979194A - Video image processing apparatus and method - Google Patents
- Publication number: CN105979194A (application CN201610362164.8A)
- Authority
- CN
- China
- Prior art keywords
- face
- video
- video image
- module
- tool
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
Abstract
The invention discloses a video image processing apparatus and method. The apparatus comprises a collection module, a beautification module, and a processing module. The collection module collects every frame of the preview video and of the opposite-end video, where the preview video is the video shown for preview on the current terminal's display interface during a video call, and the opposite-end video is the video to be shown on the display interface of the other party's terminal. The beautification module applies beautification processing to each collected video frame. The processing module displays the beautified preview video on the current terminal's display interface and sends the beautified opposite-end video to the other party's terminal. With this technical scheme, the display effect of the video images can be improved and the user experience enhanced.
Description
Technical field
The present invention relates to the field of terminal applications, and in particular to a video image processing apparatus and method.
Background technology
With the rapid development of terminal application technology, video calling has increasingly become a common form of everyday communication. In current video call technology, however, the video images displayed on both parties' terminals can appear dim because of factors such as lighting or camera angle, and the visual experience is poor. Face images in particular can look dull and lackluster, which seriously detracts from the user's appearance. This is unacceptable to users who pay special attention to their personal image, such as many female users. There is therefore an urgent need for an effective solution that improves the display effect of video images and improves the user experience.
Summary of the invention
The main object of the present invention is to propose a video image processing apparatus and method capable of improving the display effect of video images and improving the user experience.
To achieve the above object, the invention provides a video image processing apparatus. The apparatus includes an acquisition module, a beautification module, and a processing module.
The acquisition module collects every frame of the preview video and of the opposite-end video. The preview video is the video shown for preview on the current terminal's display interface during a call; the opposite-end video is the video to be shown on the display interface of the other party's terminal.
The beautification module applies beautification processing to each collected video frame.
The processing module displays the beautified preview video on the current terminal's display interface, and sends the beautified opposite-end video to the other party's terminal.
Optionally, the apparatus further includes a face recognition module, a judgment module, and a determination module.
The face recognition module performs face recognition on each video frame according to a preset face recognition algorithm.
The judgment module determines, from the recognition result of the face recognition module, whether the currently recognized frame contains a face image.
The determination module orders the beautification module to beautify the current frame when the judgment module finds that the frame contains a face image, and ignores the current frame when the judgment module finds that it does not.
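The gating logic above can be sketched in a few lines. This is an illustrative stand-in, not the patent's implementation: `detect_faces` and `beautify` are hypothetical placeholders for the preset face recognition algorithm and the beautification module.

```python
def detect_faces(frame):
    # Stand-in for the preset face recognition algorithm; a real system
    # might use a Haar cascade or a CNN detector here.
    return frame.get("faces", [])

def beautify(frame):
    # Stand-in for the beautification module.
    return dict(frame, beautified=True)

def process_frame(frame):
    """Beautify the frame only if it contains at least one face;
    frames without a face image are passed through untouched."""
    if detect_faces(frame):
        return beautify(frame)
    return frame
```

The design point is simply that beautification is skipped per frame, so frames without faces cost nothing beyond the detection pass.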
Optionally, the beautification module beautifies each collected frame as follows:
Identify the preset facial features in each recognized face image.
Retrieve a preset beautification package; the package contains one or more beautification tools.
Apply the corresponding beautification to the facial features using the one or more beautification tools.
Optionally,
the facial features include the face, the eyes, and the lips;
the beautification tools include a whitening tool, a face-slimming tool, a dark-circle removal tool, and a lip-plumping tool; and
applying the corresponding beautification to the facial features includes:
whitening the face with the whitening tool;
slimming the face with the face-slimming tool;
removing dark circles under the eyes with the dark-circle removal tool; and
plumping the lips with the lip-plumping tool.
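As a minimal illustration of one such tool, whitening can be approximated by lifting pixel brightness inside the detected face region. The sketch below works on a nested-list grayscale "image" in pure Python; the `gain` curve is an assumption, and a real tool would operate on color frames (e.g. via OpenCV).

```python
def whiten(image, region, gain=1.2):
    """Brighten pixels inside region=(x0, y0, x1, y1), clamping at 255.

    image: list of rows of 0-255 intensity values; returns a new image.
    """
    x0, y0, x1, y1 = region
    out = [row[:] for row in image]       # copy so the input is untouched
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = min(255, int(out[y][x] * gain))
    return out
```

The other tools (face slimming, dark-circle removal, lip plumping) would follow the same pattern of a localized transform on the identified feature region.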
Optionally, the processing module sends the beautified opposite-end video to the other party's terminal as follows:
Store each beautified frame of the opposite-end video in its acquisition order.
Compress all frames of the beautified opposite-end video according to a preset video compression algorithm.
Convert the compressed video into an electrical signal in analog form, convert the analog signal into a digital signal, and perform signal processing on the digital signal.
Send the processed digital signal to the other party's terminal.
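The outbound path can be sketched as follows. `zlib` here is only a stand-in for the preset video compression algorithm, and the analog/digital conversion and radio transmission described above are left to the modem layer and omitted.

```python
import zlib

def prepare_outbound(frames):
    """frames: beautified per-frame byte strings, in acquisition order.
    Returns one compressed blob ready to hand to the transport."""
    payload = b"".join(frames)            # preserve acquisition order
    return zlib.compress(payload)         # codec stand-in

def receive(blob, frame_len):
    """Inverse path at the peer (fixed frame length assumed for brevity)."""
    data = zlib.decompress(blob)
    return [data[i:i + frame_len] for i in range(0, len(data), frame_len)]
```

Keeping frames in acquisition order before compression matters because video codecs exploit temporal redundancy between consecutive frames.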
To achieve the above object, the invention also provides a video image processing method. The method includes:
collecting every frame of the preview video and of the opposite-end video, where the preview video is the video shown for preview on the current terminal's display interface during a call and the opposite-end video is the video to be shown on the display interface of the other party's terminal;
applying beautification processing to each collected video frame; and
displaying the beautified preview video on the current terminal's display interface, and sending the beautified opposite-end video to the other party's terminal.
Optionally, the method further includes:
judging from the recognition result whether the currently recognized frame contains a face image;
beautifying the current frame when it contains a face image; and
ignoring the current frame when it does not.
Optionally, beautifying each collected frame includes:
performing face recognition on each frame according to a preset face recognition algorithm;
identifying the preset facial features in each recognized face image;
retrieving a preset beautification package, which contains one or more beautification tools; and
applying the corresponding beautification to the facial features using the one or more beautification tools.
Optionally,
the facial features include the face, the eyes, and the lips;
the beautification tools include a whitening tool, a face-slimming tool, a dark-circle removal tool, and a lip-plumping tool; and
applying the corresponding beautification to the facial features includes:
whitening the face with the whitening tool;
slimming the face with the face-slimming tool;
removing dark circles under the eyes with the dark-circle removal tool; and
plumping the lips with the lip-plumping tool.
Optionally, sending the beautified opposite-end video to the other party's terminal includes:
storing each beautified frame of the opposite-end video in its acquisition order;
compressing all frames of the beautified opposite-end video according to a preset video compression algorithm;
converting the compressed video into an electrical signal in analog form, converting the analog signal into a digital signal, and performing signal processing on the digital signal; and
sending the processed digital signal to the other party's terminal.
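Putting the method's steps together, the per-frame control flow is: beautify both streams, display one locally, send the other to the peer. The sketch below uses stub `display`/`send` callables and a placeholder `beautify`; none of these names come from the patent.

```python
def beautify(frame):
    # Placeholder effect so the two paths are observable in the test.
    return "beautified:" + frame

def handle_frames(preview_frame, far_end_frame, display, send):
    """One iteration of the claimed method for a captured frame pair."""
    display(beautify(preview_frame))      # local preview path
    send(beautify(far_end_frame))         # network path to the peer

shown, sent = [], []
handle_frames("p0", "f0", shown.append, sent.append)
```

Note the symmetry: the same beautification is applied to both streams, so the local preview shows the caller exactly what the other party will receive.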
In summary, the invention proposes a video image processing apparatus and method. The apparatus includes an acquisition module, a beautification module, and a processing module. The acquisition module collects every frame of the preview video and of the opposite-end video; the preview video is the video shown for preview on the current terminal's display interface during a call, and the opposite-end video is the video to be shown on the display interface of the other party's terminal. The beautification module beautifies each collected frame. The processing module displays the beautified preview video on the current terminal's display interface and sends the beautified opposite-end video to the other party's terminal. With this scheme, the display effect of the video images can be improved and the user experience enhanced.
Brief description of the drawings
Fig. 1 is a hardware block diagram of a mobile terminal that can implement the embodiments of the invention;
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a block diagram of the video image processing apparatus of an embodiment of the invention;
Fig. 4 is a flowchart of the video image processing method of an embodiment of the invention;
Fig. 5 is a schematic diagram of the video image processing method of an embodiment of the invention.
The realization of the objects, functional features, and advantages of the invention will be further explained with reference to the embodiments and the accompanying drawings.
Detailed description of the embodiments
It should be understood that the specific embodiments described here are intended only to explain the invention, not to limit it.
A mobile terminal that can implement the embodiments of the invention is now described with reference to the drawings. In the following description, suffixes such as "module", "component", or "unit" are used merely to aid the explanation of the invention and carry no special meaning in themselves; "module" and "component" may therefore be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the terminals described in the invention can include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal. Those skilled in the art will understand, however, that apart from elements used specifically for mobile purposes, the structure according to the embodiments of the invention can also be applied to terminals of the fixed type.
Fig. 1 is a schematic hardware diagram of a mobile terminal that can implement the embodiments of the invention.
The mobile terminal 100 can include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and so on. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit can include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114, and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel can include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and sends broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and sends them on to the terminal. The broadcast signal can include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and so on, and can further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast-related information can also be provided via a mobile communication network, in which case it can be received by the mobile communication module 112. The broadcast signal can exist in various forms; for example, it can exist in the form of an electronic program guide (EPG) of digital multimedia broadcasting (DMB), an electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H), and so on. The broadcast receiving module 111 can receive broadcast signals using various types of broadcast systems. In particular, it can receive digital broadcasts using digital broadcast systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the MediaFLO (forward link only) data broadcast system, and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be configured for any broadcast system that provides broadcast signals, in addition to the digital broadcast systems above. Broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 can be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 sends radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point or a node B), an external terminal, and a server. Such radio signals can include voice call signals, video call signals, or various types of data sent and/or received for text and/or multimedia messaging.
The wireless Internet module 113 supports wireless Internet access for the mobile terminal. This module can be internally or externally coupled to the terminal. The wireless Internet access technologies involved can include WLAN (wireless LAN, Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and so on.
The short-range communication module 114 supports short-range communication. Examples of short-range communication technology include Bluetooth™, radio-frequency identification (RFID), the Infrared Data Association (IrDA) standard, ultra-wideband (UWB), ZigBee™, and so on.
The location information module 115 checks or obtains location information for the mobile terminal. Its typical example is the GPS (global positioning system) module. Using current technology, the GPS module 115 calculates range information from three or more satellites together with precise time information and applies triangulation to the calculated information, thereby accurately determining three-dimensional position as longitude, latitude, and altitude. At present, the common method uses three satellites for the position and time calculation and one further satellite to correct the error in the computed position and time. In addition, by continuously calculating the current position in real time, the GPS module 115 can also derive velocity information.
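The range-plus-triangulation idea above can be illustrated in 2D: with three known anchor points and exact distances to each, subtracting one circle equation from the other two gives a small linear system for the unknown position. This is illustrative only; real GPS works in 3D and also solves for the receiver's clock bias, which is why a fourth satellite is needed in practice.

```python
def trilaterate(anchors, dists):
    """2D position from three anchor points (x, y) and exact ranges.

    Subtracting circle equation 1 from equations 2 and 3 removes the
    quadratic terms, leaving a 2x2 linear system a*x + b*y = c.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1              # nonzero if anchors not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```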
The A/V input unit 120 receives audio or video signals and can include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture device in video capture mode or image capture mode, and the processed image frames can be displayed on the display unit 151. The image frames processed by the camera 121 can be stored in the memory 160 (or another storage medium) or transmitted via the wireless communication unit 110; two or more cameras 121 may be provided depending on the structure of the mobile terminal. The microphone 122 can receive sound (audio data) in operating modes such as phone call mode, recording mode, and voice recognition mode, and can process such sound into audio data. In phone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the mobile communication module 112. The microphone 122 can implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference produced while receiving and sending audio signals.
The user input unit 130 can generate key input data from commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and can include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, and so on caused by being touched), a jog wheel, a jog switch, and the like. In particular, when the touch pad is superimposed on the display unit 151 as a layer, a touch screen can be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., its open or closed state), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the acceleration or deceleration and direction of movement of the mobile terminal 100, and so on, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type phone, the sensing unit 140 can sense whether the phone is slid open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 is supplying power and whether the interface unit 170 is coupled to an external device. The sensing unit 140 can include a proximity sensor 141, which is described below in connection with the touch screen.
The interface unit 170 serves as an interface through which at least one external device can connect to the mobile terminal 100. For example, external devices can include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The identification module can store various information for verifying the user of the mobile terminal 100 and can include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and so on. In addition, a device with an identification module (hereinafter an "identifying device") can take the form of a smart card, so the identifying device can be connected to the mobile terminal 100 via a port or other connection means. The interface unit 170 can receive input (e.g., data, information, power) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 can serve as a path through which power from the cradle is supplied to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. Various command signals or power input from the cradle may serve as signals for recognizing whether the mobile terminal is correctly mounted on the cradle. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals) in a visual, audible, and/or tactile manner. The output unit 150 can include a display unit 151, an audio output module 152, an alarm unit 153, and so on.
The display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in phone call mode, the display unit 151 can display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (e.g., text messaging, multimedia file downloading). When the mobile terminal 100 is in video call mode or image capture mode, the display unit 151 can display captured images and/or received images, and a UI or GUI showing the video or image and related functions.
Meanwhile, when the display unit 151 and the touch pad are superimposed on each other as layers to form a touch screen, the display unit 151 can serve as both an input device and an output device. The display unit 151 can include at least one of a liquid crystal display (LCD), a thin-film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and so on. Some of these displays can be configured to be transparent so that the user can see through them from the outside; these can be called transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. Depending on the desired implementation, the mobile terminal 100 can include two or more display units (or other display devices); for example, the mobile terminal can include an external display unit (not shown) and an internal display unit (not shown). The touch screen can detect touch input pressure as well as touch input position and touch input area.
When the mobile terminal is in modes such as call signal reception mode, call mode, recording mode, voice recognition mode, or broadcast reception mode, the audio output module 152 can convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 can provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound or a message reception sound). The audio output module 152 can include a speaker, a buzzer, and so on.
The alarm unit 153 can provide output to notify the mobile terminal 100 of the occurrence of an event. Typical events include call reception, message reception, key signal input, and touch input. Besides audio or video output, the alarm unit 153 can provide output in different ways to signal the occurrence of an event. For example, the alarm unit 153 can provide output in the form of vibration: when a call, a message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (i.e., vibration) to notify the user. By providing such tactile output, the user can recognize the occurrence of various events even when the phone is in the user's pocket. The alarm unit 153 can also provide event notification output via the display unit 151 or the audio output module 152.
The memory 160 can store software programs for the processing and control operations performed by the controller 180, or temporarily store data that has been or will be output (e.g., a phonebook, messages, still images, video). Moreover, the memory 160 can store data about the various modes of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 can include at least one type of storage medium, including flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disc, and so on. Moreover, the mobile terminal 100 can cooperate with a network storage device that performs the storage function of the memory 160 over a network connection.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls, and so on. In addition, the controller 180 can include a multimedia module 1810 for reproducing (or playing back) multimedia data; the multimedia module 1810 can be built into the controller 180 or constructed separately from it. The controller 180 can perform pattern recognition processing to recognize handwriting input or picture-drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external or internal power under the control of the controller 180 and provides the appropriate power required to operate each element and component.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180. For a software implementation, embodiments such as processes or functions may be implemented with separate software modules that allow at least one function or operation to be performed. Software code can be implemented by a software application (or program) written in any suitable programming language, and the software code may be stored in the memory 160 and executed by the controller 180.
So far, mobile terminals have been described according to their functions. Hereinafter, for the sake of brevity, a slide-type mobile terminal will be described as an example among the various types of mobile terminals, such as folder-type, bar-type, swing-type, and slide-type mobile terminals. Accordingly, the present invention can be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
The mobile terminal 100 as shown in Fig. 1 may be constructed to operate with communication systems that transmit data via frames or packets, including wired and wireless communication systems as well as satellite-based communication systems.
A communication system in which a mobile terminal according to the present invention is operable will now be described with reference to Fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, air interfaces used by communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), global system for mobile communications (GSM), and the like. As a non-limiting example, the following description relates to a CDMA communication system, but such teachings apply equally to other types of systems.
Referring to Fig. 2, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BSs) 270, base station controllers (BSCs) 275, and a mobile switching center (MSC) 280. The MSC 280 is configured to form an interface with a public switched telephone network (PSTN) 290. The MSC 280 is also configured to form an interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul lines. The backhaul lines may be constructed according to any of several known interfaces, including, for example, E1/T1, ATM, IP, PPP, frame relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in Fig. 2 may include a plurality of BSCs 275.
Each BS 270 may serve one or more sectors (or regions), with each sector covered by an omnidirectional antenna or an antenna pointing in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be constructed to support a plurality of frequency assignments, with each frequency assignment having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BSs 270 may also be referred to as base transceiver subsystems (BTSs) or by other equivalent terms. In such a case, the term "base station" may be used to collectively refer to a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site". Alternatively, the individual sectors of a particular BS 270 may be referred to as a plurality of cell sites.
As shown in Fig. 2, a broadcasting transmitter (BT) 295 transmits broadcast signals to the mobile terminals 100 operating within the system. The broadcast receiving module 111 as shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signals transmitted by the BT 295. In Fig. 2, several global positioning system (GPS) satellites 300 are shown. The satellites 300 help locate at least one of the plurality of mobile terminals 100.
In Fig. 2, a plurality of satellites 300 are depicted, but it is understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 as shown in Fig. 1 is typically configured to cooperate with the satellites 300 to obtain the desired positioning information. Instead of, or in addition to, GPS tracking techniques, other technologies capable of tracking the location of the mobile terminal may be used. In addition, at least one of the GPS satellites 300 may selectively or additionally handle satellite DMB transmissions.
As one typical operation of the wireless communication system, the BSs 270 receive reverse-link signals from various mobile terminals 100. The mobile terminals 100 typically engage in calls, messaging, and other types of communications. Each reverse-link signal received by a particular base station 270 is processed within that particular BS 270. The resulting data is forwarded to an associated BSC 275. The BSC provides call resource allocation and mobility management functionality, including the coordination of soft handoff procedures between the BSs 270. The BSCs 275 also route the received data to the MSC 280, which provides additional routing services for forming an interface with the PSTN 290. Similarly, the PSTN 290 forms an interface with the MSC 280, the MSC forms an interface with the BSCs 275, and the BSCs 275 in turn control the BSs 270 to transmit forward-link signals to the mobile terminals 100.
Based on the above-described optional mobile terminal hardware structure and communication system, various embodiments of the method of the present invention are now proposed.
With the rapid development of terminal application technologies, video calling has increasingly become a common form of call in users' daily communication. However, with current video call technology, the video images displayed on the terminals of both call parties can appear dim due to factors such as lighting or viewing angle, giving a poor visual experience; facial images in particular appear dull and lackluster in skin tone, seriously affecting the user's appearance. For users who pay special attention to their personal image, such as female users, this is hard to accept. An effective solution is therefore urgently needed to improve the display effect of video images and improve the user experience.
As shown in Fig. 3, a first embodiment of the present invention proposes a video image processing device 01. The device includes: an acquisition module 02, a beautification module 03, and a processing module 04. The video image processing device 01 of the embodiment of the present invention can be applied in any terminal with video capability and can process any video form, including call video; for example, it can be applied to the video processing in a VoLTE video call.
The acquisition module 02 is configured to separately acquire each frame of video image in the preview video and the peer video. Here, the preview video refers to the video in a call video displayed for preview on the display interface of the current terminal, and the peer video refers to the video to be displayed on the display interface of the peer terminal.
In the embodiment of the present invention, to improve the display effect of video images, a processing scheme that applies beautification to the video images is proposed. That is, a camera captures the user's video image through its camera function and sends the captured video image to the video image processing device 01 of the embodiment. Since a VoLTE video call involves both the preview video displayed for preview on the display interface of the current terminal and the peer video to be displayed on the display interface of the peer terminal, in order to achieve a good beautification effect and give both parties to the call a good visual experience, beautification processing is performed on both the preview video and the peer video. Before this, the acquisition module 02 of the embodiment needs to separately acquire each frame of video image in the preview video and the peer video. Every video is composed of many frames of video images, and each frame is a picture; if a good display effect is to be obtained throughout the whole video, beautification processing must be performed on every frame in the video. Therefore, the acquisition module 02 needs to keep pace with the video transmission speed of the camera, acquiring each frame of video image in the video stream in the sequential playback order of the frames. Whether the acquisition module 02 first acquires the preview video, first acquires the peer data, or acquires both simultaneously is not specifically limited, and can be set differently according to different application scenarios.
It should be further noted that the above-mentioned preset camera may be a built-in camera pre-installed inside the terminal, or an independent camera temporarily connected by the user; the camera's physical dimensions, pixel count, transmission speed, and so on are not limited. Any camera capable of implementing the scheme of the embodiment of the present invention falls within the protection scope of the present invention.
The beautification module 03 is configured to perform beautification processing on each acquired frame of video image.
In the embodiment of the present invention, after the acquisition module 02 has collected each frame of video image in the preview video and the peer video, the beautification module 03 can perform the corresponding beautification processing on each obtained frame.
Since the embodiment of the present invention applies beautification mainly to facial images, before the beautification module 03 performs beautification processing on the acquired video images, face recognition needs to be performed on each parsed frame using a preset face recognition algorithm. Beautification processing is performed only on video images in which a facial image is recognized; video images in which no facial image is recognized can simply be ignored or skipped without beautification processing. Specifically, the face recognition work on each frame of video image can be implemented by the face recognition module 05, the judgment module 06, and the determination module 07 in the following flow.
Optionally, the device also includes: a face recognition module 05, a judgment module 06, and a determination module 07.
The face recognition module 05 is configured to perform face recognition on each frame of video image according to a preset face recognition algorithm.
In the embodiment of the present invention, to avoid mixing up the order of the frames while performing face recognition on the video images, the frames are obtained in the sequential playback order of the frames in the video before face recognition is performed on them, and after face recognition, the recognized video images are still temporarily stored in the sequential playback order of the frames in the video.
In the embodiment of the present invention, the preset face recognition algorithm may be any currently available, implementable face recognition algorithm; the specific algorithm is not limited. Optionally, the face recognition algorithm may be the principal component analysis (PCA) face recognition algorithm.
The PCA face recognition algorithm is also known as the "eigenface technique". Its basic idea is to find the basic elements of the facial image distribution (eyes, cheeks, jaw, lips, etc.) — that is, the eigenvectors of the covariance matrix of the facial image sample set (these eigenvectors are called eigenfaces) — and to characterize facial images approximately with them. The eigenvectors of the covariance matrices of the eye, cheek, and jaw sample sets are called "feature sub-faces". The subspace spanned by the "feature sub-faces" in the corresponding image space is called the "sub-face space". The projection distance of a test image window onto the "sub-face space" is computed; if the window satisfies a preset threshold comparison condition, it is judged to be a face.
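The eigenface test described above — project a window onto the face subspace and threshold the residual distance — can be sketched as follows. This is a minimal illustration, not the patent's implementation; the sample data, `threshold`, and function names are hypothetical.

```python
import numpy as np

def fit_eigenfaces(samples, k):
    """Build an 'eigenface' basis from flattened face samples (n_samples x n_pixels)."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    # Eigenvectors of the sample covariance matrix = right singular vectors of the
    # centered sample matrix, so an SVD gives the eigenfaces directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]  # top-k eigenfaces (orthonormal rows)

def projection_distance(window, mean, eigenfaces):
    """Distance between a test window and its reconstruction in the sub-face space."""
    centered = window - mean
    coords = eigenfaces @ centered           # project onto the eigenface basis
    reconstruction = eigenfaces.T @ coords   # back-project into image space
    return np.linalg.norm(centered - reconstruction)

def is_face(window, mean, eigenfaces, threshold):
    # A small projection distance means the window lies close to the face subspace.
    return projection_distance(window, mean, eigenfaces) <= threshold
```

A window drawn from the same low-dimensional distribution as the training faces reconstructs almost perfectly, while an arbitrary image leaves a large residual — which is exactly the threshold comparison condition the text describes.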
The judgment module 06 is configured to judge, according to the recognition result of the face recognition module, whether a facial image exists in the frame of video image currently being recognized.
In the embodiment of the present invention, when the face recognition module 05 recognizes a face in the frame of video image currently being recognized, the judgment module 06 determines that a facial image exists in that frame; when the face recognition module 05 does not recognize a face in the frame currently being recognized, the judgment module 06 determines that no facial image exists in that frame.
The determination module 07 is configured to, when the judgment module judges that a facial image exists in the current frame of video image, order the beautification module to perform beautification processing on the current frame, and, when the judgment module judges that no facial image exists in the current frame, to ignore the current frame of video image.
In the embodiment of the present invention, once the judgment module 06 has given its judgment result, the corresponding processing can follow. That is, when the judgment module 06 judges that a facial image exists in the current frame of video image, the determination module 07 can activate the beautification module 03 and order it to perform beautification processing on the current frame. Under this scheme, the beautification module 03 need not always remain in a working state; to save terminal resources, when beautification is not needed, the beautification module 03 can be set to a standby state or a preset low-power state, further reducing the consumption of terminal resources. When the judgment module 06 judges that no facial image exists in the current frame of video image, the determination module 07 ignores the current frame and proceeds to the processing flow of the next frame.
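The per-frame gating performed by modules 05-07 — beautify a frame only when a face is found, otherwise pass it through — can be expressed as a short loop. The `detect_face` and `beautify` callables here are hypothetical stand-ins for the recognition and beautification modules.

```python
def process_stream(frames, detect_face, beautify):
    """Gate beautification on face detection, frame by frame, preserving order.
    Frames without a recognized face pass through untouched (they are 'ignored'
    by the beautification step, not dropped from the video)."""
    out = []
    for frame in frames:              # frames arrive in playback order
        if detect_face(frame):        # judgment step: does a face exist?
            out.append(beautify(frame))   # determination step: activate beautification
        else:
            out.append(frame)         # no face: skip straight to the next frame
    return out
```

Note that skipped frames are still emitted, so the output video keeps the same frame count and ordering as the input.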
In the embodiment of the present invention, after the above face recognition module 05, judgment module 06, and determination module 07 have carried out this series of recognition work, it can be determined whether the frame of video image currently being processed needs beautification processing. For video images that do need beautification processing, the following methods can be used to implement the beautification scheme of the embodiment of the present invention.
Optionally, the beautification module 03 uses one or more beautification tools to perform beautification processing on the parsed video image, comprising steps S101-S103:
S101: respectively identify preset facial organs from the recognized facial image.
In the embodiment of the present invention, a face comprises facial features and organs such as the cheeks, eyebrows, eyes, nose, and mouth, and so-called beautification consists of beautifying and perfecting treatments applied to the face and its organs. Therefore, after a facial image has been recognized by the face recognition algorithm, the facial organs still need to be further determined from the facial image.
In the embodiment of the present invention, which facial organs need to be identified can be set by each user according to their different needs or different application scenarios, and is not specifically limited here.
Optionally, the facial organs include: the face, the eyes, and the lips. For example, a user who always has dark circles under the eyes can set the eyes as the preset facial organ to be identified, so that the dark circles around the eye region are removed during beautification processing; a user whose skin tone is relatively dark can set the face as the facial organ to be identified, so that the skin color is whitened during beautification processing.
In the embodiment of the present invention, the method of respectively identifying the preset facial organs from the recognized facial image can likewise be completed using the above-mentioned PCA face recognition algorithm, or can be completed by the following method: acquire the feature regions of the facial parts on the recognized facial image and parse the feature region data of these regions; compare the parsed feature region data with the basic feature data of the different preset facial organs; and take the facial organ corresponding to the basic feature data whose difference from the feature region data is less than or equal to a preset difference threshold as the identified facial organ.
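The compare-and-threshold matching just described can be sketched as a nearest-template lookup. The template vectors and threshold below are purely illustrative assumptions; the patent does not specify what the "basic feature data" look like.

```python
import numpy as np

def match_organ(region_features, organ_templates, threshold):
    """Compare a parsed feature-region vector against preset per-organ basic
    feature data. Return the organ whose difference is smallest, provided that
    difference is within the preset threshold; otherwise return None."""
    best_name, best_diff = None, float("inf")
    for name, template in organ_templates.items():
        diff = np.linalg.norm(region_features - template)  # difference value
        if diff < best_diff:
            best_name, best_diff = name, diff
    return best_name if best_diff <= threshold else None
```

Regions that resemble no preset organ closely enough fall outside the threshold and are simply not matched, mirroring the "less than or equal to a preset difference threshold" condition in the text.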
S102: retrieve a preset beautification processing package; the beautification processing package contains one or more beautification tools.
In the embodiment of the present invention, after the one or more facial organs preset by the user have been identified through step S101, the corresponding beautification processing can be performed for each identified facial organ. The specific processing here is completed by the various beautification tools in the preset beautification processing package.
Optionally, the beautification tools include: a whitening tool, a face-thinning tool, a dark-circle removal tool, and a lip-plumping tool.
The above-mentioned preset beautification processing package may contain multiple beautification tools with different beautification functions. Besides the above-mentioned whitening tool, face-thinning tool, dark-circle removal tool, and lip-plumping tool, it may also include an acne-removal tool, a wrinkle-removal tool, a blemish-removal tool, an eyebrow-reshaping tool, an eye-enlargement tool, and various other beautification tools too numerous to list here. In actual use, which tools are active and which are inactive can be set by the user according to personal needs or different application scenarios, so as to avoid every beautification tool in the preset beautification processing package entering a working state while the video image processing device of the embodiment processes the user's video images. Different users, or different application scenarios, may require different beautification tools; if all the beautification tools in the beautification processing package entered a working state, the various beautification treatments performed might not be what the user wants, and this state would also impose an unnecessary waste of resources on the user's terminal. The scheme of the embodiment of the present invention can avoid wasting terminal resources and improve the user experience.
S103: use the one or more beautification tools to perform the corresponding beautification processing on the facial organs.
In the embodiment of the present invention, after the facial organs have been identified through step S101 and the corresponding beautification tools retrieved through step S102, the retrieved beautification tools can be used to perform beautification processing on the corresponding facial organs.
Optionally, using the retrieved one or more beautification tools to perform the corresponding beautification processing on the facial organs includes:
1. Using the whitening tool to perform whitening processing on the face.
2. Using the face-thinning tool to perform face-thinning processing on the face.
3. Using the dark-circle removal tool to remove dark circles from the eyes.
4. Using the lip-plumping tool to perform lip-plumping processing on the lips.
In other embodiments of the present invention, other beautification tools may also be used to perform different beautification processing on different organs, too numerous to list here. It should be noted that the above-mentioned beautification tools can each be implemented by individual functional software, or by integrated software having multiple functions.
Optionally, the above-mentioned whitening tool and dark-circle removal tool can be implemented by skin smoothing. Skin smoothing uses the layers, masks, channels, tools, filters, or other features of picture-editing software such as Photoshop to eliminate spots, flaws, mottling, and the like from the skin of the person in a picture. Smoothing a person's face with Photoshop can make the face finer and smoother, with a clearer outline.
Optionally, the preset skin smoothing algorithms include: a single-channel skin smoothing algorithm and a three-channel skin smoothing algorithm based on an edge-preserving filter.
In the embodiment of the present invention, the channel-based skin smoothing algorithm comprises the following steps S201-S206:
S201: open the image, enter the Channels panel, and duplicate the blue channel.
S202: apply the Filter > Other > High Pass filter to the copy of the blue channel.
S203: sample a nearby color with the Eyedropper tool and then cover the parts to be protected with the Brush, including the shadow details of the eyes, nose, eyebrows, mouth, and hair.
S204: apply Image > Calculations to generate an Alpha 1 channel, and set the parameters on this channel.
S205: load the selection by a preset operation (Ctrl-clicking the Alpha 1 channel) or a preset instruction, and invert the selection by a preset operation (e.g., Shift+Ctrl+I). Return to the Layers palette, click to activate the background layer, then create a Curves adjustment layer and adjust the curve while observing the change in the image. Do not try to remove the spots completely at this point — just weaken them significantly, because the operation will be repeated once more below.
S206: stamp the visible layers by a preset operation (the Shift+Ctrl+Alt+E shortcut) or a preset instruction, and repeat the above operations on the result. Set the subsequent operating parameters by observation; the principle to adhere to is to make only slight adjustments, so as to keep the image's hue and tone balanced and achieve a better spot-removal effect. For example, if some yellow mottling is found in dark areas, including facial hair, take the Sponge tool from the toolbox, set its mode to Desaturate, and carefully wipe the mottling with a small value; then use the Brush tool and paint with a nearby sampled color (brush in Color mode).
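The manual recipe above flattens skin texture while protecting edges and facial detail; the "edge-preserving filter" mentioned earlier is the algorithmic counterpart. As a rough, hypothetical sketch (not the patent's algorithm), a bilateral-style filter smooths flat regions while leaving strong intensity edges — eyes, lips, hairlines — intact:

```python
import numpy as np

def bilateral_smooth(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing of a grayscale image with values in [0, 1].
    Each pixel becomes a weighted mean of its neighbours, with weights that
    fall off with both spatial distance and intensity difference, so skin
    texture is flattened while strong edges survive. Slow reference version."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # fixed spatial kernel
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weights: neighbours with very different intensity contribute little,
            # which is what preserves edges across, e.g., a lip or eye boundary.
            rng_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            weights = spatial * rng_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out
```

On a noisy image with a sharp step edge, the filter reduces the noise on the flat regions while the step itself stays sharp — the behaviour a skin-smoothing pass needs.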
The processing module 04 is configured to display the beautification-processed preview video on the display interface of the current terminal, and to send the beautification-processed peer video to the peer terminal.
In the embodiment of the present invention, after the beautification module 03 has performed beautification processing on each acquired frame of video image, video images with a better display effect are obtained. The next step is to display the beautification-processed preview video on the display interface of the current terminal (the specific display method is not described further here), and to send the processed peer video to the peer terminal so that the video can be displayed there.
Optionally, the processing module 04 sends the beautification-processed peer video to the peer terminal through steps S301-S304:
S301: store each frame of video image of the beautification-processed peer video according to the acquisition order of the frames.
In the embodiment of the present invention, so that the video still has its original continuity after beautification processing, each beautification-processed frame of video image needs to be stored according to the order in which the frames were acquired. That is, in whatever order the acquisition module 02 acquired the frames sent by the preset camera at the start of implementing the scheme, the processing module 04 saves the beautification-processed frames in that same order — in other words, the frames are saved in the sequential playback order of the frames in the video.
S302: compress all the video images of the beautification-processed peer video according to a preset video compression algorithm.
In the embodiment of the present invention, if the acquired video images are to be sent over the Internet for display on a computer elsewhere, the video images must be compressed — using common compression formats such as H.261, JPEG, or MPEG — or the bandwidth required for transmission would become very large. For example, when playing a film, the player shows transmission speeds of 250 kbps, 400 kbps, 1000 kbps, and so on; the higher the picture quality, the higher this speed. Video transmission from a camera follows the same principle: if the camera resolution is set to 640×480, each captured picture is about 50 kb; at 30 frames per second, the speed required for the camera to transmit video is 50 × 30/s = 1500 kbps = 1.5 Mbps. In real life, however, the resolution people generally use for Internet video chat is 320×240 or even lower, with a frame rate of 24 frames per second. In other words, with a video transmission rate below 300 kbps, people can already carry out reasonably smooth video chat. If a higher-compression video format such as MPEG-1 is used, the transfer rate can be reduced to below 200 kbps. This is the network transmission speed required by the camera in ordinary video chat.
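The back-of-envelope bitrate figures in this passage can be reproduced directly. Following the text's own convention, the 50 kb per-frame size is treated as kilobits, which is what makes 50 × 30 come out to 1.5 Mbps; the function name is illustrative.

```python
def raw_video_rate_kbps(frame_size_kb, fps):
    """Uncompressed camera bitrate: per-frame size (in kilobits, as the
    surrounding text assumes) times the number of frames per second."""
    return frame_size_kb * fps

# 640x480 capture at ~50 kb per frame and 30 fps, as in the text:
rate = raw_video_rate_kbps(50, 30)
print(rate)           # 1500  (kbps)
print(rate / 1000)    # 1.5   (Mbps)
```

The same kind of arithmetic covers the later 5:1 camera compression example: 30 MB of raw footage divided by 5 gives the 6 MB compressed size.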
Video compression is the core of video processing. According to whether it is performed in real time, it can be divided into non-real-time compression and real-time compression, and video transmission (such as QQ instant video chat) requires real-time compression. Video compression is lossy compression. Generally speaking, the compression ratio of video compression is very high; such high ratios are achievable because video images contain a great deal of redundancy in time and space. So-called temporal redundancy means that the pixel values at the same positions in two adjacent frames are quite similar and highly correlated — especially for still scenes, where two frames may even be identical — and for moving images, certain computations (motion estimation) show that they too are highly correlated. Spatial correlation means that within the same frame, two adjacent pixels also possess a certain correlation. These correlations are the founding assumptions of video compression algorithms; in other words, if these two conditions are not satisfied (pure white-noise images, images with frequent scene switches, etc.), the effect of video compression will be very poor. The key algorithm for removing temporal correlation is motion estimation, which finds the position in the previous frame that best matches the current image macroblock; very often, only this relative coordinate needs to be recorded, which saves a large number of code words and improves the compression ratio. In video compression algorithms, motion estimation is always the most critical and most central part. Removing spatial correlation is achieved through the discrete cosine transform (DCT), which maps the data from the time domain onto the frequency domain, after which the DCT coefficients are quantized. Essentially all lossy compression involves quantization, whose effect in improving the compression ratio is the most pronounced.
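The "record only the relative coordinate" idea behind motion estimation can be sketched as an exhaustive block-matching search using the sum of absolute differences (SAD). This is a generic textbook sketch under stated assumptions — a full-search window, grayscale frames — not the patent's or any codec's specific search.

```python
import numpy as np

def best_motion_vector(prev, cur_block, top, left, search=4):
    """Exhaustive block matching: find the offset (dy, dx), within +/-`search`
    pixels of the block's position (top, left) in the current frame, at which
    `cur_block` best matches the previous frame `prev`, scored by the sum of
    absolute differences (SAD). Only this relative coordinate (plus a small
    residual) would then need to be coded, rather than the block's pixels."""
    bh, bw = cur_block.shape
    h, w = prev.shape
    best, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > h or x + bw > w:
                continue  # candidate block would fall outside the frame
            sad = np.abs(prev[y:y + bh, x:x + bw] - cur_block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad
```

If the current block is simply the previous frame's content displaced by a couple of pixels, the search recovers that displacement with zero residual — the case in which motion estimation saves the most code words.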
The original image file is large and must be compressed so that it can be transmitted quickly and played smoothly, and the compression ratio is precisely the parameter that measures the degree of image compression. In general, the compression ratio of a camera is mostly 5:1. That is to say, if 30 seconds of images occupies 30 MB before compression, then after the images are compressed at the camera's 5:1 compression ratio, their size becomes 6 MB.
Optionally, the preset video compression algorithms include: the moving still-image (frame-by-frame) compression technique M-JPEG, the Moving Picture Experts Group standards MPEG, H.264, Wavelet (wavelet compression), the Joint Photographic Experts Group standard JPEG 2000, and the digital audio/video coding and decoding technology AVS.
S303: convert all the compressed video images into an electrical signal in analog form, convert the analog signal into a digital signal, and perform signal processing on the digital signal.
In the embodiment of the present invention, the conversion of video images into electrical signals, the conversion of analog signals into digital signals, and the later-stage processing of the digital signals can be implemented by any currently feasible method. Later-stage digital signal processing refers primarily to optimizing the digital signal parameters of the image through a series of complex mathematical algorithm operations, and is mainly implemented by a digital signal processing (DSP) chip.
S304: send the digital signal that has undergone signal processing to the peer terminal.
After the video images have been processed through the above steps, the beautified video images can be sent to the video display ends. The video display ends here include the display terminals of both parties of the call video — for example, the mobile phones, computers, iPads, etc., of both parties. For the display terminal of the local end, the uncompressed preview video is sent directly to the display interface device for display; for the display terminal of the other party, the digital signal of the compressed video images first needs to be sent to the other party's display terminal, and that terminal can display the video only after receiving the digital signal and decompressing it.
It should be noted that the digital signal of the video images can be sent to the other party's display terminal via any wired or wireless transmission means — for example, broadband, 3G, 4G, etc. This is not specifically limited here.
At this point, all of the basic features of the present scheme have been explained. It should be noted that the above content is only a specific embodiment of the present invention and cannot be taken as the final scheme of the present invention; in other embodiments, other implementations may also be used, and every embodiment that is the same as or similar to the embodiments of the present invention, as well as any combination of the basic features of the present scheme, falls within the protection scope of the present invention.
To achieve the above object, the present invention also provides a video image processing method; as shown in Fig. 4, the method comprises steps S401-S403:
S401: respectively collect each frame of video image in the preview video and the peer video. Here, the preview video refers to the video previewed on the display interface of the current display terminal during a video call, and the peer video refers to the video to be displayed on the display interface of the peer terminal.
S402: perform beautification processing on each collected frame of video image.
Optionally, the method further includes:
performing face recognition on each frame of video image according to a preset face recognition algorithm;
judging, according to the recognition result, whether a facial image exists in the currently recognized frame;
when a facial image is judged to exist in the currently recognized frame, performing beautification processing on that frame; when no facial image is judged to exist in it, skipping that frame.
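The optional per-frame gating described above (recognize faces, beautify only when one is found, otherwise pass the frame through) can be sketched as follows; the detector and beautifier here are toy stand-ins, since the patent does not name a specific face recognition algorithm:

```python
def process_frame(frame, detect_faces, beautify):
    """Per-frame gate: run face recognition on the frame, apply
    beautification only when a facial image is found, and otherwise
    return the frame untouched (i.e. skip it)."""
    faces = detect_faces(frame)       # preset face recognition algorithm
    if faces:                         # a facial image exists in this frame
        return beautify(frame, faces)
    return frame                      # no face: no beautification

# Toy stand-ins for the detector and the beautification step.
frames = ["face:alice", "landscape", "face:bob"]
detect = lambda f: [f.split(":")[1]] if f.startswith("face:") else []
beautify = lambda f, faces: f + "+beautified"
out = [process_frame(f, detect, beautify) for f in frames]
```

In practice `detect_faces` would wrap a real detector (e.g. a cascade or CNN model) returning face regions rather than strings.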
Optionally, performing beautification processing on each collected frame of video image includes:
identifying preset facial features from each recognized facial image;
retrieving a preset beautification processing package, the package containing one or more beautification tools;
applying the one or more beautification tools to perform the corresponding beautification processing on the facial features.
Optionally,
the facial features include: the face, the eyes, and the lips;
the beautification tools include: a whitening tool, a face-slimming tool, a dark-circle removal tool, and a lip-plumping tool.
Optionally, applying the one or more beautification tools to perform the corresponding beautification processing on the facial features includes:
applying the whitening tool to whiten the face;
applying the face-slimming tool to slim the face;
applying the dark-circle removal tool to remove dark circles under the eyes;
applying the lip-plumping tool to plump the lips.
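The feature-to-tool dispatch just described can be sketched as a lookup from each preset facial feature to its tools. The package contents mirror the four tools named above, but tool application is modeled here as string tagging rather than real image filtering, and all names are illustrative:

```python
# Hypothetical beautification processing package: each preset facial
# feature maps to the tools applied to it (the face gets two).
BEAUTY_PACKAGE = {
    "face": ["whitening", "face-slimming"],
    "eyes": ["dark-circle-removal"],
    "lips": ["lip-plumping"],
}

def apply_package(features, package=BEAUTY_PACKAGE):
    # Apply every tool registered for a feature, in order, to that
    # feature's image region.
    result = {}
    for name, region in features.items():
        for tool in package.get(name, []):
            region = f"{tool}({region})"
        result[name] = region
    return result

processed = apply_package({"face": "F", "eyes": "E", "lips": "L"})
```

A real implementation would replace each tag with an image operation on the feature's pixel region (e.g. skin smoothing, local warping).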
S403: display the beautified preview video on the display interface of the current display terminal, and send the beautified peer video to the peer terminal.
Optionally, sending the beautified peer video to the peer terminal includes:
storing each beautified frame of the peer video according to the frames' acquisition order;
compressing all the video images of the beautified peer video according to a preset video compression algorithm;
converting the compressed video images into an electrical signal in analog form, converting the analog signal into a digital signal, and performing signal processing on the digital signal;
sending the processed digital signal to the peer terminal.
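The software side of the optional sending steps above (buffer frames in acquisition order, then compress) can be sketched as follows. `zlib` stands in for the unspecified "preset video compression algorithm", and the analog/digital conversion and signal-processing stages are hardware steps not modeled here:

```python
import zlib

def prepare_peer_stream(frames):
    # Buffer beautified frames in acquisition order, then compress the
    # buffered data into the byte stream that will be converted into a
    # signal and sent to the peer terminal.
    buffered = list(frames)            # preserve acquisition order
    raw = b"".join(buffered)
    return zlib.compress(raw)

stream = prepare_peer_stream([b"frame1", b"frame2", b"frame3"])
restored = zlib.decompress(stream)     # what the peer terminal does
```

A real video codec (H.264, H.265, ...) would exploit inter-frame redundancy rather than compressing the concatenated bytes, but the store-compress-send ordering is the same.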
The present invention proposes a video image processing apparatus and method. The apparatus includes: an acquisition module, a beautification module, and a processing module. The acquisition module respectively collects each frame of video image in the preview video and the peer video, where the preview video refers to the video previewed on the display interface of the current display terminal during a video call, and the peer video refers to the video to be displayed on the display interface of the peer terminal. The beautification module performs beautification processing on each collected frame of video image. The processing module displays the beautified preview video on the display interface of the current display terminal and sends the beautified peer video to the peer terminal. Through the present solution, the display effect of the video image can be improved, improving the user's experience.
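The three-module apparatus summarized above can be wired together as a small sketch; the class and method names are assumptions for illustration, not identifiers from the patent:

```python
class VideoImageProcessor:
    """Minimal wiring of the acquisition, beautification, and
    processing modules described in the summary."""

    def __init__(self, acquire, beautify, display, send):
        self.acquire = acquire      # acquisition module
        self.beautify = beautify    # beautification module
        self.display = display      # processing module: local preview
        self.send = send            # processing module: peer video

    def step(self):
        # One frame's worth of work: collect a preview frame and a peer
        # frame, beautify both, show one locally, send the other out.
        preview_frame, peer_frame = self.acquire()
        self.display(self.beautify(preview_frame))
        self.send(self.beautify(peer_frame))

shown, sent = [], []
proc = VideoImageProcessor(
    acquire=lambda: ("p1", "q1"),
    beautify=lambda f: f + "*",
    display=shown.append,
    send=sent.append,
)
proc.step()
```

In a real device each callable would be backed by a camera pipeline, a GPU filter chain, a screen surface, and a network stack respectively.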
It should be noted that, in this document, the terms "include" and "comprise", and any other variants thereof, are intended to be non-exclusive, so that a process, method, article, or apparatus that includes a series of elements not only includes those elements but also includes other elements not expressly listed, or further includes elements inherent to such a process, method, article, or apparatus. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes that element.
The sequence numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and of course may also be implemented by hardware, although in many cases the former is the preferable implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and includes instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, or the like) to perform the method described in each embodiment of the present invention.
The above are merely preferred embodiments of the present invention and do not thereby limit the scope of its claims. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.
Claims (10)
1. A video image processing apparatus, characterized in that the apparatus comprises: an acquisition module, a beautification module, and a processing module;
the acquisition module is configured to respectively collect each frame of video image in a preview video and a peer video, wherein the preview video refers to the video previewed on the display interface of the current display terminal during a video call, and the peer video refers to the video to be displayed on the display interface of a peer terminal;
the beautification module is configured to perform beautification processing on each collected frame of video image;
the processing module is configured to display the beautified preview video on the display interface of the current display terminal, and to send the beautified peer video to the peer terminal.
2. The video image processing apparatus of claim 1, characterized in that the apparatus further comprises: a face recognition module, a judgment module, and a determination module;
the face recognition module is configured to perform face recognition on each frame of video image according to a preset face recognition algorithm;
the judgment module is configured to judge, according to the recognition result of the face recognition module, whether a facial image exists in the currently recognized frame;
the determination module is configured to cause the beautification module to perform beautification processing on the currently recognized frame when the judgment module judges that a facial image exists in it, and to skip the currently recognized frame when the judgment module judges that no facial image exists in it.
3. The video image processing apparatus of claim 2, characterized in that the beautification module performing beautification processing on each collected frame of video image comprises:
identifying preset facial features from each recognized facial image;
retrieving a preset beautification processing package, the beautification processing package containing one or more beautification tools;
applying the one or more beautification tools to perform the corresponding beautification processing on the facial features.
4. The video image processing apparatus of claim 3, characterized in that:
the facial features include: the face, the eyes, and the lips;
the beautification tools include: a whitening tool, a face-slimming tool, a dark-circle removal tool, and a lip-plumping tool;
applying the one or more beautification tools to perform the corresponding beautification processing on the facial features comprises:
applying the whitening tool to whiten the face;
applying the face-slimming tool to slim the face;
applying the dark-circle removal tool to remove dark circles under the eyes;
applying the lip-plumping tool to plump the lips.
5. The video image processing apparatus of claim 1, characterized in that the processing module sending the beautified peer video to the peer terminal comprises:
storing each beautified frame of the peer video according to the frames' acquisition order;
compressing all the video images of the beautified peer video according to a preset video compression algorithm;
converting the compressed video images into an electrical signal in analog form, converting the analog signal into a digital signal, and performing signal processing on the digital signal;
sending the processed digital signal to the peer terminal.
6. A video image processing method, characterized in that the method comprises:
respectively collecting each frame of video image in a preview video and a peer video, wherein the preview video refers to the video previewed on the display interface of the current display terminal during a video call, and the peer video refers to the video to be displayed on the display interface of a peer terminal;
performing beautification processing on each collected frame of video image;
displaying the beautified preview video on the display interface of the current display terminal, and sending the beautified peer video to the peer terminal.
7. The video image processing method of claim 6, characterized in that the method further comprises:
performing face recognition on each frame of video image according to a preset face recognition algorithm;
judging, according to the recognition result, whether a facial image exists in the currently recognized frame;
when a facial image is judged to exist in the currently recognized frame, performing beautification processing on that frame; when no facial image is judged to exist in it, skipping that frame.
8. The video image processing method of claim 7, characterized in that performing beautification processing on each collected frame of video image comprises:
identifying preset facial features from each recognized facial image;
retrieving a preset beautification processing package, the beautification processing package containing one or more beautification tools;
applying the one or more beautification tools to perform the corresponding beautification processing on the facial features.
9. The video image processing method of claim 8, characterized in that:
the facial features include: the face, the eyes, and the lips;
the beautification tools include: a whitening tool, a face-slimming tool, a dark-circle removal tool, and a lip-plumping tool;
applying the one or more beautification tools to perform the corresponding beautification processing on the facial features comprises:
applying the whitening tool to whiten the face;
applying the face-slimming tool to slim the face;
applying the dark-circle removal tool to remove dark circles under the eyes;
applying the lip-plumping tool to plump the lips.
10. The video image processing method of claim 6, characterized in that sending the beautified peer video to the peer terminal comprises:
storing each beautified frame of the peer video according to the frames' acquisition order;
compressing all the video images of the beautified peer video according to a preset video compression algorithm;
converting the compressed video images into an electrical signal in analog form, converting the analog signal into a digital signal, and performing signal processing on the digital signal;
sending the processed digital signal to the peer terminal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610362164.8A CN105979194A (en) | 2016-05-26 | 2016-05-26 | Video image processing apparatus and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105979194A true CN105979194A (en) | 2016-09-28 |
Family
ID=56955941
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610362164.8A Pending CN105979194A (en) | 2016-05-26 | 2016-05-26 | Video image processing apparatus and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105979194A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101018314A (en) * | 2006-02-07 | 2007-08-15 | Lg电子株式会社 | The apparatus and method for image communication of mobile communication terminal |
US20140184726A1 (en) * | 2013-01-02 | 2014-07-03 | Samsung Electronics Co., Ltd. | Display apparatus and method for video calling thereof |
CN104853134A (en) * | 2014-02-13 | 2015-08-19 | 腾讯科技(深圳)有限公司 | Video communication method and video communication device |
CN105611387A (en) * | 2015-12-25 | 2016-05-25 | 北京小鸟科技发展有限责任公司 | Method and system for previewing video image |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106709430A (en) * | 2016-11-30 | 2017-05-24 | 努比亚技术有限公司 | Mobile terminal and mobile terminal based fingerprint information image processing method |
CN108337465A (en) * | 2017-02-09 | 2018-07-27 | 腾讯科技(深圳)有限公司 | Method for processing video frequency and device |
CN108337465B (en) * | 2017-02-09 | 2021-05-14 | 腾讯科技(深圳)有限公司 | Video processing method and device |
CN107392110A (en) * | 2017-06-27 | 2017-11-24 | 五邑大学 | Beautifying faces system based on internet |
CN109509140A (en) * | 2017-09-15 | 2019-03-22 | 阿里巴巴集团控股有限公司 | Display methods and device |
CN108718385A (en) * | 2018-07-31 | 2018-10-30 | 北京会播科技有限公司 | Image processing apparatus and method |
CN108848312A (en) * | 2018-08-02 | 2018-11-20 | 北京奇虎科技有限公司 | It takes pictures method for previewing, device and electronic equipment |
CN108989901A (en) * | 2018-08-07 | 2018-12-11 | 北京奇虎科技有限公司 | Method for processing video frequency, client and terminal |
CN108881782A (en) * | 2018-08-23 | 2018-11-23 | 维沃移动通信有限公司 | A kind of video call method and terminal device |
CN108881782B (en) * | 2018-08-23 | 2021-08-03 | 维沃移动通信有限公司 | Video call method and terminal equipment |
CN108965770A (en) * | 2018-08-30 | 2018-12-07 | Oppo广东移动通信有限公司 | Image processing template generation method, device, storage medium and mobile terminal |
CN109558839A (en) * | 2018-11-29 | 2019-04-02 | 徐州立讯信息科技有限公司 | Adaptive face identification method and the equipment and system for realizing this method |
CN110012291A (en) * | 2019-03-13 | 2019-07-12 | 佛山市顺德区中山大学研究院 | Video coding algorithm for U.S. face |
CN112073770A (en) * | 2019-06-10 | 2020-12-11 | 海信视像科技股份有限公司 | Display device and video communication data processing method |
WO2020248697A1 (en) * | 2019-06-10 | 2020-12-17 | 海信视像科技股份有限公司 | Display device and video communication data processing method |
CN112073770B (en) * | 2019-06-10 | 2022-12-09 | 海信视像科技股份有限公司 | Display device and video communication data processing method |
US11917329B2 (en) | 2019-06-10 | 2024-02-27 | Hisense Visual Technology Co., Ltd. | Display device and video communication data processing method |
CN110677713A (en) * | 2019-10-15 | 2020-01-10 | 广州酷狗计算机科技有限公司 | Video image processing method and device and storage medium |
CN110677713B (en) * | 2019-10-15 | 2022-02-22 | 广州酷狗计算机科技有限公司 | Video image processing method and device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20160928 |