CN106878588A - Video background blurring terminal and method - Google Patents
Video background blurring terminal and method
- Publication number
- CN106878588A CN106878588A CN201710106577.4A CN201710106577A CN106878588A CN 106878588 A CN106878588 A CN 106878588A CN 201710106577 A CN201710106577 A CN 201710106577A CN 106878588 A CN106878588 A CN 106878588A
- Authority
- CN
- China
- Prior art keywords
- video
- scene
- background blurring
- background
- current
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Telephone Function (AREA)
Abstract
The invention discloses a video background blurring terminal and method, relating to the field of communication technology. The video background blurring terminal includes a first determination module and a start module, wherein: the first determination module determines the current video scene according to the current time and current location information; and the start module turns on the background blurring mode when the current video scene is a preset background blurring scene. The invention determines the current video scene according to the current time and current location information, determines the current video object (the party on the video call) according to the contact list, and can automatically start the background blurring mode during a video chat or live video broadcast according to the video scene or video object. The background blurring mode protects user privacy well and improves the user experience of video chat and live streaming.
Description
Technical field
The present invention relates to the field of communication technology, and in particular to a video background blurring terminal and method.
Background art
Background blurring is well known to and widely used by photography enthusiasts because it quickly emphasizes the subject. With the development of instant messaging technology, background blurring can also protect the personal privacy of broadcasters and callers during live streaming or video calls.
Normally, when taking a photo or recording a video, the user focuses on the subject of interest. Especially in portrait photography, the background blurring effect is very popular: under this effect the subject itself is emphasized while the background becomes blurred. However, to shoot photos or record videos with a background blurring effect, a professional SLR camera is required, and the desired effect can only be achieved through complicated adjustment operations.
To meet the needs of amateur photographers, there is an existing method that processes images in software to obtain a background blurring effect: after recording a video, the user opens it in image-processing software and manually selects the background region with a framing tool provided by the software. The software then applies a Gaussian blur of uniform or graduated scale to the selected background region and outputs a video with a background blurring effect.
The above method requires the user to record the video in advance and then process it; it cannot achieve the real-time background blurring required for live streaming or instant communication. Moreover, opening the pre-recorded video in software and manually selecting the background region is a complicated procedure that cannot protect user privacy in time.
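The prior-art flow just described, a Gaussian blur applied only to a manually selected background region, can be sketched as follows. This is an illustrative plain-NumPy implementation, not the patent's code: the frame is assumed to be a 2-D grayscale array and the region a hypothetical rectangle, and real software would use a faster separable or library convolution.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.5):
    """Build a normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def blur_region(frame, box, size=5, sigma=1.5):
    """Blur only the rectangular region box = (top, left, bottom, right)
    of a grayscale frame, leaving the rest of the frame untouched."""
    t, l, b, r = box
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    # edge-pad the selected region so the blur is defined at its borders
    region = np.pad(frame[t:b, l:r].astype(float), pad, mode="edge")
    out = frame.astype(float).copy()
    for i in range(b - t):
        for j in range(r - l):
            out[t + i, l + j] = np.sum(region[i:i + size, j:j + size] * k)
    return out
```

Because the kernel is normalized, blurring spreads intensity without creating or destroying it, which is why the result looks softened rather than darkened.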
Summary of the invention
The main object of the present invention is to provide a video background blurring terminal and method, aiming to solve the technical problem that user privacy cannot be protected in time during live video streaming or video chat.
To achieve the above object, the present invention provides a video background blurring terminal including a first determination module, a start module, and a background blurring module, wherein:
the first determination module determines the current video scene according to the current time and current location information;
the start module judges whether the current video scene is a preset background blurring scene and, when it is, turns on the background blurring mode;
the background blurring module, when the background blurring mode is on, extracts the background blurring region and applies blurring processing to that region.
Further, the current video scene is one of a first scene, a second scene, ..., an Nth scene, and the preset background blurring scene is one or more of the first to Nth scenes, where N is a natural number. The first determination module includes:
a storage unit for storing a time-position/scene comparison table, which records the correspondence between time periods, position ranges, and video scenes;
a query unit for looking up, in the time-position/scene comparison table, the current video scene corresponding to the current time and current location information.
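The storage and query units described above amount to a lookup over (time period, position range, scene) rows. A minimal sketch, assuming a hypothetical table format in which a position range is a latitude/longitude bounding box (the patent does not specify how position ranges are encoded):

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class TableEntry:
    """One row of the hypothetical time-position/scene comparison table."""
    start: time            # time period start
    end: time              # time period end
    lat_range: tuple       # (min_lat, max_lat)
    lon_range: tuple       # (min_lon, max_lon)
    scene: str

def current_scene(table, now, lat, lon, default="unknown"):
    """Query unit: return the scene whose time period and position range
    both contain the current time and current location."""
    for e in table:
        if (e.start <= now <= e.end
                and e.lat_range[0] <= lat <= e.lat_range[1]
                and e.lon_range[0] <= lon <= e.lon_range[1]):
            return e.scene
    return default
```

For example, a single row mapping working hours at an office location to an "office" scene would return "office" for a mid-morning query inside the bounding box and the default outside the time period.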
Further, the video background blurring terminal also includes:
a second determination module for determining the current video object according to the contact list;
the start module is further configured to turn on the background blurring mode when the current video object is a preset background blurring video object.
Further, the current video object is one of a first video object, a second video object, ..., an Mth video object, and the preset background blurring video object is one or more of the first to Mth video objects, where M is a natural number. The second determination module obtains the communication information of the current video object and determines, according to that communication information, which of the first to Mth video objects the current video object is.
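The second determination module and the start module's object check can be illustrated as below. The contact names, numbers, and the matching on a phone number alone are assumptions for illustration; the patent only says that the communication information of the current video object is resolved against the contact list.

```python
# Hypothetical contact list: communication information -> video object.
contacts = {"+8613800000001": "boss", "+8613800000002": "family"}

# Preset background blurring video objects (a subset of the contacts).
preset_blur_objects = {"boss"}

def determine_video_object(number):
    """Second determination module: resolve the caller's communication
    information to a video object via the contact list."""
    return contacts.get(number, "stranger")

def should_blur(number):
    """Start module check: blur when the current video object is preset."""
    return determine_video_object(number) in preset_blur_objects
```

A call from the first number would switch the blurring mode on automatically; calls from other contacts or from strangers would leave it off.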
Further, the video background blurring terminal also includes:
a memory for storing the background blurring scene and/or the background blurring video object;
the start module is further configured to turn on the background blurring mode when the current video scene is the stored background blurring scene and/or the current video object is the stored background blurring video object.
Further, the video background blurring terminal also includes a first camera and a second camera, a controller connected to the first camera and the second camera respectively, and a display unit connected to the controller, wherein:
the first camera captures the target object and focuses on the target object;
the second camera, when the background blurring mode is on, applies blurring processing to the background blurring region;
the controller composites the focused target object with the blurred background blurring region and controls the display unit to display the composite.
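The controller's compositing step can be sketched as follows, assuming the foreground/background split arrives as a binary mask. In the patent the split would come from the dual-camera depth estimate; how the mask itself is derived is not shown here.

```python
import numpy as np

def composite(focused, blurred, fg_mask):
    """Keep the focused frame where fg_mask is 1 (the target object),
    substitute the blurred frame everywhere else (the background)."""
    return np.where(fg_mask.astype(bool), focused, blurred)
```

With RGB frames the same call works if the mask is broadcast over the channel axis; the sketch keeps to 2-D arrays for brevity.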
To achieve the above object, another aspect of the present invention provides a video background blurring method, including:
determining the current video scene according to the current time and current location information;
judging whether the current video scene is a preset background blurring scene and, if it is, turning on the background blurring mode;
when the background blurring mode is on, extracting the background blurring region and applying blurring processing to it.
Further, the current video scene is one of a first scene, a second scene, ..., an Nth scene, and the preset background blurring scene is one or more of the first to Nth scenes, where N is a natural number. Determining the current video scene according to the current time and current location information includes:
looking up, in the time-position/scene comparison table, the current video scene corresponding to the current time and current location information;
where the time-position/scene comparison table records the correspondence between time periods, position ranges, and video scenes.
Further, the method also includes:
determining the current video object according to the contact list;
turning on the background blurring mode when the current video object is a preset background blurring video object.
Further, the method also includes:
storing the background blurring scene and/or the background blurring video object;
turning on the background blurring mode if the current video scene is the stored background blurring scene and/or the current video object is the stored background blurring video object.
The video background blurring terminal and method proposed by the present invention determine the current video scene according to the current time and current location information and the current video object according to the contact list, so the background blurring mode can be started automatically during a video chat or live video broadcast according to the video scene or video object. The background blurring mode protects user privacy well and improves the user experience of video chat and live streaming.
Brief description of the drawings
Fig. 1 is a hardware structure diagram of an optional mobile terminal for realizing the embodiments of the invention;
Fig. 2 is a schematic diagram of the wireless communication system of the mobile terminal shown in Fig. 1;
Fig. 3 is a front view of a mobile terminal provided by the present invention;
Fig. 4 is a rear view of a mobile terminal provided by the present invention;
Fig. 5 is a schematic diagram of the mobile terminal held in one hand;
Fig. 6 is a structural diagram of the video background blurring terminal in one embodiment of the invention;
Fig. 7 is a structural diagram of the video background blurring terminal in another embodiment of the invention;
Fig. 8 is a structural diagram of the video background blurring terminal in yet another embodiment of the invention;
Fig. 9 is a structural diagram of the video background blurring terminal in yet another embodiment of the invention;
Fig. 10 is a structural diagram of the first camera and the second camera provided by the invention;
Fig. 11a is a binocular distance-measurement principle diagram;
Fig. 11b is a schematic diagram of 3D distance calculation;
Fig. 12 is a flow chart of a video background blurring method provided by the present invention.
The realization of the objects, functional characteristics, and advantages of the invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
The mobile terminal of each embodiment of the invention will now be described with reference to the drawings. In the following description, suffixes such as "module", "part", or "unit" used to denote elements are adopted only to facilitate the description of the invention and have no specific meaning in themselves; "module" and "part" may therefore be used interchangeably.
The video background blurring terminal provided by the invention can be applied to mobile terminals implemented in various forms. For example, the mobile terminals described in the present invention may include mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), navigation devices, and the like, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal; however, those skilled in the art will appreciate that, except for elements used specifically for mobile purposes, the construction according to the embodiments of the invention can also be applied to terminals of the fixed type.
Fig. 1 illustrates the hardware structure of an optional mobile terminal for realizing the embodiments of the invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and so on. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may alternatively be implemented. The elements of the mobile terminal are discussed in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and sends broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and sends them to the terminal. The broadcast signals may include TV broadcast signals, radio broadcast signals, data broadcast signals, and so on, and may further include broadcast signals combined with TV or radio broadcast signals. The broadcast-related information may also be provided via a mobile communication network, in which case it can be received by the mobile communication module 112. The broadcast signals may exist in various forms, for example as an electronic program guide (EPG) of digital multimedia broadcasting (DMB) or an electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H). The broadcast receiving module 111 can receive broadcasts by using various types of broadcast systems; in particular, it can receive digital broadcasts via digital broadcast systems such as multimedia broadcasting-terrestrial (DMB-T), multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the MediaFLO forward-link-media data broadcast system, and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be constructed to suit the various broadcast systems that provide broadcast signals as well as the above digital broadcast systems. The broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 sends radio signals to, and/or receives radio signals from, at least one of a base station (e.g., access point, Node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data sent and/or received according to text and/or multimedia messages.
The wireless internet module 113 supports wireless internet access of the mobile terminal and can be internally or externally coupled to the terminal. The wireless internet access technologies involved may include WLAN (Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and so on.
The short-range communication module 114 supports short-range communication. Examples of short-range communication technology include Bluetooth™, radio frequency identification (RFID), the Infrared Data Association (IrDA), ultra wideband (UWB), ZigBee™, and so on.
The location information module 115 checks or obtains the location information of the mobile terminal. A typical example of the location information module is a GPS (global positioning system) module. According to current technology, the GPS module 115 calculates distance information from three or more satellites together with accurate time information and applies triangulation to the calculated information, so as to accurately calculate three-dimensional current location information in terms of longitude, latitude, and altitude. Currently, the method for calculating position and time information uses three satellites and corrects the errors of the calculated position and time information by using a further satellite. In addition, the GPS module 115 can calculate speed information by continuously computing the current location in real time.
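The triangulation just described can be illustrated in two dimensions. The sketch below assumes exact ranges to known anchor points and omits the receiver clock-bias term that real GPS must also solve for; subtracting the first range equation from the others turns the quadratic circle equations into a linear system.

```python
import numpy as np

def trilaterate(anchors, dists):
    """2-D position fix from three or more anchor points with known
    coordinates and measured distances to the unknown position."""
    (x0, y0), d0 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        # (x-xi)^2+(y-yi)^2=di^2 minus the first equation, rearranged:
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # least squares also handles the overdetermined >3-anchor case
    pos, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float),
                              rcond=None)
    return pos
```

With noisy ranges, using more than three anchors and the least-squares solve averages the measurement error, which parallels the error correction via an additional satellite mentioned above.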
The A/V input unit 120 receives audio or video signals and may include a camera 121 and a microphone 122. The camera 121 processes the image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 151, stored in the memory 160 (or another storage medium), or transmitted via the wireless communication unit 110. Two or more cameras 121 may be provided depending on the construction of the mobile terminal.
The memory 160 may store software programs for the processing and control operations performed by the controller 180, or temporarily store data that have been or will be output (for example, a phone book, messages, still images, video, etc.). Moreover, the memory 160 may store data on the various patterns of vibration and audio signals output when a touch is applied to the touch screen.
Specifically, the memory 160 may include at least one type of storage medium, including flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disc, and so on. Moreover, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 over a network connection.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing related to voice calls, data communication, video calls, and so on. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be constructed within the controller 180 or separately from it. The controller 180 can also perform pattern recognition processing to recognize handwriting input or drawing input performed on the touch screen as characters or images.
The output unit 150 is configured to provide output signals in a visual, audio, and/or tactile manner (e.g., audio signals, video signals, alarm signals, vibration signals, etc.). The output unit 150 may include the display unit 151, an audio output module 152, an alarm unit 153, and so on.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 may display captured and/or received images, and a UI or GUI showing the video or image and related functions, and so on.
Meanwhile, when the display unit 151 and a touch pad are superposed on one another in the form of a layer to form a touch screen, the display unit 151 can serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film-transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and so on. Some of these displays may be constructed to be transparent to allow viewing from the outside; these may be termed transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. Depending on the desired implementation, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 152 may, when the mobile terminal is in a call signal reception mode, call mode, recording mode, speech recognition mode, broadcast reception mode, or the like, transduce the audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and so on.
The alarm unit 153 may provide output to notify the occurrence of an event of the mobile terminal 100. Typical events may include call reception, message reception, key signal input, touch input, and so on. In addition to audio or video output, the alarm unit 153 may provide output in different ways to notify the occurrence of an event. For example, the alarm unit 153 may provide output in the form of vibration: when a call, a message, or some other incoming communication is received, the alarm unit 153 may provide a tactile output (i.e., vibration) to notify the user. By providing such tactile output, the user can recognize the occurrence of various events even when the mobile phone is in the user's pocket. The alarm unit 153 may also provide output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The user input unit 130 may generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by being touched), a jog wheel, a jog stick, and so on. In particular, when the touch pad is superposed on the display unit 151 in the form of a layer, a touch screen can be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the acceleration or deceleration movement and direction of the mobile terminal 100, and so on, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 can sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power and whether the interface unit 170 is coupled with an external device.
The interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The interface unit 170 can be used to receive input from the external device (e.g., data information, power, etc.) and transfer the received input to one or more elements within the mobile terminal 100, or to transfer data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 can serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. The various command signals or power input from the cradle can serve as signals for recognizing whether the mobile terminal is accurately seated on the cradle.
The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides the appropriate power required to operate each element and component.
The various implementations described herein can be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the implementations described herein can be implemented using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such implementations can be implemented in the controller 180. For a software implementation, implementations such as processes or functions can be implemented with separate software modules that allow at least one function or operation to be performed. The software code can be implemented by a software application (or program) written in any suitable programming language, and may be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. Below, for the sake of brevity, a slide-type mobile terminal among the various types of mobile terminals, such as folder-type, bar-type, swing-type, and slide-type mobile terminals, is taken as an example. Nevertheless, the present invention can be applied to any kind of mobile terminal and is not limited to slide-type mobile terminals.
The mobile terminal 100 shown in Fig. 1 may be constructed to operate with wired and wireless communication systems that transmit data via frames or packets, as well as with satellite-based communication systems.
The communication system in which the mobile terminal according to the present invention can operate is now described with reference to Fig. 2.
Such communication systems can use different air interfaces and/or physical layers. For example, the air interfaces used by communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), the universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), the global system for mobile communications (GSM), and so on. As a non-limiting example, the following description relates to a CDMA communication system, but such teaching applies equally to other types of system.
Referring to Fig. 2, the CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, base station controllers (BSC) 275, and a mobile switching center (MSC) 280. The MSC 280 is configured to form an interface with a public switched telephone network (PSTN) 290; the MSC 280 is also configured to form an interface with the BSCs 275, which can be coupled to the base stations 270 via backhaul links. The backhaul links can be constructed according to any of several known interfaces, including, for example, E1/T1, ATM, IP, PPP, frame relay, HDSL, ADSL, or xDSL. It will be appreciated that the system may include a plurality of BSCs 275 as shown in Fig. 2.
Each BS 270 can serve one or more sectors (or regions), each sector covered by an omnidirectional antenna or by an antenna pointing in a specific direction radially away from the BS 270. Alternatively, each sector can be covered by two or more antennas for diversity reception. Each BS 270 may be constructed to support a plurality of frequency assignments, each frequency assignment having a specific spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
What subregion and frequency were distributed intersects can be referred to as CDMA Channel.BS270 can also be referred to as base station transceiver
System (BTS) or other equivalent terms.In this case, term " base station " can be used for broadly representing single
BSC275 and at least one BS270.Base station can also be referred to as " cellular station ".Or, each subregion of specific BS270 can be claimed
It is multiple cellular stations.
As shown in Fig. 2, a broadcasting transmitter (BT) 295 transmits a broadcast signal to the mobile terminals 100 operating within the system. The broadcast receiving module 111 as shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signal transmitted by the BT 295. Fig. 2 also shows several global positioning system (GPS) satellites 300. The satellites 300 help locate at least one of the plurality of mobile terminals 100.
Although a plurality of satellites 300 are depicted in Fig. 2, it is understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 as shown in Fig. 1 is typically configured to cooperate with the satellites 300 to obtain the desired positioning information. Instead of, or in addition to, GPS tracking techniques, other techniques capable of tracking the location of the mobile terminal may be used. In addition, at least one GPS satellite 300 may alternatively or additionally handle satellite DMB transmissions.
In a typical operation of the wireless communication system, the BS 270 receives reverse link signals from various mobile terminals 100. The mobile terminals 100 typically engage in calls, messaging, and other types of communications. Each reverse link signal received by a particular base station 270 is processed within that BS 270, and the resulting data is forwarded to an associated BSC 275. The BSC provides call resource allocation and mobility management functions, including the coordination of soft handoff procedures between the BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC interfaces with the BSCs 275, and the BSCs 275 in turn control the BSs 270 to transmit forward link signals to the mobile terminals 100.
Based on the above-described mobile terminal hardware configuration and communication system, various embodiments of the present invention are proposed.
An embodiment of the present invention provides an apparatus for video background blurring on a mobile terminal; obviously, in the first embodiment of the present invention, the apparatus for video background blurring may be implemented by the mobile terminal itself.
It should be noted that a memory for storing data is provided in the mobile terminal; the type of the memory on the mobile terminal is not limited here.
Here, mobile terminals include, but are not limited to, mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs, PADs, PMPs, navigation devices, and the like.
Here, if the mobile terminal has an operating system, the operating system may be UNIX, Linux, Windows, Android, Windows Phone, or the like.
The following description takes a mobile phone as an example of the mobile terminal.
In the first embodiment of the present invention, Fig. 3 is a front view of the mobile terminal, and Fig. 4 is a rear view of the mobile terminal.
Here, the mobile terminal is also portable; specifically, it can be held in one hand. In this way, whenever a scene calls for video background blurring, the portability of the mobile terminal can be exploited to realize it. Fig. 5 is a schematic diagram of the mobile terminal held in one hand in the first embodiment of the present invention.
Fig. 6 is a structural diagram of a video background blurring terminal according to the first embodiment of the present invention. As shown in Fig. 6, the terminal 6 includes a first judging module 61, a starting module 62, and a background blurring module 63, wherein:
The first judging module 61 is configured to determine the current video scene according to the current time and the current location information.
The starting module 62 is configured to judge whether the current video scene is a preset background blurring scene, and to turn on the background blurring mode when the current video scene is the preset background blurring scene.
The background blurring module 63 is configured to extract the background blurring region and perform blurring processing on it when the background blurring mode is turned on.
In the present embodiment, the current time is the specific moment of the current video chat or live video broadcast, for example, 8:00 a.m.
The current location information is the geographic location of the terminal, which may specifically be obtained by the GPS module.
In one embodiment, the current video scene includes a first scene, a second scene, ..., an Nth scene; the preset background blurring scene is one or more of the first scene to the Nth scene, where N is a natural number. For example, in one specific embodiment, the current video scene includes an office scene, a home scene, and a public scene, and the preset background blurring scene is the home scene.
The first judging module 61 includes:
A storage unit 611, configured to store a time-location and scene information comparison table, which includes the correspondences among time periods, location ranges, and current video scenes;
A query unit 612, configured to look up, in the time-location and scene information comparison table, the current video scene corresponding to the current time and the current location information.
For example, in one case, when the current time is within working hours and the current location information is the office address, the current video scene is determined to be the office scene;
In another case, when the current time is outside working hours and the current location information is the home address, the current video scene is determined to be the home scene;
In yet another case, when the current location information is a location other than the office address and the home address, the current video scene is determined to be the public scene.
The working hours are preset by the terminal user, for example, Monday to Friday, 9:00 a.m. to 12:00 p.m. and 2:00 p.m. to 5:00 p.m. The office address and home address can be configured and modified according to the actual circumstances of the terminal user. For example, when the terminal user changes jobs or moves house, the corresponding office address or home address can be updated in time so that the judgment result remains accurate.
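The scene lookup performed by the storage unit 611 and query unit 612 can be sketched as follows. This is an illustrative sketch under assumed data types (string addresses, the example working hours above); the patent does not prescribe a concrete table format. The "unrecognized" outcome corresponds to a time/location combination that matches none of the three scenes.

```python
from datetime import datetime

WORKDAYS = {0, 1, 2, 3, 4}  # Monday-Friday

def current_scene(now: datetime, location: str,
                  office_address: str, home_address: str) -> str:
    """Map the current time and location to a video scene, mirroring the
    comparison-table rules: office address during working hours -> office,
    home address outside working hours -> home, elsewhere -> public."""
    in_working_hours = (now.weekday() in WORKDAYS and
                        (9 <= now.hour < 12 or 14 <= now.hour < 17))
    if location == office_address and in_working_hours:
        return "office"
    if location == home_address and not in_working_hours:
        return "home"
    if location not in (office_address, home_address):
        return "public"
    return "unrecognized"  # e.g. home address during working hours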
For example, at 10:20 a.m. the terminal user opens the video chat function, and the current location information of the terminal device is the office address preset by the user; the query unit 612 then determines that the current video scene is the office scene. Since the preset background blurring scene is the home scene, the starting module 62 does not turn on the background blurring module.
In another example, at 10:20 p.m. the terminal user opens the video chat function, and the current location information of the terminal device is the home address preset by the user; the query unit 612 then determines that the current video scene is the home scene. Since the preset background blurring scene is the home scene, the starting module 62 turns on the background blurring module.
In yet another example, at 12:20 p.m. the terminal user opens the video chat function, and the current location information of the terminal device is a location other than the office address and the home address; the query unit 612 then determines that the current video scene is the public scene. Since the preset background blurring scene is the home scene, the starting module 62 does not turn on the background blurring module.
The video background blurring terminal proposed in the embodiment of the present invention determines the current video scene according to the current time and the current location information, and determines the current video object according to the contact list; it can thus automatically start the background blurring mode during a video chat or live video broadcast according to the video scene or the video object. The background blurring mode protects the user's privacy well and improves the user experience in video chat or live video broadcast.
Fig. 7 shows a video background blurring terminal 7 proposed by another embodiment of the present invention, including a second judging module 71 and a starting module 72, wherein:
The second judging module 71 is configured to determine the current video object according to the contact list;
The starting module 72 is configured to judge whether the current video object is a preset background blurring video object, and to turn on the background blurring mode when the current video object is the preset background blurring video object.
The current video object includes a first video object, a second video object, ..., an Mth video object; the preset background blurring video object is one or more of the first video object to the Mth video object, where M is a natural number.
In one embodiment, the second judging module 71 obtains the communication information of the current video object and determines, according to that communication information, which of the first video object to the Mth video object the current video object is. Specifically, the first video object, the second video object, ..., the Mth video object may each be one of relatives and friends, colleagues, acquaintances, or strangers.
The communication information may include contact details such as the name, mobile phone number, and WeChat ID of the communication object, as well as the group tag of the communication object. The group tags may include relatives and friends, colleagues, acquaintances, strangers, and so on.
The group tag may be marked manually by the terminal user, or marked automatically. In the automatic marking mode, for example, the second judging module 71 further includes:
A statistics module 711, configured to count the number of communications between the terminal user and a communication object, where the communications include voice communications such as phone calls, as well as video communications.
A marking module 712, configured to mark the group tag of the communication object according to the number of communications. For example, a communication object with fewer than 5 communications may be tagged as a stranger, one with 5 to 10 communications as an acquaintance, one with 10 to 15 communications as a colleague, and one with more than 15 communications as relatives and friends.
A determining module 713, configured to determine, according to the group tag, that the current video object is one of relatives and friends, colleagues, acquaintances, or strangers.
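The automatic tagging and the subsequent blurring decision can be sketched as follows. The thresholds and the label ordering are illustrative assumptions (the translated text leaves the exact label-to-threshold mapping ambiguous), and the function names are not from the patent.

```python
def group_tag(comm_count: int) -> str:
    """Sketch of marking module 712: tag a contact by communication count.
    Thresholds and label order are illustrative assumptions."""
    if comm_count < 5:
        return "stranger"
    if comm_count < 10:
        return "acquaintance"
    if comm_count < 15:
        return "colleague"
    return "relatives and friends"

def should_blur(comm_count: int, preset_tags: set) -> bool:
    """Sketch of determining module 713 plus the starting module: turn on
    background blurring when the contact's tag is one of the preset
    background blurring video objects."""
    return group_tag(comm_count) in preset_tags
```

For instance, with the preset background blurring video objects set to strangers and acquaintances, a video chat with a frequently called relative would leave the background unblurred.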
In the present embodiment, the preset background blurring video object may be at least one of colleagues, acquaintances, or strangers: it may be colleagues, acquaintances, or strangers; it may also be acquaintances and strangers; or it may be colleagues and strangers; and so on, which will not be enumerated further here.
The video background blurring terminal provided by the present embodiment judges, through the contact list, whether the current video object is a preset background blurring object. The background blurring mode protects the user's privacy well and improves the user experience in video chat or live video broadcast.
Fig. 8 shows a video background blurring terminal 8 proposed by yet another embodiment of the present invention, including a memory 81 and a starting module 82, wherein:
In one embodiment, the memory 81 is configured to store background blurring scenes, and the starting module 82 is configured to turn on the background blurring mode when the current video scene is one of the background blurring scenes.
In another embodiment, the memory 81 is configured to store background blurring video objects, and the starting module 82 is configured to turn on the background blurring mode when the current video object is one of the background blurring video objects.
In yet another embodiment, the memory 81 is configured to store background blurring scenes and background blurring video objects, and the starting module 82 is configured to turn on the background blurring mode when the current video scene is one of the background blurring scenes and the current video object is one of the background blurring video objects.
In each of the above embodiments, a background blurring scene is a scene preset by the terminal user in which the background blurring mode needs to be turned on; for example, the terminal user may set the background blurring scenes to be places where privacy needs to be protected, such as a bedroom or a kitchen. A background blurring video object is a video object preset by the user for which the background blurring mode needs to be turned on; for example, the terminal user may set the background blurring video objects to be people whose calls warrant privacy protection, such as one's father, mother, or a particular gentleman or lady.
The video background blurring terminal provided by the embodiment of the present invention pre-stores background blurring scenes or background blurring video objects; when the terminal user carries out a video chat or live video broadcast in a pre-stored background blurring scene or with a pre-stored background blurring video object, the background blurring mode is turned on automatically. The background blurring mode protects the user's privacy well and improves the user experience in video chat or live video broadcast.
Fig. 9 shows another video background blurring terminal 9 provided by the present invention, including a first judging module 91, a second judging module 92, a starting module 93, a memory 94, a first camera 95, a second camera 96, a controller 97, and a display unit 98.
The first judging module 91 is configured to determine the current video scene according to the current time and the current location information; correspondingly, the starting module 93 is configured to judge whether the current video scene is a preset background blurring scene, and to turn on the background blurring mode when the current video scene is the preset background blurring scene.
In one embodiment, the current video scene includes an office scene, a home scene, and a public scene, and the preset background blurring scene is the home scene;
The first judging module 91 includes:
An office scene determining module, configured to determine that the current video scene is the office scene when the current time is within working hours and the current location information is the office address;
A home scene determining module, configured to determine that the current video scene is the home scene when the current time is outside working hours and the current location information is the home address;
A public scene determining module, configured to determine that the current video scene is the public scene when the current location information is a location other than the office address and the home address.
The working hours are preset by the terminal user, for example, Monday to Friday, 9:00 a.m. to 12:00 p.m. and 2:00 p.m. to 5:00 p.m. The office address and home address can be configured and modified according to the actual circumstances of the terminal user. For example, when the terminal user changes jobs or moves house, the corresponding office address or home address can be updated in time so that the judgment result remains accurate.
For example, at 10:20 a.m. the terminal user opens the video chat function, and the current location information of the terminal device is the office address preset by the user; the office scene determining module then determines that the current video scene is the office scene. Since the preset background blurring scene is the home scene, the starting module 93 does not turn on the background blurring module.
In another example, at 10:20 p.m. the terminal user opens the video chat function, and the current location information of the terminal device is the home address preset by the user; the home scene determining module then determines that the current video scene is the home scene. Since the preset background blurring scene is the home scene, the starting module 93 turns on the background blurring module.
In yet another example, at 12:20 p.m. the terminal user opens the video chat function, and the current location information of the terminal device is a location other than the office address and the home address; the public scene determining module then determines that the current video scene is the public scene. Since the preset background blurring scene is the home scene, the starting module 93 does not turn on the background blurring module.
In another example, at 10:20 a.m. the terminal user opens the video chat function, and the current location information of the terminal device is the home address preset by the user. In this case, because the combination of the current time and the current location information matches none of the above office, home, and public scenes, the first judging module 91 does not make a judgment on the current video scene; instead, the controller 97 controls the display unit 98 to display the prompt message "scene not recognized", and the terminal user can, according to the prompt message, choose to turn on the starting module 93 manually. This can specifically be realized by a start switch. The start switch may be a physical button: when the user presses the physical button, a pressure sensor arranged in the physical button senses the user's pressure, thereby generating an instruction to turn on the starting module 93. The start switch may also be a virtual key or icon provided on the display unit 98 or a touch screen: when the user presses the virtual key or icon, a pressure sensor arranged under the virtual key or icon senses the pressure, thereby generating an instruction to turn on the starting module 93.
Obviously, in the above manner, the terminal user can manually turn the background blurring mode on or off as needed, which can likewise be realized by the start switch. The start switch may be a physical button: when the user presses the physical button, the pressure sensor arranged in the physical button senses the pressure, thereby generating an instruction to turn on the starting module 93. The start switch may also be a virtual key or icon provided on the display unit 98 or the touch screen: when the user presses the virtual key or icon, the pressure sensor arranged under the virtual key or icon senses the pressure, thereby generating an instruction to turn on the starting module 93.
The second judging module 92 is configured to determine the current video object according to the contact list; correspondingly, the starting module 93 is further configured to turn on the background blurring mode when the current video object is a preset background blurring video object.
The memory 94 is configured to store background blurring scenes and/or background blurring video objects; correspondingly, the starting module 93 is configured to turn on the background blurring mode when the current video scene is one of the background blurring scenes and/or the current video object is one of the background blurring video objects.
The first camera 95 is configured to capture the target object and focus on the target object;
The second camera 96 is configured to perform blurring processing on the background blurring region when the background blurring mode is turned on;
The controller 97 is configured to synthesize the focused target object with the blurred background region, and to control the display unit 98 to display the synthesized result.
The target object is one or more persons, animals, etc. in the current video image. The background object is the rest of the current video image other than the target object.
In one embodiment, the method for capturing the target object may be to capture the face in the current video image by face recognition technology and set that face as the target object. If the position of the face changes during the video chat, face recognition technology can continue to track the position of the face, so that the target object is continuously captured throughout the video chat without the user manually re-selecting it. Specifically, the user terminal may generate a preset pattern on the display unit during the video chat; the shape of the preset pattern may include, but is not limited to, a rectangle. The user terminal may capture the position of the target object using face recognition technology and frame it with the preset pattern. The user terminal then derives an inscribed ellipse from the current rectangle (or a circle, when the rectangular frame is a square); the region outside this inscribed ellipse is the background region the user terminal is to blur, and blurring processing is performed on the data in that background region.
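The inscribed-ellipse step can be sketched as follows. This is an illustrative NumPy sketch under the assumption that a face rectangle `(x, y, w, h)` has already been obtained (e.g. from a face detector such as OpenCV's cascade classifier, which is not shown here); it returns the boolean background mask that a blurring routine would then act on.

```python
import numpy as np

def background_mask_from_rect(h: int, w: int, rect: tuple) -> np.ndarray:
    """Given the face rectangle (x, y, rw, rh) framed by the preset pattern,
    return a boolean mask that is True outside the inscribed ellipse,
    i.e. over the background region to be blurred."""
    x, y, rw, rh = rect
    cx, cy = x + rw / 2.0, y + rh / 2.0   # ellipse centre
    a, b = rw / 2.0, rh / 2.0             # semi-axes (a circle when rw == rh)
    ys, xs = np.mgrid[0:h, 0:w]
    inside = ((xs - cx) / a) ** 2 + ((ys - cy) / b) ** 2 <= 1.0
    return ~inside
```

Note that the rectangle's corners lie outside the inscribed ellipse, so they are treated as background even though they are inside the framing pattern.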
In another embodiment, the method by which the user terminal captures the target object may also be manual selection by the user, i.e., the user terminal receives a trajectory drawn by the user in the video image. The drawn trajectory may be a closed trajectory formed by the user sliding on the touch screen. The user terminal captures the target object in the video according to the trajectory drawn by the user; if the user needs to switch the target object in the video, the user only needs to draw the outline of the new target object again in the preview screen. When the user terminal receives the trajectory drawn again by the user, the target object is switched. Specifically, after the user terminal determines the target object according to the trajectory drawn by the user, the regions in the preview screen other than the target object are defined as the background region that needs to be blurred, and blurring processing is performed on the data in the background region.
Further, the user terminal may, centered on the region where the target object is located in the recorded preview screen, apply background blurring outward step by step at preset intervals, so that the background of the target object in the preview screen has a distinct sense of depth, achieving a better recording result.
In the embodiment of the present invention, after the user terminal captures the target object, the background object in the video is automatically determined from the target object; when the background blurring mode is turned on, background blurring processing is performed automatically for the target object, so as to protect the privacy of the end user during video chat or live video broadcast. Moreover, the user terminal synthesizes the video picture from the focused target object and the blurred background object, and presents it in real time through the display unit. The user terminal can thus record a video with a blurred background without any post-processing, which solves the problems in the prior art that the user must open the recorded footage in software and manually frame-select the background region, causing inaccurate background delineation and a complicated workflow; it thereby improves the blurring effect, simplifies user operation, and saves the user's time.
In another embodiment, a binocular camera may be used to obtain the depth information of the scene, and the depth information of the scene is then used to precisely separate the target object from the background object.
Binocular vision simulates the principle of human vision and is a method of passively perceiving distance with a computer. An object is observed from two or more viewpoints to obtain images of the same object under different viewing angles; according to the pixel matching relationship between the images, the offset between pixels is computed by the principle of triangulation to obtain the three-dimensional information of the object. Once the depth information of the object is obtained, the actual distance between the object and the camera, the three-dimensional size of the object, and the actual distance between two points can all be calculated.
In the present embodiment, the depth information of the video scene can be obtained by the first camera 95 and the second camera 96. As shown in Figure 10, the first camera 95 and the second camera 96 are joined by a connecting member 90. In general, the length of the connecting member 90 is non-telescopic, which ensures that the relative position of the first camera 95 and the second camera 96 remains fixed, and thus that the two cameras can capture two video images from different viewpoints at the same moment.
The method for obtaining the depth information of the scene comprises the following steps:
Step 1: Off-line calibration
The purpose of calibration is to obtain the intrinsic parameters of the first camera 95 and the second camera 96 (focal length, image center, distortion coefficients, etc.) and the extrinsic parameters: the R (rotation) matrix and the T (translation) matrix. The most common method at present is Zhang Zhengyou's checkerboard calibration method, of which implementations exist in both OpenCV and Matlab. To obtain higher calibration accuracy, however, an industrial-grade glass calibration panel generally gives better results. Some also recommend Matlab, because its accuracy and visualization are better and its results can be saved as XML that OpenCV can read in directly, although the procedure is somewhat more cumbersome than using OpenCV alone.
The specific steps are:
(1) Calibrate the first camera 95 to obtain its intrinsic and extrinsic parameters.
(2) Calibrate the second camera 96 to obtain its intrinsic and extrinsic parameters.
(3) Perform binocular calibration to obtain the translation and rotation relationship between the first camera 95 and the second camera 96.
Step 2: Binocular rectification
The purpose of rectification is to remove the influence of optical distortion and convert the first camera 95 and the second camera 96 into the standard form, so that corresponding points in the reference image and the target image differ only in the X direction. This improves the accuracy of the disparity computation.
Rectification is divided into two steps:
1. Distortion correction;
2. Converting the first camera 95 and the second camera 96 into the standard form.
Because the rectification step recalculates the position of every pixel in the image, the higher the resolution the algorithm processes, the more time it takes, and two images generally need to be processed in real time. This algorithm parallelizes well and is highly regular, so hardware acceleration (e.g. with an IVE unit) is preferred, similar to the acceleration pattern in OpenCV: first compute the mapping map, then apply the mapping map in parallel to obtain the new pixel positions.
Step 3: Binocular matching
Binocular matching is the core of binocular depth estimation; it has been developed for many years and very many algorithms exist. Its main purpose is to compute the relative matching points between the reference image and the target image and obtain a disparity map. The algorithms are broadly divided into local and non-local ones.
There are generally the following steps:
1. Matching cost computation;
2. Cost aggregation;
3. Disparity computation/optimization;
4. Disparity refinement.
A typical local algorithm uses a fixed-size or variable-size window and computes the optimal matching position along the corresponding row; once the optimal corresponding point in a row is found, the difference between the X coordinates in the left and right views is the disparity. To increase robustness to noise and illumination, matching can be performed with a fixed window, or the images can first be transformed with LBP and then matched. Common matching cost functions include SAD, SSD, NCC, etc. A maximum disparity is generally used to limit the search range, and integral images and box filters can be used to accelerate the computation. A local matching algorithm with relatively good current results is the binocular matching algorithm based on the guided filter, using box filters and integral images. Local algorithms are easy to parallelize and fast to compute, but their results are poor in regions with little texture; the image is therefore typically segmented into texture-rich and texture-sparse regions and the matching window size adjusted accordingly, with small windows used in texture-sparse regions to improve the matching.
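A minimal fixed-window SAD matcher of the kind described above can be sketched as follows. This is an illustrative NumPy sketch, far from a production stereo matcher (no cost aggregation, no left-right check): for each left-image pixel it searches along the same row of the right image and keeps the offset with the smallest sum of absolute differences.

```python
import numpy as np

def sad_disparity(left: np.ndarray, right: np.ndarray,
                  max_disp: int, half_win: int = 2) -> np.ndarray:
    """Fixed-window SAD block matching on rectified grayscale images:
    disparity d at (y, x) minimises SAD(left window at x, right window at x-d)."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    L = np.pad(left.astype(np.float64), half_win, mode="edge")
    R = np.pad(right.astype(np.float64), half_win, mode="edge")
    k = 2 * half_win + 1
    for y in range(h):
        for x in range(w):
            patch = L[y:y + k, x:x + k]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cand = R[y:y + k, x - d:x - d + k]
                cost = np.abs(patch - cand).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

In practice one would use an optimized implementation such as OpenCV's `StereoBM` or `StereoSGBM`; this sketch only makes the cost-and-search structure of a local algorithm concrete.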
A non-local matching algorithm treats the disparity search as minimizing a loss function defined over the whole stereo pair; finding the minimum of the loss function yields the optimal disparity assignment. Such algorithms focus on solving the matching problem in ambiguous regions of the image, and mainly include dynamic programming (Dynamic Programming), belief propagation (Belief Propagation), and graph cuts (Graph Cut). The best-performing at present is the graph cut algorithm, although the graph cut matching provided in OpenCV is very time-consuming.
The graph cut algorithm was proposed mainly to solve the problem that dynamic programming cannot fuse the continuity constraints in the horizontal and vertical directions; using these constraints, the matching problem is treated as finding a minimum cut in a graph.
Since they perform global energy minimization, non-local algorithms are generally time-consuming and hard to accelerate with hardware, but they handle occlusions and texture-sparse situations better.
After the matching points are obtained, a left-right consistency check is generally performed to detect and retain matching points with high confidence. The idea is much like forward-backward optical flow matching: only points that pass the left-right consistency check are considered stable matching points. This also exposes points that were mismatched because of occlusion or noise.
Step 4: 3D distance computation
The purpose of the 3D distance computation is to calculate the actual depth of a point from the disparity, the baseline, and the intrinsic parameters.
Referring to Figure 11a, P is a point in physical space, c1 and c2 are the two cameras observing from different positions, and m and m' are the imaging positions of P in the different cameras.
According to the matching relationship of the pixels between the images, the offset between the pixels is computed by the principle of triangulation to obtain the three-dimensional information of the object. As shown in Figure 11b, P is a point in space, Ol and Or are respectively the centers of the left and right cameras, and xl and xr are the imaging points on the left and right sides.
The disparity of the imaging points of the point P is d = xl − xr, and the distance Z of the point P is calculated with the equation Z = f·T / d, where f is the focal length of the first camera 95 and the second camera 96 (the two focal lengths being equal) and T is the spacing (baseline) between the two cameras.
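The triangulation formula above is a one-liner; the following illustrative sketch (function name not from the patent) makes the units explicit: with the focal length in pixels and the baseline in meters, the disparity in pixels yields a depth in meters.

```python
def depth_from_disparity(x_l: float, x_r: float, f: float, T: float) -> float:
    """Z = f * T / d, with d = x_l - x_r the horizontal disparity between the
    left and right imaging points, f the shared focal length (in pixels)
    and T the baseline between the two cameras (in meters)."""
    d = x_l - x_r
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f * T / d
```

Larger disparities correspond to nearer points, which is exactly what lets the terminal separate a close target object from a distant background.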
Based on the above-described mobile terminal hardware configuration and communication system, embodiments of the method of the present invention are proposed.
As shown in Figure 12, an embodiment of the present invention proposes a video background blurring method, including:
S101, determining the current video scene according to the current time and the current location information;
In one embodiment, the current video scene includes:First scene, the second scene ... N scenes;It is described
It is one or more scenes in first scene to the N scenes to preset background blurring scene;Wherein, the N is certainly
So count;
It is described to determine that current video scene includes according to current time and current location information:
The moment position with the scene information table of comparisons search it is corresponding with current time and current location information ought
Preceding video scene;
Wherein, the moment position includes time period, position range and current video scene with the scene information table of comparisons
Corresponding relation.
Specifically:
when the current time is within working hours and the current location is the work address, the current video scene is determined to be the office scene;
when the current time is outside working hours and the current location is the home address, the current video scene is determined to be the home scene;
when the current location is neither the work address nor the home address, the current video scene is determined to be the common scene.
Obviously, the current video scene may also include a live-streaming scene, a teaching scene, and so on; these can be set according to the specific application, and the present invention does not specifically limit them.
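The time-and-location-to-scene lookup described in S101 can be sketched as a small table plus a query function (the concrete hours, address keys, and scene names are illustrative assumptions):

```python
from datetime import time

# A minimal sketch of the time-and-location-to-scene lookup table:
# each entry maps a time period and a location range to a scene.
SCENE_TABLE = [
    # (start, end, location, scene)
    (time(9, 0), time(18, 0), "work_address", "office"),
    (time(18, 0), time(23, 59), "home_address", "home"),
]

def current_scene(now, location):
    """Return the scene matching the current time and location."""
    for start, end, loc, scene in SCENE_TABLE:
        if start <= now <= end and location == loc:
            return scene
    return "common"  # any other location falls back to the common scene
```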
S102: judging whether the current video scene is a preset background blurring scene; if the current video scene is a preset background blurring scene, opening the background blurring mode.
In this step, the terminal user can pre-set the preset background blurring scene; for example, the home scene can be set as a background blurring scene.
S103: determining the current video object according to the contact list.
The current video object includes: a first video object, a second video object, ..., an Mth video object; for example: relatives and friends, colleagues, acquaintances, and strangers.
Determining the current video object according to the contact list includes:
obtaining the communication information of the current video object, and determining from the communication information that the current video object is one of the first video object to the Mth video object.
The communication information may include the communication object's name, contact details such as the mobile phone number and WeChat ID, and the group label of the communication object. The group label may include relatives and friends, colleague, acquaintance, stranger, and so on, and may be marked manually by the terminal user or marked automatically.
Automatic marking may be implemented as follows:
Step 1: count the number of communications between the terminal user and the communication object, where communications include voice communications such as phone calls as well as video communications.
Step 2: assign the group label of the communication object according to the communication count; for example, a communication object with fewer than 5 communications may be labelled a stranger, one with 5 to 10 an acquaintance, one with 10 to 15 a colleague, and one with more than 15 relatives and friends.
Step 3: determine from the group label that the current video object is one of the first video object to the Mth video object.
S104: when the current video object is a preset background blurring video object, opening the background blurring mode.
In this step, the terminal user can pre-set the preset background blurring video object; for example, it may be set to at least one of colleague, acquaintance, or stranger.
S105: storing the background blurring scene and/or the background blurring video object.
The background blurring scene is a scene, pre-set by the terminal user, in which the background blurring mode needs to be opened; for example, the terminal user can set the background blurring scene to the bedroom, the kitchen, or another place where privacy needs to be protected. The background blurring video object is a video object, pre-set by the user, for which the background blurring mode needs to be opened; for example, the terminal user can set the background blurring video object to father, mother, a particular gentleman or lady, or anyone else whose privacy needs to be protected.
S106: if the current video scene is the background blurring scene and/or the current video object is the background blurring video object, opening the background blurring mode.
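Steps S105-S106 amount to a membership test against the stored presets; a minimal sketch (the function and parameter names are illustrative, not from the patent):

```python
def should_blur(scene, video_object, preset_scenes, preset_objects):
    """Open the background blurring mode when the current video scene is a
    stored background blurring scene and/or the current video object is a
    stored background blurring video object."""
    return scene in preset_scenes or video_object in preset_objects
```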
It should be noted that, in this embodiment of the invention, steps S101-S102, steps S103-S104, and steps S105-S106 can be executed in an order of priority set by the user. For example, suppose the user gives steps S101-S102 the highest priority, steps S103-S104 the next highest, and steps S105-S106 the lowest. Then, when the terminal user opens the video chat function at 10:20 a.m., the terminal device first executes steps S101-S102: if, for instance, the current location of the terminal device is the work address pre-set by the user, the current video scene is determined to be the office scene; since the preset background blurring scene is the home scene, the background blurring module is not opened. Because the background blurring module was not opened after steps S101-S102, steps S103-S104 are executed next according to priority; if the background blurring module is still not opened after steps S103-S104, steps S105-S106 are executed in turn; if the background blurring module was already opened during steps S103-S104, steps S105-S106 need not be executed.
In this embodiment, according to the user's selection, it is also possible to execute only one of the groups steps S101-S102, steps S103-S104, or steps S105-S106.
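The prioritised execution described above can be sketched as a short-circuiting loop over the three check groups (function and variable names are assumptions):

```python
def run_checks(checks):
    """Run check groups in priority order (highest first).

    Each element of `checks` is a zero-argument callable standing for one
    group (S101-S102, S103-S104, or S105-S106) that returns True when it
    opens the background blurring mode. Later groups run only while the
    mode is still closed, matching the behaviour described above.
    """
    for check in checks:
        if check():
            return True  # mode opened; remaining groups are skipped
    return False
```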
The video background blurring terminal and method proposed by the present invention determine the current video scene according to the current time and current location information and determine the current video object according to the contact list, so that during video chat or live video streaming the background blurring mode can be started automatically according to the video scene or the video object. The background blurring mode protects user privacy well and improves the user experience of video chat or live video streaming.
It should be noted that, herein, the terms "including", "comprising", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disc) and includes instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the scope of its claims. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, whether used directly or indirectly in other related technical fields, is likewise included within the protection scope of the present invention.
Claims (10)
1. A video background blurring terminal, characterized in that it comprises a first judging module, a starting module, and a background blurring module, wherein:
the first judging module is configured to determine the current video scene according to the current time and current location information;
the starting module is configured to judge whether the current video scene is a preset background blurring scene and, when the current video scene is a preset background blurring scene, to open the background blurring mode;
the background blurring module is configured to, when the background blurring mode is opened, extract the background blurring region and apply blurring processing to the background blurring region.
2. The video background blurring terminal according to claim 1, characterized in that the current video scene includes: a first scene, a second scene, ..., an Nth scene; the preset background blurring scene is one or more of the first scene to the Nth scene, where N is a natural number;
the first judging module includes:
a storage unit configured to store a time-and-location-to-scene lookup table, the table containing the correspondence between time periods, location ranges, and current video scenes;
a query unit configured to search the time-and-location-to-scene lookup table for the current video scene corresponding to the current time and current location information.
3. The video background blurring terminal according to claim 1, characterized in that it further comprises:
a second judging module configured to determine the current video object according to the contact list;
the starting module being further configured to open the background blurring mode when the current video object is a preset background blurring video object.
4. The video background blurring terminal according to claim 3, characterized in that the current video object includes: a first video object, a second video object, ..., an Mth video object; the preset background blurring video object is one or more of the first video object to the Mth video object, where M is a natural number;
the second judging module is configured to obtain the communication information of the current video object and determine from the communication information that the current video object is one of the first video object to the Mth video object.
5. The video background blurring terminal according to claim 1, characterized in that it further comprises:
a memory configured to store the background blurring scene and/or the background blurring video object;
the starting module being further configured to open the background blurring mode when the current video scene is the background blurring scene and/or the current video object is the background blurring video object.
6. The video background blurring terminal according to any one of claims 1-5, characterized in that it further comprises: a first camera and a second camera, a controller connected to the first camera and the second camera respectively, and a display unit connected to the controller, wherein:
the first camera is configured to capture the target object and focus on the target object;
the second camera is configured to apply blurring processing to the background blurring region when the background blurring mode is opened;
the controller is configured to composite the focused target object with the blurred background blurring region and to control the display unit to display the composite.
7. A video background blurring method, characterized in that the method includes:
determining the current video scene according to the current time and current location information;
judging whether the current video scene is a preset background blurring scene and, if the current video scene is a preset background blurring scene, opening the background blurring mode;
when the background blurring mode is opened, extracting the background blurring region and applying blurring processing to the background blurring region.
8. The video background blurring method according to claim 7, characterized in that the current video scene includes: a first scene, a second scene, ..., an Nth scene; the preset background blurring scene is one or more of the first scene to the Nth scene, where N is a natural number;
determining the current video scene according to the current time and current location information includes:
searching a time-and-location-to-scene lookup table for the current video scene corresponding to the current time and current location information;
wherein the time-and-location-to-scene lookup table contains the correspondence between time periods, location ranges, and current video scenes.
9. The video background blurring method according to claim 7, characterized in that the method further includes:
determining the current video object according to the contact list;
when the current video object is a preset background blurring video object, opening the background blurring mode.
10. The video background blurring method according to claim 7, characterized in that the method further includes:
storing the background blurring scene and/or the background blurring video object;
if the current video scene is the background blurring scene and/or the current video object is the background blurring video object, opening the background blurring mode.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710106577.4A | 2017-02-27 | 2017-02-27 | A kind of video background blurs terminal and method |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN106878588A (en) | 2017-06-20 |
Family
ID=59169093
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201710106577.4A (Pending) | A kind of video background blurs terminal and method | 2017-02-27 | 2017-02-27 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106878588A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104378553A (en) * | 2014-12-08 | 2015-02-25 | 联想(北京)有限公司 | Image processing method and electronic equipment |
CN104580678A (en) * | 2013-10-26 | 2015-04-29 | 西安群丰电子信息科技有限公司 | Background communication implementation method for mobile phone |
CN104982029A (en) * | 2012-12-20 | 2015-10-14 | 微软技术许可有限责任公司 | CAmera With Privacy Modes |
CN105245420A (en) * | 2015-10-22 | 2016-01-13 | 小米科技有限责任公司 | Smart home furnishing controlling method and device |
CN105847674A (en) * | 2016-03-25 | 2016-08-10 | 维沃移动通信有限公司 | Preview image processing method based on mobile terminal, and mobile terminal therein |
CN105872448A (en) * | 2016-05-31 | 2016-08-17 | 宇龙计算机通信科技(深圳)有限公司 | Display method and device of video images in video calls |
CN106327473A (en) * | 2016-08-10 | 2017-01-11 | 北京小米移动软件有限公司 | Method and device for acquiring foreground images |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107493452B (en) * | 2017-08-09 | 2021-08-20 | Oppo广东移动通信有限公司 | Video picture processing method and device and terminal |
CN107493452A (en) * | 2017-08-09 | 2017-12-19 | 广东欧珀移动通信有限公司 | Video pictures processing method, device and terminal |
CN107623823B (en) * | 2017-09-11 | 2020-12-18 | Oppo广东移动通信有限公司 | Video communication background display method and device |
CN107623817B (en) * | 2017-09-11 | 2019-08-20 | Oppo广东移动通信有限公司 | Video background processing method, device and mobile terminal |
CN107623817A (en) * | 2017-09-11 | 2018-01-23 | 广东欧珀移动通信有限公司 | video background processing method, device and mobile terminal |
CN107707864A (en) * | 2017-09-11 | 2018-02-16 | 广东欧珀移动通信有限公司 | video background processing method, device and mobile terminal |
CN107623823A (en) * | 2017-09-11 | 2018-01-23 | 广东欧珀移动通信有限公司 | Video communication background display methods and device |
CN107613239A (en) * | 2017-09-11 | 2018-01-19 | 广东欧珀移动通信有限公司 | Video communication background display methods and device |
CN107613239B (en) * | 2017-09-11 | 2020-09-11 | Oppo广东移动通信有限公司 | Video communication background display method and device |
CN107493440A (en) * | 2017-09-14 | 2017-12-19 | 光锐恒宇(北京)科技有限公司 | A kind of method and apparatus of display image in the application |
CN107864357A (en) * | 2017-09-28 | 2018-03-30 | 努比亚技术有限公司 | Video calling special effect controlling method, terminal and computer-readable recording medium |
CN107680060A (en) * | 2017-09-30 | 2018-02-09 | 努比亚技术有限公司 | A kind of image distortion correction method, terminal and computer-readable recording medium |
CN108174140A (en) * | 2017-11-30 | 2018-06-15 | 维沃移动通信有限公司 | The method and mobile terminal of a kind of video communication |
CN108235054A (en) * | 2017-12-15 | 2018-06-29 | 北京奇虎科技有限公司 | A kind for the treatment of method and apparatus of live video data |
CN108600679A (en) * | 2018-01-25 | 2018-09-28 | 维沃移动通信有限公司 | A kind of video call method and terminal |
CN109087271A (en) * | 2018-09-28 | 2018-12-25 | 珠海格力电器股份有限公司 | Realize method, system and the mobile phone of video recording virtualization |
CN109379571A (en) * | 2018-12-13 | 2019-02-22 | 移康智能科技(上海)股份有限公司 | A kind of implementation method and intelligent peephole of intelligent peephole |
CN109672822A (en) * | 2018-12-29 | 2019-04-23 | 努比亚技术有限公司 | A kind of method for processing video frequency of mobile terminal, mobile terminal and storage medium |
CN110198421A (en) * | 2019-06-17 | 2019-09-03 | Oppo广东移动通信有限公司 | Method for processing video frequency and Related product |
CN110198421B (en) * | 2019-06-17 | 2021-08-10 | Oppo广东移动通信有限公司 | Video processing method and related product |
CN111028563A (en) * | 2019-11-26 | 2020-04-17 | 罗昊 | Multimedia teaching system for art design and method thereof |
CN113163153A (en) * | 2021-04-06 | 2021-07-23 | 游密科技(深圳)有限公司 | Method, device, medium and electronic equipment for processing violation information in video conference |
CN115760986B (en) * | 2022-11-30 | 2023-07-25 | 北京中环高科环境治理有限公司 | Image processing method and device based on neural network model |
CN115760986A (en) * | 2022-11-30 | 2023-03-07 | 北京中环高科环境治理有限公司 | Image processing method and device based on neural network model |
CN115883959B (en) * | 2023-02-14 | 2023-06-06 | 深圳市湘凡科技有限公司 | Picture content control method for privacy protection and related product |
CN115883959A (en) * | 2023-02-14 | 2023-03-31 | 深圳市湘凡科技有限公司 | Picture content control method for privacy protection and related product |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20170620 |