CN110213485A - Image processing method and terminal - Google Patents
- Publication number: CN110213485A
- Application number: CN201910481559.3A
- Authority
- CN
- China
- Prior art keywords
- video image
- terminal
- image
- target object
- compensated
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Environmental & Geological Engineering (AREA)
- Human Computer Interaction (AREA)
- Computer Networks & Wireless Communication (AREA)
- Telephone Function (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to the field of communication technology and provides an image processing method and terminal to solve the problem of poor privacy of video images in the prior art. The method comprises: detecting a movement parameter of a target object; obtaining a background video image; compensating the background video image according to the movement parameter; and synthesizing the compensated background video image with the video image of the target object to obtain a composite video image. In this way, the terminal can synthesize the target object in the video picture with a background video image shot in advance, display the synthesized video image without displaying the true background image, and thereby protect the user's privacy.
Description
Technical field
The present invention relates to the field of communication technology, and in particular to an image processing method and a terminal.
Background technique
With the development of communication technology, shooting video images with a terminal has become increasingly common. For example, a user can shoot video anywhere and post the video on a social platform; as another example, a user can use video call technology to show their image to the other party in real time, realizing face-to-face communication between users. However, in some scenes, for example when the user is in a private environment or when the shot video image contains private information, the video image shot by the terminal may leak the user's privacy.

As can be seen, in the prior art, the privacy of video images is poor.
Summary of the invention
The embodiments of the present invention provide an image processing method and terminal to solve the problem of poor privacy of video images in the prior art.

To solve the above technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides an image processing method applied to a first terminal, comprising:

detecting a movement parameter of a target object;

obtaining a background video image;

compensating the background video image according to the movement parameter;

synthesizing the compensated background video image with the video image of the target object to obtain a composite video image.
In a second aspect, an embodiment of the present invention provides a terminal, the terminal being a first terminal, comprising:

a detection module for detecting a movement parameter of a target object;

an obtaining module for obtaining a background video image;

a compensating module for compensating the background video image according to the movement parameter;

a synthesis module for synthesizing the compensated background video image with the video image of the target object to obtain a composite video image.
In a third aspect, an embodiment of the present invention further provides a terminal, comprising: a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the computer program, realizes the steps of the image processing method described above.

In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, realizes the steps of the image processing method described above.
In the embodiment of the present invention, the movement parameter of a target object is detected; a background video image is obtained; the background video image is compensated according to the movement parameter; and the compensated background video image is synthesized with the video image of the target object to obtain a composite video image. In this way, the terminal can synthesize the target object in the video picture with a background video image shot in advance, display the synthesized video image without displaying the true background image, and thereby protect the user's privacy.
Detailed description of the invention
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments of the present invention are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without any creative labor.
Fig. 1 is a first flow chart of an image processing method provided by an embodiment of the present invention;

Fig. 1a is a second flow chart of the image processing method provided by an embodiment of the present invention;

Fig. 1b is a schematic diagram of a terminal interface provided by an embodiment of the present invention;

Fig. 2 is a third flow chart of the image processing method provided by an embodiment of the present invention;

Fig. 3 is a fourth flow chart of the image processing method provided by an embodiment of the present invention;

Fig. 4 is a first structural diagram of a first terminal provided by an embodiment of the present invention;

Fig. 5 is a structural diagram of the compensating module in the first terminal provided by an embodiment of the present invention;

Fig. 6 is a structural diagram of the detection module in the first terminal provided by an embodiment of the present invention;

Fig. 7 is a second structural diagram of the first terminal provided by an embodiment of the present invention;

Fig. 8 is a third structural diagram of the first terminal provided by an embodiment of the present invention;

Fig. 9 is a fourth structural diagram of the first terminal provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described below clearly and completely in combination with the drawings in the embodiments of the present invention. Obviously, the described embodiments are some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, Fig. 1 is a flow chart of an image processing method provided by an embodiment of the present invention. The method can be applied to a first terminal and, as shown in Fig. 1, comprises the following steps:
Step 101: detect a movement parameter of a target object.
Wherein, the first terminal can be any terminal; it is named the first terminal here merely to distinguish it from a second terminal.

The target object can be any object in the video image, for example a user or a dog. The video image can be one stored by the first terminal, for example one received from another terminal or shot and stored by the first terminal itself, or it can be acquired in real time. For example, when the first terminal and the second terminal have established a video call connection, the first terminal acquires video images in real time and detects the movement parameter of the target object. In this scene, the video call connection between the first terminal and the second terminal can be established in any way, for example a connection made through a social application or a connection made during a phone call, and it can be established by a calling or a called procedure.

The first terminal can obtain the movement parameter of the target object, for example the movement rate, movement direction, or movement distance. Specifically, the movement parameter of the target object can be determined by comparing images at different moments, or it can be obtained in other ways.
Step 102: obtain a background video image.
The background video image can be one shot in advance by the first terminal, or one received by the first terminal from another terminal or obtained in other ways. Preferably, the background video image is a panoramic video image, so that a larger viewing angle is available when compensating the background image, which improves the image quality and thus the quality of the composite image.
Step 103: compensate the background video image according to the movement parameter.
To improve the fusion of the target object with the background video image, the background video image is compensated when the target object moves. The compensated region may include the border region and may also include the central region. In this way, the background video image moves with the movement of the target object, which improves the quality of the composite image.
Step 104: synthesize the compensated background video image with the video image of the target object to obtain a composite video image.
Wherein, the video image of the target object can be a video image that includes only the target object, that is, a video image including only the target object extracted from the initial video image acquired from the camera. By synthesizing the video image of the target object with the background video image, the background of the composite video image is the background video image obtained in advance rather than the real background image acquired by the first terminal in real time, which protects the user's privacy.

After obtaining the composite video image, the first terminal can send the composite video image to the second terminal, so that the second terminal can only obtain the composite video image rather than the real background image acquired in real time by the first terminal, which protects the user's privacy.
To facilitate understanding, the embodiment is illustrated below with a specific implementation scene.

First, the background video image is made. The user can shoot a favorite background with a panoramic shot, or collect pictures and synthesize a panoramic background image from them. The produced background video image is stored on the first terminal, and a video call application is granted access to it.

As shown in Fig. 1a, when the first terminal receives a video call request, the user can select the identifier corresponding to replacing the background image and select the corresponding background video image, and the first terminal starts the video call. The case of actively sending a video request is the same. If the background video image does not need to be replaced, the user can select the video call identifier so that the terminal directly initiates the video call.
For example, as shown in Fig. 1b, when the first terminal receives a video call request, the interface displays a call request icon 1, a replace-video-background icon 2, and an accept-video-call icon 3. The user can replace the background by operating icon 2.
During the video call, the camera of the first terminal refreshes at a certain frame rate. Meanwhile, the camera obtains the size and movement rate of the user's portrait and, together with the size and movement rate of the background video image, compensates and refreshes the background video image to generate the compensated background video image, which is then fused with the user image, realizing real-time fusion of the user portrait and the background video image.
When the video call ends, the camera stops working, and background motion compensation and background refreshing stop.
In the embodiment of the present invention, the above image processing method can be applied to a terminal such as a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), or a wearable device.
In the image processing method of the embodiment of the present invention, the movement parameter of a target object is detected; a background video image is obtained; the background video image is compensated according to the movement parameter; and the compensated background video image is synthesized with the video image of the target object to obtain a composite video image. In this way, the terminal can synthesize the target object in the video picture with a background video image shot in advance, display the synthesized video image without displaying the true background image, and thereby improve the privacy of the information.
Referring to Fig. 2, the main difference between this embodiment and the above embodiment is that the movement parameter includes a movement rate and a movement direction.
Fig. 2 is a flow chart of an image processing method provided by an embodiment of the present invention. As shown in Fig. 2, the method comprises the following steps:

Step 201: detect a movement parameter of a target object, the movement parameter including a movement rate and a movement direction.
Optionally, detecting the movement parameter of the target object comprises:

in a target video image, obtaining the number of pixels the target object has moved;

calculating the movement rate of the target object based on the number of pixels and the frame rate of the camera of the first terminal;

wherein the target video image includes an image of the target object.
In this implementation, the target video image can be acquired by the first terminal or obtained in other ways, and the first terminal obtains the number of pixels the target object moves in the target video image. For example, the terminal obtains two adjacent frames of the video image and determines that the user's eyes or ears move H pixels laterally and V pixels longitudinally. Then the user's lateral movement rate is ah = F × H and the longitudinal movement rate is av = F × V, where F is the frame rate of the camera when capturing the target object.

In this way, the movement rate of the target object can be obtained quickly, so that the background video image can be compensated according to the movement rate, which improves the image synthesis quality. This implementation can also be applied in the embodiment corresponding to Fig. 1 and achieves the same beneficial effect.
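The rate computation above is simple enough to state directly; a minimal sketch (the function name is an assumption, units are pixels per second):

```python
def movement_rate(pixels_h, pixels_v, frame_rate):
    """Lateral and longitudinal movement rates of a tracked feature.

    With H pixels of lateral and V pixels of longitudinal displacement
    between two adjacent frames, and camera frame rate F, the embodiment
    gives ah = F * H and av = F * V.
    """
    ah = frame_rate * pixels_h  # lateral rate
    av = frame_rate * pixels_v  # longitudinal rate
    return ah, av
```

For example, at 30 frames per second a feature that shifts 3 pixels laterally and 2 pixels longitudinally per frame moves at (90, 60) pixels per second.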
Optionally, before detecting the movement parameter of the target object, the method further comprises:

acquiring a target video image, the target video image including an image of the target object;

in the target video image, identifying the target object that matches the features of a preset object.

In this implementation, the first terminal acquires a target video image that includes the target object, and identifies the target object in the target video image.

For example, the target object is a user, and the target video image acquired by the first terminal includes multiple face images; the first terminal identifies the image whose features match the user's facial features and determines it as the target object, so that all images except the user image are replaced with the background video image obtained in advance. In this way, the user's privacy can be protected. This implementation can also be applied in the embodiment corresponding to Fig. 1 and achieves the same beneficial effect.
Step 202: obtain a background video image.
Step 203: determine a region to be compensated and a compensation rate of the background video image according to the movement rate and the movement direction.

According to the movement rate, the compensation rate can be determined; according to the movement direction, the region to be compensated, that is, the region where image content needs to be added, can be determined. For example, if the user holding the camera moves to the left, the background in the video image moves to the right, so the region on the left side of the background picture needs to be compensated according to the speed at which the user moves.
Step 204: compensate the region to be compensated according to the compensation rate.

According to the compensation rate, the region to be compensated is compensated gradually, and the picture in the region to be compensated is built up step by step.
For example, given the user's lateral movement rate ah = F × H and longitudinal movement rate av = F × V, the movement rates of the feature object in the background video image can be determined as:

movement rate in the horizontal direction: oh = (-1) × ah × (U/L) = (-1) × F × H × (U/L);

movement rate in the longitudinal direction: ov = (-1) × av × (U/L) = (-1) × F × V × (U/L);

where U/L is the motion compensation coefficient of the user subject relative to the background, and -1 indicates that the movement direction of the background is opposite to that of the user subject.
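A minimal sketch of these two formulas (the coefficient U/L is taken as a given input, since its derivation is not specified in the text, and the function name is an assumption):

```python
def background_rates(ah, av, u_over_l):
    """Movement rates applied to the background video image.

    The factor -1 moves the background opposite to the user subject;
    u_over_l is the motion compensation coefficient U/L from the text.
    """
    oh = -1 * ah * u_over_l  # horizontal direction
    ov = -1 * av * u_over_l  # longitudinal direction
    return oh, ov
```

For example, with ah = 90, av = 60, and U/L = 0.5, the background moves at (-45.0, -30.0) pixels per second, opposite to the subject.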
Step 205: synthesize the compensated background video image with the video image of the target object to obtain a composite video image.

Optionally, the background video image includes N frames of images and the composite video image includes M frames of images, wherein N ≥ M + 1, and M and N are positive integers.
In this implementation, a subset of the frames of the background video image with better quality can be obtained and synthesized with the video image of the target object. In this way, a composite video image with better quality can be obtained, the synthesis efficiency can be improved, and memory can be saved. This implementation can also be applied in the embodiment corresponding to Fig. 1 and achieves the same beneficial effect.

The implementations of step 201, step 202, and step 205 may refer to the description in the above embodiment and, to avoid repetition, are not described here again.
Optionally, after synthesizing the compensated background video image with the video image of the target object to obtain the composite video image, the method further comprises:

sending the composite video image to the second terminal, so that the second terminal displays the composite video image.

In this implementation, after obtaining the composite video image, the first terminal can send the composite video image to the second terminal, so that the second terminal can only obtain the composite video image rather than the real background image, which protects the user's privacy.
When this implementation is applied in a video call scene, the first terminal, having established a video call connection with the second terminal, sends the composite video image to the second terminal, so that the second terminal displays the composite video image in the video call interface. In this way, the user of the second terminal can only obtain the replaced background video image, which protects the privacy of the user of the first terminal. This implementation can also be applied in the embodiment corresponding to Fig. 1 and achieves the same beneficial effect.
Optionally, the composite video image includes M frames of images;

sending the composite video image to the second terminal comprises:

sending some or all of the M frames of images to the second terminal.

In this implementation, after the first terminal obtains the composite video image including M frames, the clearer part of the M frames, or all of them, can be obtained. For example, if the composite video image includes 100 frames, 25 frames with better quality are picked evenly from the 100 frames to form a video image, and the video image including those 25 frames is sent to the second terminal. In this way, the video image can be kept clearer and the download efficiency can be improved. When the network is good, all frames can be sent to the second terminal to improve the quality of the video image.
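The even 25-of-100 selection can be sketched as follows (simple uniform index spacing; a real implementation would also weigh per-frame quality, which the text mentions but does not specify):

```python
def pick_frames(frames, m):
    """Pick m frames spread uniformly across the composite video."""
    step = len(frames) / m           # e.g. 100 / 25 = 4.0
    return [frames[int(i * step)] for i in range(m)]
```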
Optionally, the composite video image includes M frames of images;

the first terminal successively sends frames 1 to K of the video image to the second terminal, wherein M ≥ K ≥ 1. When the second terminal determines that the first K frames of the video image meet the user demand of the second terminal, the specific value of K is determined. The above "successively sending frames 1 to K" can be understood as successively sending the 1st frame, the 2nd frame, and so on up to the Kth frame; it can also be understood as successively sending 1 frame, 2 frames, and so on up to K frames.

For example, the composite video image includes 100 frames, and the first terminal successively sends the 1st to 30th frames to the second terminal. When the first terminal has sent the 1st to 29th frames, the first 29 frames do not meet the user demand of the second terminal; when the first terminal sends the 30th frame, the second terminal determines that the first 30 frames meet its user demand, so K = 30.

In this way, when the video image meets the user demand of the second terminal, the network occupancy of the video transmission is reduced, improving upload and download efficiency.
In the image processing method of the embodiment of the present invention, the rate and the region that need to be compensated are determined according to the movement rate and the movement direction, which improves the fusion of the background video image with the user.
Referring to Fig. 3, Fig. 3 is a flow chart of an image processing method provided by an embodiment of the present invention. The difference between this embodiment and the above embodiments is that the method is described from the perspective of the second terminal. The method comprises the following steps:

Step 301: in the case of having established a video call connection with the first terminal, receive the composite video image sent by the first terminal.

The video call connection between the second terminal and the first terminal can be established by a calling or a called procedure.

Step 302: display the composite video image in the video call interface.

Wherein, the composite video image is a video image formed by the first terminal synthesizing the video image of a target object with a background video image.

In this step, the second terminal can display the composite video image in the call interface for the corresponding user to view; when the interface of the second terminal displays other content, the second terminal can also hide the composite video image.

In the embodiment of the present invention, the second terminal displays the composite video image in the call interface to show it to the user, which protects the privacy of the user of the first terminal, beautifies the call interface, and does not affect the user's video call.
Referring to Fig. 4, Fig. 4 is a structural diagram of a terminal provided by an embodiment of the present invention, the terminal being a first terminal. As shown in Fig. 4, the first terminal 400 includes:

a detection module 401 for detecting a movement parameter of a target object;

an obtaining module 402 for obtaining a background video image;

a compensating module 403 for compensating the background video image according to the movement parameter;

a synthesis module 404 for synthesizing the compensated background video image with the video image of the target object to obtain a composite video image.
Optionally, as shown in Fig. 5, the movement parameter includes a movement rate and a movement direction;

the compensating module 403 includes:

a determining submodule 4031 for determining a region to be compensated and a compensation rate of the background video image according to the movement rate and the movement direction;

a compensating submodule 4032 for compensating the region to be compensated according to the compensation rate.
Optionally, as shown in Fig. 6, the detection module 401 includes:

an acquisition submodule 4011 for obtaining, in a target video image, the number of pixels the target object has moved;

a calculation submodule 4012 for calculating the movement rate of the target object based on the number of pixels and the frame rate of the camera of the first terminal;

wherein the target video image includes an image of the target object.
Optionally, the background video image includes N frames of images and the composite video image includes M frames of images, wherein N ≥ M + 1, and M and N are positive integers.
Optionally, as shown in Fig. 7, the terminal further includes:

an acquisition module 405 for acquiring a target video image, the target video image including an image of the target object;

an identification module 406 for identifying, in the target video image, the target object that matches the features of a preset object.

Optionally, as shown in Fig. 8, the terminal further includes:

a sending module 407 for sending the composite video image to the second terminal, so that the second terminal displays the composite video image.
The first terminal 400 can realize each process realized by the terminal in the above method embodiments and, to avoid repetition, they are not described here again.

With the first terminal 400 of the embodiment of the present invention, the terminal can synthesize the target object in the video picture with a background video image shot in advance, display the synthesized video image without displaying the true background image, and thereby protect the user's privacy.
Fig. 9 is a schematic diagram of the hardware structure of a terminal for realizing each embodiment of the present invention. The terminal 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, a processor 910, a power supply 911, and other components. Those skilled in the art will appreciate that the terminal structure shown in Fig. 9 does not limit the terminal; the terminal may include more or fewer components than shown, combine certain components, or arrange the components differently. In the embodiment of the present invention, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a laptop, a palmtop computer, a vehicle-mounted mobile terminal, a wearable device, a pedometer, and the like.
Wherein, the processor 910 is configured to detect a movement parameter of a target object; obtain a background video image; compensate the background video image according to the movement parameter; and synthesize the compensated background video image with the video image of the target object to obtain a composite video image.

In this way, the terminal can synthesize the target object in the video picture with a background video image shot in advance, display the synthesized video image without displaying the true background image, and thereby protect the user's privacy.
Optionally, the movement parameter includes a movement rate and a movement direction, and the processor 910, when compensating the background video image according to the movement parameter, is configured to:

determine a region to be compensated and a compensation rate of the background video image according to the movement rate and the movement direction;

compensate the region to be compensated according to the compensation rate.
Optionally, the processor 910, when detecting the movement parameter of the target object, is configured to:

in a target video image, obtain the number of pixels the target object has moved;

calculate the movement rate of the target object based on the number of pixels and the frame rate of the camera of the first terminal;

wherein the target video image includes an image of the target object.

Optionally, the background video image includes N frames of images and the composite video image includes M frames of images, wherein N ≥ M + 1, and M and N are positive integers.
Optionally, before detecting the movement parameter of the target object, the processor 910 is further configured to:

acquire a target video image, the target video image including an image of the target object;

in the target video image, identify the target object that matches the features of a preset object.

Optionally, after synthesizing the compensated background video image with the video image of the target object to obtain the composite video image, the processor 910 is further configured to:

send the composite video image to the second terminal, so that the second terminal displays the composite video image.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 901 can be used for receiving and sending signals during messaging or a call. Specifically, it receives downlink data from a base station and forwards it to the processor 910 for processing, and sends uplink data to the base station. In general, the radio frequency unit 901 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 901 can also communicate with the network and other devices through a wireless communication system.
The terminal provides the user with wireless broadband Internet access through the network module 902, for example, helping the user to send and receive e-mail, browse web pages, and access streaming media.
The audio output unit 903 may convert audio data, received by the radio frequency unit 901 or the network module 902 or stored in the memory 909, into an audio signal and output it as sound. Moreover, the audio output unit 903 may also provide audio output related to a specific function performed by the terminal 900 (for example, a call signal receiving sound or a message receiving sound). The audio output unit 903 includes a loudspeaker, a buzzer, a receiver, and the like.
The input unit 904 is configured to receive an audio or video signal. The input unit 904 may include a graphics processing unit (GPU) 9041 and a microphone 9042. The graphics processing unit 9041 processes image data of a still picture or a video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. A processed image frame may be displayed on the display unit 906. The image frame processed by the graphics processing unit 9041 may be stored in the memory 909 (or another storage medium) or sent via the radio frequency unit 901 or the network module 902. The microphone 9042 can receive sound and process such sound into audio data. In a telephone call mode, the processed audio data may be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 901 for output.
The terminal 900 further includes at least one sensor 905, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 9061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 9061 and/or the backlight when the terminal 900 is moved close to the ear. As a motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally along three axes), can detect the magnitude and direction of gravity when the terminal is static, and can be used to identify the terminal posture (such as landscape/portrait switching, related games, and magnetometer pose calibration) and vibration-recognition related functions (such as a pedometer and tap detection). The sensor 905 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like; details are not described herein.
The display unit 906 is configured to display information input by the user or information provided to the user. The display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 907 may be configured to receive input digital or character information and to generate key signal input related to user settings and function control of the terminal. Specifically, the user input unit 907 includes a touch panel 9071 and other input devices 9072. The touch panel 9071, also referred to as a touch screen, may collect a touch operation performed by the user on or near it (for example, an operation performed by the user on or near the touch panel 9071 using any suitable object or accessory, such as a finger or a stylus). The touch panel 9071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts it into contact coordinates, sends the coordinates to the processor 910, and receives and executes commands sent by the processor 910. Moreover, the touch panel 9071 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 9071, the user input unit 907 may further include other input devices 9072. Specifically, the other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key and a power key), a trackball, a mouse, and a joystick; details are not described herein.
Further, the touch panel 9071 may cover the display panel 9061. After detecting a touch operation on or near it, the touch panel 9071 transmits the operation to the processor 910 to determine the type of the touch event, and the processor 910 then provides corresponding visual output on the display panel 9061 according to the type of the touch event. Although in Fig. 9 the touch panel 9071 and the display panel 9061 implement the input and output functions of the terminal as two independent components, in some embodiments the touch panel 9071 and the display panel 9061 may be integrated to implement the input and output functions of the terminal; this is not specifically limited herein.
The interface unit 908 is an interface through which an external device is connected to the terminal 900. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 908 may be configured to receive input (for example, data information or power) from an external device and transfer the received input to one or more elements in the terminal 900, or may be configured to transfer data between the terminal 900 and an external device.
The memory 909 may be configured to store software programs and various data. The memory 909 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book), and the like. In addition, the memory 909 may include a high-speed random access memory, and may further include a non-volatile memory, for example, at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage component.
The processor 910 is the control center of the terminal and connects the various parts of the entire terminal using various interfaces and lines. By running or executing the software programs and/or modules stored in the memory 909 and calling the data stored in the memory 909, the processor 910 performs the various functions of the terminal and processes data, thereby monitoring the terminal as a whole. The processor 910 may include one or more processing units; preferably, the processor 910 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 910.
The terminal 900 may further include a power supply 911 (such as a battery) that supplies power to all components. Preferably, the power supply 911 may be logically connected to the processor 910 through a power management system, thereby implementing functions such as charging management, discharging management, and power consumption management through the power management system.
In addition, the terminal 900 includes some functional modules that are not shown; details are not described herein.
Preferably, an embodiment of the present invention further provides a terminal, including a processor 910, a memory 909, and a computer program stored in the memory 909 and executable on the processor 910. When the computer program is executed by the processor 910, each process of the foregoing image processing method embodiment on the first terminal side is implemented, and the same technical effect can be achieved; to avoid repetition, details are not described herein again.
For the hardware structure of the second terminal provided by the present invention, reference may also be made to Fig. 9.
The processor 910 is configured to: receive, in a case in which a video call connection has been established with a first terminal, a composite video image sent by the first terminal; and
display the composite video image in a video call interface;
wherein the composite video image is a video image formed by the first terminal by synthesizing a video image of a target object with a background video image.
Preferably, an embodiment of the present invention further provides a terminal, including a processor 910, a memory 909, and a computer program stored in the memory 909 and executable on the processor 910. When the computer program is executed by the processor 910, each process of the foregoing image processing method embodiment on the second terminal side is implemented, and the same technical effect can be achieved; to avoid repetition, details are not described herein again.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, each process of the foregoing method embodiments corresponding to the first terminal and the second terminal is implemented, and the same technical effect can be achieved; to avoid repetition, details are not described herein again. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements not only includes those elements but also includes other elements not expressly listed, or further includes elements inherent to such a process, method, article, or apparatus. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes the element.
Through the description of the foregoing embodiments, a person skilled in the art can clearly understand that the methods of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention are described above with reference to the accompanying drawings, but the present invention is not limited to the foregoing specific implementations, which are merely illustrative rather than restrictive. Inspired by the present invention, a person of ordinary skill in the art may derive many further forms without departing from the purpose of the present invention and the scope of protection of the claims, all of which fall within the protection of the present invention.
Claims (14)
1. An image processing method, applied to a first terminal, comprising:
detecting a movement parameter of a target object;
obtaining a background video image;
compensating the background video image according to the movement parameter; and
synthesizing the compensated background video image with a video image of the target object to obtain a composite video image.
2. The method according to claim 1, wherein the movement parameter comprises a movement rate and a movement direction, and the compensating the background video image according to the movement parameter comprises:
determining, according to the movement rate and the movement direction, a region to be compensated and a compensation rate of the background video image; and
compensating the region to be compensated according to the compensation rate.
3. The method according to claim 1, wherein the detecting a movement parameter of a target object comprises:
obtaining, in a target video image, the number of pixels by which the target object has moved; and
calculating the movement rate of the target object based on the number of pixels and a frame rate of a camera of the first terminal;
wherein the target video image comprises an image of the target object.
4. The method according to any one of claims 1 to 3, wherein the background video image comprises N frames of images and the composite video image comprises M frames of images, wherein N >= M+1, and M and N are positive integers.
5. The method according to claim 1, wherein before the detecting a movement parameter of a target object, the method further comprises:
acquiring a target video image, wherein the target video image comprises an image of the target object; and
identifying, in the target video image, a target object whose features match those of a preset object.
6. The method according to claim 1, wherein after the synthesizing the compensated background video image with a video image of the target object to obtain a composite video image, the method further comprises:
sending the composite video image to a second terminal, so that the second terminal displays the composite video image.
7. A terminal, wherein the terminal is a first terminal, comprising:
a detection module, configured to detect a movement parameter of a target object;
an obtaining module, configured to obtain a background video image;
a compensation module, configured to compensate the background video image according to the movement parameter; and
a synthesis module, configured to synthesize the compensated background video image with a video image of the target object to obtain a composite video image.
8. The terminal according to claim 7, wherein the movement parameter comprises a movement rate and a movement direction, and the compensation module comprises:
a determination submodule, configured to determine, according to the movement rate and the movement direction, a region to be compensated and a compensation rate of the background video image; and
a compensation submodule, configured to compensate the region to be compensated according to the compensation rate.
9. The terminal according to claim 7, wherein the detection module comprises:
an obtaining submodule, configured to obtain, in a target video image, the number of pixels by which the target object has moved; and
a calculation submodule, configured to calculate the movement rate of the target object based on the number of pixels and a frame rate of a camera of the first terminal;
wherein the target video image comprises an image of the target object.
10. The terminal according to any one of claims 7 to 9, wherein the background video image comprises N frames of images and the composite video image comprises M frames of images, wherein N >= M+1, and M and N are positive integers.
11. The terminal according to claim 7, wherein the terminal further comprises:
an acquisition module, configured to acquire a target video image, wherein the target video image comprises an image of the target object; and
an identification module, configured to identify, in the target video image, a target object whose features match those of a preset object.
12. The terminal according to claim 7, wherein the terminal further comprises:
a sending module, configured to send the composite video image to a second terminal, so that the second terminal displays the composite video image.
13. A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps in the image processing method according to any one of claims 1 to 6.
14. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the image processing method according to any one of claims 1 to 6 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910481559.3A CN110213485B (en) | 2019-06-04 | 2019-06-04 | Image processing method and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110213485A true CN110213485A (en) | 2019-09-06 |
CN110213485B CN110213485B (en) | 2021-01-08 |
Family
ID=67790577
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910481559.3A Active CN110213485B (en) | 2019-06-04 | 2019-06-04 | Image processing method and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110213485B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111131744A (en) * | 2019-12-26 | 2020-05-08 | 杭州当虹科技股份有限公司 | Privacy protection method based on video communication |
CN111405142A (en) * | 2020-03-30 | 2020-07-10 | 咪咕视讯科技有限公司 | Image processing method, device and computer readable storage medium |
CN112511741A (en) * | 2020-11-25 | 2021-03-16 | 努比亚技术有限公司 | Image processing method, mobile terminal and computer storage medium |
CN112653851A (en) * | 2020-12-22 | 2021-04-13 | 维沃移动通信有限公司 | Video processing method and device and electronic equipment |
CN113973190A (en) * | 2021-10-28 | 2022-01-25 | 联想(北京)有限公司 | Video virtual background image processing method and device and computer equipment |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100066910A1 (en) * | 2007-04-02 | 2010-03-18 | Kouji Kataoka | Video compositing method and video compositing system |
CN102405642A (en) * | 2009-03-10 | 2012-04-04 | 索尼公司 | Information processing device and method, and information processing system |
CN103607554A (en) * | 2013-10-21 | 2014-02-26 | 无锡易视腾科技有限公司 | Fully-automatic face seamless synthesis-based video synthesis method |
CN105450971A (en) * | 2014-08-15 | 2016-03-30 | 深圳Tcl新技术有限公司 | Privacy protection method and device of video call and television |
CN105898183A (en) * | 2016-04-26 | 2016-08-24 | 努比亚技术有限公司 | Method for controlling video call and mobile terminal |
CN106331521A (en) * | 2015-06-29 | 2017-01-11 | 天津万象科技发展有限公司 | Film and television production system based on combination of network virtual reality and real shooting |
US20170213371A1 (en) * | 2014-08-06 | 2017-07-27 | Nubia Technology Co., Ltd. | Method, device and storage medium for image synthesis |
CN107483857A (en) * | 2017-08-16 | 2017-12-15 | 卓智网络科技有限公司 | Micro- class method for recording and device |
CN107592490A (en) * | 2017-09-11 | 2018-01-16 | 广东欧珀移动通信有限公司 | Video background replacement method, device and mobile terminal |
CN108234825A (en) * | 2018-01-12 | 2018-06-29 | 广州市百果园信息技术有限公司 | Method for processing video frequency and computer storage media, terminal |
CN108537741A (en) * | 2017-03-03 | 2018-09-14 | 佳能株式会社 | Image processing apparatus and the control method for controlling image processing apparatus |
CN108737765A (en) * | 2018-08-02 | 2018-11-02 | 广东小天才科技有限公司 | A kind of video calling processing method, device, terminal device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110213485B (en) | 2021-01-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107509038B (en) | A kind of image pickup method and mobile terminal | |
CN107566730B (en) | A kind of panoramic picture image pickup method and mobile terminal | |
CN110213485A (en) | A kind of image processing method and terminal | |
CN107566748A (en) | A kind of image processing method, mobile terminal and computer-readable recording medium | |
CN107817939A (en) | A kind of image processing method and mobile terminal | |
CN108833753A (en) | A kind of image obtains and application method, terminal and computer readable storage medium | |
CN107682639B (en) | A kind of image processing method, device and mobile terminal | |
CN109688322A (en) | A kind of method, device and mobile terminal generating high dynamic range images | |
CN107566749A (en) | Image pickup method and mobile terminal | |
CN109743498A (en) | A kind of shooting parameter adjustment method and terminal device | |
CN109544486A (en) | A kind of image processing method and terminal device | |
CN107846583A (en) | A kind of image shadow compensating method and mobile terminal | |
CN108320263A (en) | A kind of method, device and mobile terminal of image procossing | |
CN107886321A (en) | A kind of method of payment and mobile terminal | |
CN107845057A (en) | One kind is taken pictures method for previewing and mobile terminal | |
CN108564613A (en) | A kind of depth data acquisition methods and mobile terminal | |
CN108391123A (en) | A kind of method and terminal generating video | |
CN107959755A (en) | A kind of photographic method and mobile terminal | |
CN109005314A (en) | A kind of image processing method and terminal | |
CN109639981A (en) | A kind of image capturing method and mobile terminal | |
CN109729336A (en) | A kind of display methods and device of video image | |
CN107566738A (en) | A kind of panorama shooting method, mobile terminal and computer-readable recording medium | |
CN108259739A (en) | A kind of method, device and mobile terminal of image taking | |
CN107734269A (en) | A kind of image processing method and mobile terminal | |
CN107995425B (en) | A kind of image processing method and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||