CN110087014A - Video complementing method, terminal and computer readable storage medium - Google Patents

Video complementing method, terminal and computer readable storage medium

Info

Publication number
CN110087014A
CN110087014A (application CN201910356614.6A)
Authority
CN
China
Prior art keywords
video
data
data stream
call data
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910356614.6A
Other languages
Chinese (zh)
Other versions
CN110087014B (en)
Inventor
王良兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201910356614.6A priority Critical patent/CN110087014B/en
Publication of CN110087014A publication Critical patent/CN110087014A/en
Application granted granted Critical
Publication of CN110087014B publication Critical patent/CN110087014B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

This application provides a video completion method. The method includes: when a video call is in progress, controlling a mobile terminal and a wearable device to capture video data of the current environment; analyzing the video data according to a preset algorithm to extract the video user's current video call data; obtaining the video call data stream transmitted by the network side and judging whether it is complete; and, if the video call data stream transmitted by the network side is incomplete, filling the current video call data into the video call data stream so as to complete the stream. In this way, an incomplete video call data stream transmitted by the network side can be filled in and completed, improving the user experience.

Description

Video complementing method, terminal and computer readable storage medium
Technical field
This application relates to the technical field of image processing, and in particular to a video completion method, a terminal, and a computer-readable storage medium.
Background technique
With the rapid development of networks and electronic technology and the widespread adoption of terminals, terminal functionality has grown increasingly powerful. More and more terminals are equipped with cameras, through which users can take photos, record video, video chat, live-stream over the network, and so on.
However, when a terminal uses the network for video chat and similar activities, a poor network connection easily causes mosaic artifacts, green frames, stuttering, and similar problems in the video, seriously degrading the visual effect and giving the user a bad experience. How to mitigate the mosaic artifacts, green frames, and stuttering that appear in video, so as to give users a better video experience, has therefore become an urgent problem for those skilled in the art.
Summary of the invention
The main purpose of this application is to provide a video completion method, a terminal, and a computer-readable storage medium, aiming to fill in an incomplete video call data stream transmitted by the network side so as to complete the stream and improve the user experience.
To achieve the above object, this application provides a video completion method, which includes:
when a video call is in progress, controlling a mobile terminal and a wearable device to capture video data of the current environment;
analyzing the video data according to a preset algorithm to extract the video user's current video call data;
obtaining the video call data stream transmitted by the network side and judging whether it is complete; and
if the video call data stream transmitted by the network side is incomplete, filling the current video call data into the video call data stream so as to complete the stream.
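As a rough illustration only — the patent does not specify an implementation — the steps above can be sketched as a single processing pass, with each step supplied as a callback. All function names and the callback structure here are hypothetical:

```python
def video_completion_step(capture_local, extract_user_data,
                          fetch_stream, is_complete, fill):
    """One pass of the described method: capture the local environment,
    extract the user's current call data, fetch the network-side stream,
    and patch the stream with the local data if it is incomplete."""
    local_video = capture_local()            # mobile terminal + wearable device
    user_data = extract_user_data(local_video)
    stream = fetch_stream()                  # data stream from the network side
    if not is_complete(stream):
        stream = fill(stream, user_data)     # completion step
    return stream
```

With trivial stand-ins, `video_completion_step(lambda: "v", lambda v: "u", lambda: "broken", lambda s: False, lambda s, u: s + "+u")` returns `"broken+u"`, i.e. the incomplete stream patched with the locally extracted data.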
Optionally, the step of analyzing the video data according to the preset algorithm to extract the video user's current video call data includes:
obtaining characteristic information of the video user, and analyzing the video data according to the preset algorithm to extract the video data matching the characteristic information, where the extracted video data is the video user's current video call data.
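A hedged sketch of this extraction step: the patent only says that video data matching the user's characteristic information is extracted, without naming a feature model or similarity measure, so the feature extractor, similarity function, and the 0.8 threshold below are all assumptions made for illustration:

```python
def extract_user_video(frames, user_feature, extract_feature, similarity,
                       threshold=0.8):
    """Keep only the captured frames whose features match the video user's
    registered characteristic information (e.g. face features). The matched
    frames form the user's current video call data."""
    return [frame for frame in frames
            if similarity(extract_feature(frame), user_feature) >= threshold]
```

For example, with an identity extractor and exact-match similarity, `extract_user_video([1, 2, 3], 2, lambda f: f, lambda a, b: 1.0 if a == b else 0.0)` keeps only `[2]`.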
Optionally, after the step of analyzing the video data according to the preset algorithm to extract the video user's current video call data, the method further includes:
sending the video user's current video call data to the wearable device, so that the wearable device stores the current video call data; or
sending the video user's current video call data to the mobile terminal, so that the mobile terminal stores the current video call data.
Optionally, the video call data includes dynamic video data and audio data; the dynamic video data includes at least the user's expression data, the user's action data, and background image data, and the audio data includes at least the user's voice data. The step of judging whether the video call data stream transmitted by the network side is complete includes:
judging whether the dynamic video data and the audio data in the video call data stream transmitted by the network side are complete.
Optionally, the step of judging whether the dynamic video data is complete includes:
within a unit time, obtaining the total number of image frames and the number of dropped frames in the dynamic video data;
judging whether the ratio of the number of dropped frames to the total number of frames exceeds a preset value; and
when the ratio exceeds the preset value, determining that the dynamic video data is incomplete.
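The completeness test above reduces to a ratio check. A minimal sketch, assuming a preset value of 0.1 (the patent leaves the threshold unspecified):

```python
def dynamic_video_incomplete(total_frames: int, dropped_frames: int,
                             preset_value: float = 0.1) -> bool:
    """Within the unit time, judge the dynamic video data incomplete when
    the dropped-frame / total-frame ratio exceeds the preset value."""
    if total_frames == 0:
        return True  # no frames arrived in the unit time at all
    return dropped_frames / total_frames > preset_value
```

For example, `dynamic_video_incomplete(100, 15)` judges the stream incomplete (ratio 0.15 > 0.1), while `dynamic_video_incomplete(100, 5)` does not.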
Optionally, the step of filling the current video call data into the video call data stream includes:
determining whether the resolution of the current video call data is the same as that of the video call data stream; and
if the resolution of the current video call data is the same as that of the video call data stream, filling the current video call data into the video call data stream.
Optionally, the step of filling the current video call data into the video call data stream further includes:
detecting the start and end times at which the video call data stream is incomplete, to determine the incomplete period of the video call data stream; and
determining the filling positions according to the incomplete period of the video call data stream, and inserting the current video call data into the corresponding positions in the video call data stream based on a preset filling algorithm.
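One way to realize this step, under stated assumptions — frames carry timestamps, an incomplete period is any inter-frame interval more than 1.5× the nominal one, and the "preset filling algorithm" is a simple time-ordered merge; none of this is specified in the patent:

```python
def find_incomplete_periods(timestamps, frame_interval, tolerance=1.5):
    """Detect the start and end times of each incomplete period: any jump
    between consecutive frame timestamps larger than tolerance * interval."""
    return [(prev, cur)
            for prev, cur in zip(timestamps, timestamps[1:])
            if cur - prev > tolerance * frame_interval]

def fill_stream(stream, local_frames, periods):
    """Insert the locally captured frames whose timestamps fall inside an
    incomplete period, then restore time order. Both streams are lists of
    (timestamp, frame) pairs; matching resolutions are assumed."""
    patches = [(t, f) for t, f in local_frames
               if any(start < t < end for start, end in periods)]
    return sorted(stream + patches)
```

For a 25 fps stream whose frames arrived at t = 0.0, 0.04, 0.20, 0.24, `find_incomplete_periods` reports the period (0.04, 0.20), and local frames captured at 0.08, 0.12, and 0.16 are merged into that gap.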
Optionally, the video completion method further includes:
decoding the filled video call data stream and outputting it to the video call interface.
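The final step is then just a decode-and-render loop over the filled stream; a trivial sketch with hypothetical decoder and display callbacks:

```python
def output_to_call_interface(filled_stream, decode, display):
    """Decode each unit of the filled video call data stream and hand the
    decoded frames to the video call interface for display."""
    for packet in filled_stream:
        display(decode(packet))
```

For example, collecting into a list instead of a real display surface shows the decoded frames in order.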
This application also provides a terminal, which includes:
a memory, a processor, and a computer program stored on the memory and runnable on the processor;
when the computer program is executed by the processor, the steps of the above video completion method are implemented.
This application also provides a computer-readable storage medium holding one or more programs; the one or more programs are executed by one or more processors to implement the above video completion method.
With the video completion method, terminal, and computer-readable storage medium provided by this application, when a video call is in progress, a mobile terminal and a wearable device are controlled to capture video data of the current environment; the video data is analyzed according to a preset algorithm to extract the video user's current video call data; the video call data stream transmitted by the network side is obtained and judged for completeness; and if the stream transmitted by the network side is incomplete, the current video call data is filled into it so as to complete it. That is, by capturing video data of the current environment and extracting the video user's current video call data, and by filling that data into the network-side video call data stream whenever the stream turns out to be incomplete, the occurrence of mosaic artifacts, green frames, stuttering, and similar phenomena in the video can be reduced, improving the user experience.
The above is merely an overview of the technical solution of the present invention. To make the technical means of the present invention clearer so that it can be implemented according to the contents of the specification, and to make the above and other objects, features, and advantages of the present invention more readily understandable, specific embodiments of the present invention are set out below.
Detailed description of the invention
The drawings here are incorporated into and form part of this specification; they show embodiments consistent with the present invention and, together with the specification, serve to explain the principles of the invention.
To explain the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, those of ordinary skill in the art can obtain other drawings from these drawings without any creative effort.
Fig. 1 is a hardware structural diagram of an embodiment of a wearable device provided by an embodiment of the present invention;
Fig. 2 is a hardware schematic diagram of an embodiment of a wearable device provided by an embodiment of the present application;
Fig. 3 is a hardware schematic diagram of an embodiment of a wearable device provided by an embodiment of the present application;
Fig. 4 is a hardware schematic diagram of an embodiment of a wearable device provided by an embodiment of the present application;
Fig. 5 is a hardware schematic diagram of an embodiment of a wearable device provided by an embodiment of the present application;
Fig. 6 is a flowchart of the video completion method provided by an embodiment of the present application;
Fig. 7 is a structural schematic diagram of the terminal provided by an embodiment of the present application.
The realization of the object of the application, its functional characteristics, and its advantages will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
It should be understood that the specific embodiments described here are merely intended to illustrate the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements serve only to facilitate the explanation of the present invention and have no specific meaning in themselves. Therefore, "module", "component", and "unit" may be used interchangeably.
The wearable device provided in the embodiments of the present invention includes mobile terminals such as smart bracelets, smart watches, and smart phones. With the continuous development of screen technology and the appearance of screen forms such as flexible and foldable screens, mobile terminals such as smart phones can also serve as wearable devices. The wearable device provided in the embodiments of the present invention may include components such as an RF (Radio Frequency) unit, a WiFi module, an audio output unit, an A/V (audio/video) input unit, sensors, a display unit, a user input unit, an interface unit, a memory, a processor, and a power supply.
The following description takes a wearable device as an example. Referring to Fig. 1, a hardware structural diagram of a wearable device for realizing the embodiments of the present invention, the wearable device 100 may include components such as an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will understand that the wearable device structure shown in Fig. 1 does not constitute a limitation on the wearable device; the wearable device may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
Each component of the wearable device is introduced below with reference to Fig. 1:
The radio frequency unit 101 can be used for sending and receiving signals during messaging or a call. Specifically, the radio frequency unit 101 can send uplink information to a base station and deliver downlink information received from the base station to the processor 110 of the wearable device for processing. The downlink information sent by the base station may be generated in response to uplink information sent by the radio frequency unit 101, or may be actively pushed to the radio frequency unit 101 after an update of information concerning the wearable device is detected. For example, after detecting that the geographical location of the wearable device has changed, the base station may send a message notification of the location change to the radio frequency unit 101 of the wearable device; after receiving the message notification, the radio frequency unit 101 can send it to the processor 110 of the wearable device for processing, and the processor 110 can control the message notification to be displayed on the display panel 1061 of the wearable device. In general, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with the network and other devices via wireless communication, which may specifically include communicating with a server in a network system. For example, the wearable device can download file resources, such as application programs, from the server via wireless communication; after the wearable device has finished downloading an application program, if the file resources corresponding to that application program on the server are updated, the server can push a resource-update message notification to the wearable device via wireless communication to remind the user to update the application program. The above wireless communication can use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), TDD-LTE (Time Division Duplexing-Long Term Evolution), and so on.
In one embodiment, the wearable device 100 can access an existing communication network by inserting a SIM card.
In another embodiment, the wearable device 100 can access an existing communication network through a provisioned eSIM (Embedded-SIM) card; using an eSIM card saves internal space in the wearable device and reduces its thickness.
It is understandable that although Fig. 1 shows the radio frequency unit 101, it is not an essential component of the wearable device and can be omitted entirely as needed without changing the essence of the invention. The wearable device 100 can instead realize communication connections with other devices or a communication network solely through the WiFi module 102; the embodiments of the present invention are not limited in this respect.
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the wearable device can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband internet access. Although Fig. 1 shows the WiFi module 102, it is understandable that it is not an essential component of the wearable device and can be omitted entirely as needed without changing the essence of the invention.
The audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound when the wearable device 100 is in a call-signal reception mode, call mode, recording mode, speech recognition mode, broadcast reception mode, or the like. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the wearable device 100 (for example, a call-signal reception sound or a message reception sound). The audio output unit 103 may include a loudspeaker, a buzzer, and the like.
The A/V input unit 104 is used for receiving audio or video signals. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames can be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operational modes such as a telephone call mode, recording mode, or speech recognition mode, and can process such sound into audio data. In the telephone call mode, the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 can implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated while sending and receiving audio signals.
In one embodiment, the wearable device 100 includes one or more cameras; by turning on a camera, image capture can be realized, providing functions such as taking photos and recording video. The position of a camera can be configured as needed.
The wearable device 100 further includes at least one sensor 105, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the ambient light level, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the wearable device 100 is moved to the ear. As a kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes) and, when static, can detect the magnitude and direction of gravity; it can be used for applications that recognize the device's posture (such as landscape/portrait switching, related games, and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tap detection).
In one embodiment, the wearable device 100 further includes a proximity sensor, through which contactless operation of the wearable device can be realized, providing additional modes of operation.
In one embodiment, the wearable device 100 further includes a heart rate sensor which, by being close to the wearer when worn, can realize heart rate detection.
In one embodiment, the wearable device 100 may also include a fingerprint sensor; by reading a fingerprint, functions such as security verification can be realized.
The display unit 106 is used for displaying information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
In one embodiment, the display panel 1061 is a flexible display screen; when a wearable device using a flexible display screen is worn, the screen can bend to fit more closely. Optionally, the flexible display screen may use an OLED screen body or a graphene screen body; in other embodiments, the flexible display screen may also use other display materials, and this embodiment is not limited in this respect.
In one embodiment, the display panel 1061 of the wearable device 100 can be rectangular so as to wrap around conveniently when worn. In other embodiments, other shapes can also be used.
The user input unit 107 can be used for receiving input numeric or character information and for generating key-signal input related to the user settings and function control of the wearable device. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects the user's touch operations on or near it (for example, the user's operations on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connected apparatus according to a preset program. The touch panel 1071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 can be realized in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, and a joystick, without limitation here.
In one embodiment, one or more buttons can be provided on the side of the wearable device 100. A button can be operated in various ways, such as short press, long press, and rotation, realizing a variety of operating effects. Multiple buttons can be provided and used in combination to realize a variety of operating functions.
Further, the touch panel 1071 can cover the display panel 1061. After detecting a touch operation on or near it, the touch panel 1071 transmits it to the processor 110 to determine the type of the touch event, and the processor 110 then provides corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 realize the input and output functions of the wearable device as two independent components, in certain embodiments the touch panel 1071 and the display panel 1061 can be integrated to realize the input and output functions of the wearable device, without limitation here. For example, when receiving a message notification of a certain application program through the radio frequency unit 101, the processor 110 can control the message notification to be displayed in a certain preset area of the display panel 1061 that corresponds to a certain area of the touch panel 1071; the message notification displayed in the corresponding area of the display panel 1061 can then be controlled by performing a touch operation on that area of the touch panel 1071.
The interface unit 108 serves as an interface through which at least one external device can be connected to the wearable device 100. For example, the external device may include a wired or wireless headphone port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The interface unit 108 can be used to receive input from an external device (for example, data information or electric power) and transfer the received input to one or more elements within the wearable device 100, or to transmit data between the wearable device 100 and an external device.
In one embodiment, the interface unit 108 of the wearable device 100 uses a contact structure that connects to corresponding other devices through the contacts, realizing functions such as charging and data connection. The contacts can also be waterproof.
The memory 109 can be used for storing software programs and various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area can store the operating system, application programs required for at least one function (such as a sound playback function or an image playback function), and so on, and the data storage area can store data created according to the use of the device (such as audio data and a phone book). In addition, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage component.
Processor 110 is the control centre of wearable device, utilizes various interfaces and the entire wearable device of connection Various pieces, by running or execute the software program and/or module that are stored in memory 109, and call and be stored in Data in memory 109 execute the various functions and processing data of wearable device, to carry out to wearable device whole Monitoring.Processor 110 may include one or more processing units;Preferably, processor 110 can integrate application processor and modulation Demodulation processor, wherein the main processing operation system of application processor, user interface and application program etc., modulation /demodulation processing Device mainly handles wireless communication.It is understood that above-mentioned modem processor can not also be integrated into processor 110.
Wearable device 100 can also include the power supply 111 (such as battery) powered to all parts, it is preferred that power supply 111 can be logically contiguous by power-supply management system and processor 110, thus charged by power-supply management system realization management, The functions such as electric discharge and power managed.
Although not shown in Fig. 1, the wearable device 100 may also include a Bluetooth module and the like, which will not be described in detail here. Through Bluetooth, the wearable device 100 can connect with other terminal devices to communicate and exchange information.
Please refer to Figs. 2-4, which are structural schematic diagrams of an embodiment of the wearable device 100 provided by an embodiment of the present invention. The wearable device 100 in the embodiment of the present invention includes a flexible screen. When the wearable device is unfolded, the flexible screen is in an elongated strip shape; when the wearable device 100 is in a worn state, the flexible screen is bent into a ring shape. Figs. 2 and 3 show structural schematic diagrams of the wearable device 100 with the screen unfolded, and Fig. 4 shows a structural schematic diagram of the wearable device 100 with the screen bent.
Based on the above embodiments, it can be seen that if the device is a watch, a bracelet, or another wearable device, the screen of the device may or may not cover the watchband region of the device. Here, the present application proposes an optional embodiment. In this embodiment, the device may be a watch, a bracelet, or a wearable device, and the device includes a screen and a connecting part. The screen may be a flexible screen, and the connecting part may be a watchband. Optionally, the screen of the device, or the display area of the screen, may be partly or completely covered on the watchband of the device. As shown in Fig. 5, which is a hardware schematic diagram of an embodiment of the wearable device 100 provided by the embodiments of the present application, the screen of the device extends to both sides and is partly covered on the watchband of the device. In other embodiments, the screen of the device may be completely covered on the watchband of the device; the embodiments of the present application are not limited in this respect.
Fig. 6 is a flowchart of an embodiment of the video complementing method provided by the present application. Once the method of this embodiment is triggered by a user, the process in this embodiment runs automatically through the mobile terminal and/or the wearable device, where the steps may be performed in turn in the order shown in the flowchart, or several steps may be performed simultaneously according to the actual situation, which is not limited here. The video complementing method provided by the present application includes the following steps:
Step S310: when a video call is in progress, control the mobile terminal and the wearable device 100 to capture video data of the current environment;
Step S330: analyze the video data according to a preset algorithm to extract the current video call data of the video user;
Step S350: obtain the video call data stream transmitted from the network side and judge whether it is complete;
Step S370: if the video call data stream transmitted from the network side is incomplete, fill the current video call data into the incomplete video call data stream, so as to complement the video call data stream.
Through the above embodiment, the video data of the current environment can be captured by the mobile terminal and the wearable device 100, the video data can be analyzed according to the preset algorithm to extract the current video call data of the video user, and, when the video call data stream transmitted from the network side is incomplete, the current video call data can be filled into the incomplete video call data stream so as to complement it. This can reduce the occurrence of phenomena such as mosaic artifacts, green screens, and stuttering in video segments, and improve the user experience.
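The four steps above can be sketched as a minimal processing loop. Everything in this sketch is an illustrative assumption rather than the patent's implementation: frames are plain strings, missing positions in the network stream are marked `None`, and the helper names are invented for demonstration.

```python
# Minimal sketch of the completion pipeline (Steps S310-S370).
# Frames are plain strings; a None slot marks an incomplete position.

def extract_user_frames(captured, user_tag):
    """S330: keep only the frames that carry the video user's signature."""
    return [f for f in captured if f.startswith(user_tag)]

def complement_stream(network_stream, local_frames):
    """S350/S370: replace missing (None) positions with locally
    captured frames of the same user."""
    filled, i = [], 0
    for frame in network_stream:
        if frame is None and i < len(local_frames):  # incomplete slot
            filled.append(local_frames[i])
            i += 1
        else:
            filled.append(frame)
    return filled

if __name__ == "__main__":
    captured = ["user:a", "bg:tree", "user:b", "bg:sky"]  # S310 capture
    local = extract_user_frames(captured, "user:")
    network = ["user:x", None, "user:y", None]            # gaps from network
    print(complement_stream(network, local))
    # -> ['user:x', 'user:a', 'user:y', 'user:b']
```

The design choice to fill only `None` slots keeps every intact network frame untouched, which matches the embodiment's goal of complementing rather than replacing the transmitted stream.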
The above steps are described in detail below with reference to specific embodiments.
In step S310, when a video call is in progress, the mobile terminal and the wearable device 100 are controlled to capture the video data of the current environment.
Specifically, the video data may include dynamic video data and audio data. The dynamic video data includes at least expression data of the user, action data of the user, and background image data; that is, the dynamic video data can be used to show the user's movements, expressions, and so on. The audio data includes at least the voice data of the user; that is, the audio data can be used to convey the user's voice.
In this embodiment, when it is detected that a video call is in progress, the mobile terminal and the wearable device 100 can be controlled to capture the video data of the current environment. That is, the video data of the current environment captured by the mobile terminal and the wearable device 100 may include, but is not limited to, the video call data of the video user. For example, the video data of the current environment may be the video image data and audio data of the user within the shooting area. The mobile terminal may control its own camera and the camera of the wearable device 100 to perform panoramic shooting so as to capture the dynamic video data (video image data) of the current environment, and the microphone of the mobile terminal and the microphone of the wearable device 100 may capture the audio data. It can be understood that the mobile terminal and the wearable device 100 may have a signal connection for data transmission.
In step S330, the video data is analyzed according to the preset algorithm to extract the current video call data of the video user.
Specifically, the preset algorithm may be a feature matching method. In this embodiment, the step of analyzing the video data according to the preset algorithm to extract the current video call data of the video user includes:
Step S3301: obtain feature information of the video user, and analyze the video data according to the preset algorithm to extract the video data matching the feature information, where the extracted video data is the current video call data of the video user.
Specifically, the feature information of the video user may be facial feature information and/or iris feature information of the video user. As human biometric features, the face and the iris are unique, which determines the uniqueness of identity; that is, everyone's facial features or iris features are different, so different people can be distinguished. A facial image and/or iris information of the video user may be preset. For example, by obtaining the facial feature information and/or iris feature information of the video user, the video data is analyzed according to the feature matching method to extract the video data matching the facial feature information and/or the iris feature information; the extracted video data is the current video call data of the video user.
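A hedged sketch of the feature matching in Step S3301: in practice the per-frame feature vectors would come from a face or iris recognition model, but here they are hand-made lists of floats, and the cosine-similarity threshold is an illustrative choice, not something the patent specifies.

```python
# Select the frames whose feature vector matches a preset user template.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def match_user_frames(frame_features, template, threshold=0.9):
    """Return the indices of frames matching the preset template
    (the extracted frames form the user's current video call data)."""
    return [i for i, feat in enumerate(frame_features)
            if cosine(feat, template) >= threshold]

if __name__ == "__main__":
    template = [1.0, 0.0, 0.5]       # preset facial feature vector
    frames = [
        [0.9, 0.1, 0.5],             # the user: high similarity
        [0.0, 1.0, 0.0],             # someone else: low similarity
        [1.0, 0.0, 0.4],             # the user again
    ]
    print(match_user_frames(frames, template))  # -> [0, 2]
```

Thresholded cosine similarity is only one possible matcher; the embodiment leaves the concrete feature matching method open.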
In this embodiment, after the step of analyzing the video data according to the preset algorithm to extract the current video call data of the video user, the method further includes:
Step S3302: send the current video call data of the video user to the wearable device 100, so that the wearable device 100 stores the current video call data; or
Step S3303: send the current video call data of the video user to the mobile terminal, so that the mobile terminal stores the current video call data.
Specifically, the user can choose to conduct a video chat through the wearable device 100 or through the mobile terminal.
In this embodiment, when the user chooses to conduct a video chat through the wearable device 100, the mobile terminal can analyze the video data according to the preset algorithm to extract the current video call data of the video user and send it to the wearable device 100 via Bluetooth or other means; the wearable device 100 receives and stores the current video call data. Thus, when the subsequent video call data stream turns out to be incomplete, that is, when mosaic artifacts, green screens, or stuttering appear in the video chat, the current video call data is filled into the incomplete video call data stream, improving the user experience.
In this embodiment, when the user chooses to conduct a video chat through the mobile terminal, the mobile terminal can analyze the video data according to the preset algorithm to extract the current video call data of the video user and save it, so that when the subsequent video call data stream turns out to be incomplete, that is, when mosaic artifacts, green screens, or stuttering appear in the video chat, the current video call data is filled into the incomplete video call data stream, improving the user experience. In other embodiments, the wearable device 100 may instead analyze the video data according to the preset algorithm to extract the current video call data of the video user and send it to the mobile terminal via Bluetooth or other means; the mobile terminal receives and stores the current video call data, so that when the subsequent video call data stream turns out to be incomplete, the current video call data is filled into the incomplete video call data stream, improving the user experience.
In step S350, the video call data stream transmitted from the network side is obtained, and whether it is complete is judged.
Specifically, the video call data includes the dynamic video data and the audio data. The video call data stream transmitted from the network side is obtained, and whether the video call data stream is complete is judged.
In this embodiment, the step of judging whether the video call data stream transmitted from the network side is complete includes:
Step S3501: judge whether the dynamic video data and the audio data in the video call data stream transmitted from the network side are complete.
Specifically, whether the audio data is complete may be judged by judging whether the sound is continuous.
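The embodiment does not define the continuity test, so the following is only one plausible reading: treat the sound as discontinuous (and the audio data as incomplete) when an unexpectedly long run of near-silent samples appears. The amplitude representation and both thresholds are illustrative assumptions.

```python
# Judge audio completeness by sound continuity: a run of near-silent
# samples longer than a tolerance is treated as a gap in the sound.

def audio_is_complete(samples, silence=0.01, max_gap=3):
    """samples: absolute amplitude per audio frame (floats in [0, 1])."""
    run = 0
    for s in samples:
        run = run + 1 if s < silence else 0
        if run > max_gap:           # gap too long: sound not continuous
            return False
    return True

if __name__ == "__main__":
    print(audio_is_complete([0.5, 0.4, 0.0, 0.0, 0.6]))        # -> True
    print(audio_is_complete([0.5, 0.0, 0.0, 0.0, 0.0, 0.6]))   # -> False
```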
In this embodiment, the step of judging whether the dynamic video data is complete includes:
Step S35011: obtain, within a unit time, the total number of image frames in the dynamic video data and the number of dropped frames;
Step S35012: judge whether the ratio of the number of dropped frames to the total number of frames exceeds a preset ratio; and
Step S35013: when the ratio exceeds the preset ratio, determine that the dynamic video data is incomplete.
Specifically, the incompleteness of the dynamic video data may manifest as mosaic artifacts, green screens, or stuttering in the video chat. Take stuttering as an example of the dynamic video data being incomplete: if, during rendering, a continuously changing animation drops from 20 FPS to 15 FPS, then the 20 frames that should have been refreshed within one second become 15, so the number of dropped frames is 5. If the preset ratio treats 4 or more dropped frames per second as stuttering, then stuttering occurred within that second. If those 5 drops occur at consecutive moments, there is nearly 1/6 of a second of visible stutter; if instead the 5 drops are dispersed across the one-second interval, the picture may not appear to stutter at all, because human persistence of vision smooths over gaps within a 1/15-second interval. Therefore the length of the unit time can be chosen according to actual needs; in theory, the smaller the unit time, the more accurate the stutter judgment.
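Steps S35011-S35013 reduce to a single ratio test, sketched below. The expected frame rate and the preset ratio are illustrative parameters; the patent's own example (20 FPS dropping to 15 FPS, with 4 dropped frames per second as the threshold) is used as the demonstration.

```python
# Decide whether the dynamic video data is incomplete from the
# dropped-frame ratio within one unit of time (here, one second).

def is_incomplete(received_frames, expected_fps, preset_ratio=0.2):
    """received_frames: frames actually refreshed in the unit time.
    Dropped frames = expected - received; the data is incomplete
    when dropped/total exceeds the preset ratio."""
    dropped = max(expected_fps - received_frames, 0)
    return dropped / expected_fps > preset_ratio

if __name__ == "__main__":
    # The patent's example: 20 FPS drops to 15 FPS, so 5 of 20
    # frames are dropped, a ratio of 0.25 > 0.2 (i.e. >= 4 frames/s).
    print(is_incomplete(15, 20))  # -> True
    print(is_incomplete(19, 20))  # -> False (ratio 0.05)
```

As the text notes, shrinking the unit time sharpens the judgment; the same function works over any window if `expected_fps` is scaled to that window.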
In step S370, if the video call data stream transmitted from the network side is incomplete, the current video call data is filled into the video call data stream, so as to complement the video call data stream.
In this embodiment, the step of filling the current video call data into the video call data stream includes:
Step S3701: determine whether the resolution of the current video call data is the same as that of the video call data stream;
Step S3702: if the resolution of the current video call data is the same as that of the video call data stream, fill the current video call data into the video call data stream.
Specifically, if the video call data stream transmitted from the network side is incomplete, it is first determined whether the resolution of the current video call data is the same as that of the video call data stream. If the two resolutions are the same, the current video call data is filled into the video call data stream, so as to complement the video call data stream. It can be understood that, when the resolutions of the current video call data and the video call data stream differ, the resolution of the current video call data and/or of the video call data stream can be adjusted, for example by interpolation or compression, so that the two resolutions become the same, which makes the filled video call data stream more natural and realistic, with a better effect.
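The resolution check of Steps S3701-S3702, together with the adjustment the paragraph mentions, can be sketched as follows. Representing frames as 2D lists of pixel values and using nearest-neighbour resampling for the "interpolation or compression" are assumptions for illustration; a production system would use a proper image-scaling routine.

```python
# Bring a local frame to the stream's resolution before filling.

def resize_nearest(frame, out_h, out_w):
    """Nearest-neighbour resample of a 2D pixel grid."""
    in_h, in_w = len(frame), len(frame[0])
    return [[frame[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)] for r in range(out_h)]

def fit_to_stream(frame, stream_h, stream_w):
    """S3701: return the frame unchanged if resolutions match;
    otherwise interpolate/compress it to the stream's resolution."""
    if (len(frame), len(frame[0])) == (stream_h, stream_w):
        return frame
    return resize_nearest(frame, stream_h, stream_w)

if __name__ == "__main__":
    local = [[1, 2], [3, 4]]              # 2x2 local frame
    fitted = fit_to_stream(local, 4, 4)   # stream resolution is 4x4
    print(fitted)
    # -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```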
In this embodiment, the step of filling the current video call data into the video call data stream further includes:
Step S3703: detect the start time and the end time of the incompleteness of the video call data stream, to determine the period during which the video call data stream is incomplete;
Step S3704: determine a filling position according to the incomplete period of the video call data stream, and insert the current video call data into the corresponding position in the video call data stream based on a preset filling algorithm.
Specifically, the incomplete period of the video call data stream may be the time span between the start time and the end time of the incompleteness, or the time span between the start time of the incompleteness and the current moment, where the current moment may be any moment within the incomplete period. The filling position is the position of the incomplete period relative to the video call data stream. The preset filling algorithm may be an edit filling algorithm. It can be understood that the edit filling algorithm inserts the current video call data into the corresponding position in the video call data stream so that the overall value is maximized. The overall value may be the quality of the edited video, for example the degree of matching between the current video call data and the video call data stream, the continuity between video segments, and so on.
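A sketch of Steps S3703-S3704: locate the incomplete period from per-frame timestamps, then splice locally captured frames into that window at the corresponding position. Millisecond timestamps, the `(timestamp, frame)` pairs, and the simple "patch the earliest gap" policy are all illustrative assumptions; the patent's edit filling algorithm, which maximizes overall value, is not specified in enough detail to implement.

```python
# Detect the incomplete period and fill it with local frames.

def find_gap(timestamps, interval):
    """S3703: return (start, end) of the first period where consecutive
    frame timestamps are further apart than the expected interval."""
    for a, b in zip(timestamps, timestamps[1:]):
        if b - a > interval:
            return a + interval, b
    return None

def fill_gap(stream, local_frames, interval):
    """S3704: stream is a list of (timestamp_ms, frame); insert local
    frames at the filling position determined by the incomplete period."""
    times = [t for t, _ in stream]
    gap = find_gap(times, interval)
    if gap is None:
        return stream                      # stream already complete
    start, end = gap
    n_missing = round((end - start) / interval)
    patch = [(start + k * interval, local_frames[k % len(local_frames)])
             for k in range(n_missing)]
    pos = times.index(end)
    return stream[:pos] + patch + stream[pos:]

if __name__ == "__main__":
    # 10 FPS stream (100 ms interval); frames at 200 ms and 300 ms lost.
    stream = [(0, "n0"), (100, "n1"), (400, "n4")]
    print(fill_gap(stream, ["l0", "l1"], 100))
    # -> [(0, 'n0'), (100, 'n1'), (200, 'l0'), (300, 'l1'), (400, 'n4')]
```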
In this embodiment, if the video call data stream transmitted from the network side is incomplete, the start time and the end time of the incompleteness are detected, the incomplete period of the video call data stream is determined according to the start time and the end time, a filling position is determined according to the period, and finally the complete current video call data is inserted into the corresponding position in the video call data stream based on the preset filling algorithm. It can be understood that, if the video call data stream transmitted from the network side is complete, step S350 is repeated.
In this embodiment, the video complementing method further includes:
Step S380: decode the filled video call data stream and output it to the video call interface.
Specifically, because the dynamic video data and the audio data in the filled video call data stream form a complete video picture and complete audio information, decoding the filled video call data stream and outputting it to the video call interface can reduce the occurrence of phenomena such as mosaic artifacts, green screens, and stuttering on the video call interface, improving the user experience.
Through the above embodiment, when a video call is in progress, the mobile terminal and the wearable device 100 are controlled to capture the video data of the current environment; the video data is analyzed according to the preset algorithm to extract the current video call data of the video user; the video call data stream transmitted from the network side is obtained and its completeness is judged; and if the video call data stream transmitted from the network side is incomplete, the current video call data is filled into the video call data stream so as to complement it. That is, by capturing the video data of the current environment, extracting the current video call data of the video user, and, when the video call data stream transmitted from the network side is incomplete, filling the current video call data into the incomplete stream so as to complement it, the occurrence of phenomena such as mosaic artifacts, green screens, and stuttering in video segments can be reduced, and the user experience is improved.
Fig. 7 is a schematic structural diagram of a terminal 200 provided by the embodiments of the present application. The terminal 200 includes a memory 109, a processor 110, and a computer program stored on the memory 109 and runnable on the processor; when executed by the processor 110, the computer program implements the following steps:
When a video call is in progress, control the mobile terminal and the wearable device to capture the video data of the current environment; analyze the video data according to a preset algorithm to extract the current video call data of the video user; obtain the video call data stream transmitted from the network side and judge whether it is complete; and if the video call data stream transmitted from the network side is incomplete, fill the current video call data into the video call data stream, so as to complement the video call data stream.
Optionally, the step of analyzing the video data according to the preset algorithm to extract the current video call data of the video user includes: obtaining feature information of the video user, and analyzing the video data according to the preset algorithm to extract the video data matching the feature information, where the extracted video data is the current video call data of the video user.
Optionally, after the step of analyzing the video data according to the preset algorithm to extract the current video call data of the video user, the steps further include: sending the current video call data of the video user to the wearable device, so that the wearable device stores the current video call data; or sending the current video call data of the video user to the mobile terminal, so that the mobile terminal stores the current video call data.
Optionally, the video call data includes dynamic video data and audio data, the dynamic video data includes at least expression data of the user, action data of the user, and background image data, and the audio data includes at least voice data of the user; the step of judging whether the video call data stream transmitted from the network side is complete includes: judging whether the dynamic video data and the audio data in the video call data stream transmitted from the network side are complete.
Optionally, the step of judging whether the dynamic video data is complete includes: obtaining, within a unit time, the total number of image frames in the dynamic video data and the number of dropped frames; judging whether the ratio of the number of dropped frames to the total number of frames exceeds a preset value; and when the ratio exceeds the preset value, determining that the dynamic video data is incomplete.
Optionally, the step of filling the current video call data into the video call data stream includes: determining whether the resolution of the current video call data is the same as that of the video call data stream; and if the resolution of the current video call data is the same as that of the video call data stream, filling the current video call data into the video call data stream.
Optionally, the step of filling the current video call data into the video call data stream further includes: detecting the start time and the end time of the incompleteness of the video call data stream to determine the period during which the video call data stream is incomplete; and determining a filling position according to the incomplete period, and inserting the current video call data into the corresponding position in the video call data stream based on a preset filling algorithm.
Optionally, the following step is also implemented: decoding the filled video call data stream and outputting it to the video call interface.
Through the above terminal 200, when a video call is in progress, the mobile terminal and the wearable device are controlled to capture the video data of the current environment; the video data is analyzed according to the preset algorithm to extract the current video call data of the video user; the video call data stream transmitted from the network side is obtained and its completeness is judged; and if the video call data stream transmitted from the network side is incomplete, the current video call data is filled into the video call data stream so as to complement it. That is, by capturing the video data of the current environment, extracting the current video call data of the video user, and filling the current video call data into the incomplete video call data stream when the stream transmitted from the network side is incomplete, the occurrence of phenomena such as mosaic artifacts, green screens, and stuttering in video segments can be reduced, and the user experience is improved.
The embodiments of the present application also provide a computer-readable storage medium. The computer-readable storage medium has one or more programs, and the one or more programs are executed by one or more processors to implement the following steps:
When a video call is in progress, control the mobile terminal and the wearable device to capture the video data of the current environment; analyze the video data according to a preset algorithm to extract the current video call data of the video user; obtain the video call data stream transmitted from the network side and judge whether it is complete; and if the video call data stream transmitted from the network side is incomplete, fill the current video call data into the video call data stream, so as to complement the video call data stream.
Optionally, the step of analyzing the video data according to the preset algorithm to extract the current video call data of the video user includes: obtaining feature information of the video user, and analyzing the video data according to the preset algorithm to extract the video data matching the feature information, where the extracted video data is the current video call data of the video user.
Optionally, after the step of analyzing the video data according to the preset algorithm to extract the current video call data of the video user, the steps further include: sending the current video call data of the video user to the wearable device, so that the wearable device stores the current video call data; or sending the current video call data of the video user to the mobile terminal, so that the mobile terminal stores the current video call data.
Optionally, the video call data includes dynamic video data and audio data, the dynamic video data includes at least expression data of the user, action data of the user, and background image data, and the audio data includes at least voice data of the user; the step of judging whether the video call data stream transmitted from the network side is complete includes: judging whether the dynamic video data and the audio data in the video call data stream transmitted from the network side are complete.
Optionally, the step of judging whether the dynamic video data is complete includes: obtaining, within a unit time, the total number of image frames in the dynamic video data and the number of dropped frames; judging whether the ratio of the number of dropped frames to the total number of frames exceeds a preset value; and when the ratio exceeds the preset value, determining that the dynamic video data is incomplete.
Optionally, the step of filling the current video call data into the video call data stream includes: determining whether the resolution of the current video call data is the same as that of the video call data stream; and if the resolution of the current video call data is the same as that of the video call data stream, filling the current video call data into the video call data stream.
Optionally, the step of filling the current video call data into the video call data stream further includes: detecting the start time and the end time of the incompleteness of the video call data stream to determine the period during which the video call data stream is incomplete; and determining a filling position according to the incomplete period, and inserting the current video call data into the corresponding position in the video call data stream based on a preset filling algorithm.
Optionally, the following step is also implemented: decoding the filled video call data stream and outputting it to the video call interface.
Through the above computer-readable storage medium, when a video call is in progress, the mobile terminal and the wearable device are controlled to capture the video data of the current environment; the video data is analyzed according to the preset algorithm to extract the current video call data of the video user; the video call data stream transmitted from the network side is obtained and its completeness is judged; and if the video call data stream transmitted from the network side is incomplete, the current video call data is filled into the video call data stream so as to complement it. That is, by capturing the video data of the current environment, extracting the current video call data of the video user, and filling the current video call data into the incomplete video call data stream when the stream transmitted from the network side is incomplete, the occurrence of phenomena such as mosaic artifacts, green screens, and stuttering in video segments can be reduced, and the user experience is improved.
The embodiments of the present application also provide a computer-readable storage medium, which here stores one or more programs. The computer-readable storage medium may include a volatile memory, such as a random access memory; the memory may also include a non-volatile memory, such as a read-only memory, a flash memory, a hard disk, or a solid-state disk; and the memory may also include a combination of the above kinds of memory.
The corresponding technical features in the above embodiments may be used with one another, provided that this does not make the solutions contradictory or unimplementable.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element.
The serial numbers of the above embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific implementations. The above specific implementations are only illustrative, not restrictive. Under the enlightenment of the present application, those skilled in the art can make many other forms without departing from the purpose of the present application and the scope of protection of the claims, and all of these fall within the protection of the present application.

Claims (10)

1. A video complementing method, characterized in that the video complementing method includes:
when a video call is in progress, controlling a mobile terminal and a wearable device to capture video data of the current environment;
analyzing the video data according to a preset algorithm to extract current video call data of a video user;
obtaining a video call data stream transmitted from a network side and judging whether the video call data stream is complete; and
if the video call data stream transmitted from the network side is incomplete, filling the current video call data into the video call data stream, so as to complement the video call data stream.
2. The video complementing method according to claim 1, characterized in that the step of analyzing the video data according to a preset algorithm to extract the current video call data of the video user includes:
obtaining feature information of the video user, and analyzing the video data according to the preset algorithm to extract the video data matching the feature information, wherein the extracted video data is the current video call data of the video user.
3. The video complementing method according to claim 1, characterized in that, after the step of analyzing the video data according to a preset algorithm to extract the current video call data of the video user, the method further includes:
sending the current video call data of the video user to the wearable device, so that the wearable device stores the current video call data; or
sending the current video call data of the video user to the mobile terminal, so that the mobile terminal stores the current video call data.
4. The video completion method of claim 1, characterized in that the video call data comprises dynamic video data and audio data, the dynamic video data comprises at least expression data of the user, action data of the user, and background image data, and the audio data comprises at least voice data of the user; the step of judging whether the video call data stream transmitted by the network side is complete comprises:
judging whether the dynamic video data and the audio data in the video call data stream transmitted by the network side are complete.
5. The video completion method of claim 4, characterized in that the step of judging whether the dynamic video data is complete comprises:
within a unit of time, acquiring the total number of image frames and the number of dropped frames in the dynamic video data;
judging whether the ratio of the number of dropped frames to the total number of frames exceeds a preset value; and
when the ratio exceeds the preset value, determining that the dynamic video data is incomplete.
6. The video completion method of claim 1, characterized in that the step of filling the current video call data into the video call data stream comprises:
determining whether the resolution of the current video call data is the same as that of the video call data stream; and
if the resolution of the current video call data is the same as that of the video call data stream, filling the current video call data into the video call data stream.
7. The video completion method of claim 1, characterized in that the step of filling the current video call data into the video call data stream further comprises:
detecting the start and end times at which the video call data stream is incomplete, to determine the incomplete period of the video call data stream; and
determining a filling position according to the incomplete period of the video call data stream, and inserting the current video call data into the corresponding position in the video call data stream based on a preset filling algorithm.
8. The video completion method of claim 1, characterized in that the video completion method further comprises:
decoding the filled video call data stream and outputting it to the video call interface.
9. A terminal, characterized in that the terminal comprises:
a memory, a processor, and a computer program stored on the memory and executable on the processor;
wherein the computer program, when executed by the processor, implements the steps of the video completion method of any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs, and the one or more programs are executed by one or more processors to implement the video completion method of any one of claims 1 to 8.
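Read together, claims 5 to 7 describe a simple procedure: judge the stream incomplete when the dropped-frame ratio within a unit of time exceeds a preset value, then fill the locally captured call data into the incomplete period, provided the resolutions match. A minimal Python sketch of that procedure follows; the function names, the dictionary layout, and the 0.1 threshold are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of the completeness check and filling step in
# claims 5-7. All names and the default threshold are assumptions.

def is_stream_incomplete(total_frames, dropped_frames, preset_ratio=0.1):
    """Claim 5: within a unit of time, the dynamic video data is judged
    incomplete when dropped_frames / total_frames exceeds a preset value."""
    if total_frames == 0:
        return True  # nothing received in the unit of time
    return dropped_frames / total_frames > preset_ratio

def fill_stream(stream, local_data, incomplete_start, incomplete_end):
    """Claims 6-7: fill locally captured call data into the network-side
    stream, only when resolutions match (claim 6), at the position given
    by the incomplete period (claim 7; times map to frame indices here)."""
    if local_data["resolution"] != stream["resolution"]:
        return stream  # resolutions differ: do not fill
    # the filling position is derived from the incomplete period
    stream["frames"][incomplete_start:incomplete_end] = \
        local_data["frames"][incomplete_start:incomplete_end]
    return stream
```

For example, with 100 frames per unit of time and 20 dropped frames, the ratio 0.2 exceeds the 0.1 threshold, so the stream is judged incomplete and the matching-resolution local frames are spliced into the incomplete period.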
CN201910356614.6A 2019-04-29 2019-04-29 Video completion method, terminal and computer-readable storage medium Active CN110087014B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910356614.6A CN110087014B (en) 2019-04-29 2019-04-29 Video completion method, terminal and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910356614.6A CN110087014B (en) 2019-04-29 2019-04-29 Video completion method, terminal and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN110087014A true CN110087014A (en) 2019-08-02
CN110087014B CN110087014B (en) 2022-04-19

Family

ID=67417727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910356614.6A Active CN110087014B (en) 2019-04-29 2019-04-29 Video completion method, terminal and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN110087014B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1599453A * 2003-09-17 2005-03-23 联想(北京)有限公司 Method for dynamically regulating video transmission
US20120147122A1 * 2009-08-31 2012-06-14 Zte Corporation Video data receiving and sending systems for videophone and video data processing method thereof
US20140104369A1 * 2012-10-15 2014-04-17 Bank Of America Corporation Functionality during a hold period prior to a customer-service video conference
CN104469244A * 2013-09-13 2015-03-25 联想(北京)有限公司 Network-based video image adjustment method and system
CN107396198A * 2017-07-24 2017-11-24 维沃移动通信有限公司 Video call method and mobile terminal
CN108600683A * 2018-05-09 2018-09-28 珠海格力电器股份有限公司 Call control method and device
CN108881780A * 2018-07-17 2018-11-23 聚好看科技股份有限公司 Method and server for dynamically adjusting definition mode in a video call
US20190037173A1 * 2016-02-02 2019-01-31 Samsung Electronics Co., Ltd. Method and apparatus for providing image service
CN111147792A * 2019-12-05 2020-05-12 商客通尚景科技(上海)股份有限公司 Method and equipment for completing video conference record


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liang Leiqing: "Research on Methods for Improving VoLTE Video Call Quality", Shandong Communication Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111225237A (en) * 2020-04-23 2020-06-02 腾讯科技(深圳)有限公司 Sound and picture matching method of video, related device and storage medium
CN111225237B (en) * 2020-04-23 2020-08-21 腾讯科技(深圳)有限公司 Sound and picture matching method of video, related device and storage medium
US11972778B2 (en) 2020-04-23 2024-04-30 Tencent Technology (Shenzhen) Company Limited Sound-picture matching method of video, related apparatus, and storage medium

Also Published As

Publication number Publication date
CN110087014B (en) 2022-04-19

Similar Documents

Publication Publication Date Title
CN108540724A Shooting method and mobile terminal
CN109660719A Information prompting method and mobile terminal
CN107786811B Photographing method and mobile terminal
CN109544486A Image processing method and terminal device
CN110299100A Display direction adjustment method, wearable device and computer-readable storage medium
CN110109725A Interface color adjustment method and wearable device
CN109461124A Image processing method and terminal device
CN110262849A Application starting method, wearable device and computer-readable storage medium
CN110099218A Interaction control method during shooting, device and computer-readable storage medium
CN108174081B Shooting method and mobile terminal
CN110362368A Customized picture display method for wearable device, related device and storage medium
CN110062279A Video cropping method, wearable device and computer-readable storage medium
CN110069132A Application control method, smart wearable device and computer-readable storage medium
CN109688325A Image display method and terminal device
CN110225282A Video recording control method, device and computer-readable storage medium
CN110187769A Preview image viewing method, device and computer-readable storage medium
CN110012258A Optimal audio-video perception point acquisition method, system, wearable device and storage medium
CN110177209A Video parameter adjustment method, device and computer-readable storage medium
CN109005337A Photographing method and terminal
CN110213442A Voice playing method, terminal and computer-readable storage medium
CN110177208A Associated control method for video recording, device and computer-readable storage medium
CN110072071A Video recording interaction control method, device and computer-readable storage medium
CN110198411A Depth-of-field control method during video shooting, device and computer-readable storage medium
CN110087014A Video completion method, terminal and computer-readable storage medium
CN110113529A Shooting parameter adjustment method, device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant